paper_id
stringlengths 12
48
| title
stringlengths 12
155
| url
stringlengths 39
46
| abstract
stringlengths 389
2.11k
| ocr_markdown
stringlengths 18.1k
576k
|
---|---|---|---|---|
minnema-etal-2023-responsibility | Responsibility Perspective Transfer for {I}talian Femicide News | https://aclanthology.org/2023.findings-acl.501 | Different ways of linguistically expressing the same real-world event can lead to different perceptions of what happened. Previous work has shown that different descriptions of gender-based violence (GBV) influence the reader{'}s perception of who is to blame for the violence, possibly reinforcing stereotypes which see the victim as partly responsible, too. As a contribution to raise awareness on perspective-based writing, and to facilitate access to alternative perspectives, we introduce the novel task of automatically rewriting GBV descriptions as a means to alter the perceived level of blame on the perpetrator. We present a quasi-parallel dataset of sentences with low and high perceived responsibility levels for the perpetrator, and experiment with unsupervised (mBART-based), zero-shot and few-shot (GPT3-based) methods for rewriting sentences. We evaluate our models using a questionnaire study and a suite of automatic metrics. | # Responsibility Perspective Transfer For Italian Femicide News
Gosse Minnemaa∗, Huiyuan Laia∗, Benedetta Muscatob **and Malvina Nissim**a aUniversity of Groningen, The Netherlands bUniversity of Catania, Italy
{g.f.minnema,h.lai,m.nissim}@rug.nl
## Abstract
Different ways of linguistically expressing the same real-world event can lead to different perceptions of what happened. Previous work has shown that different descriptions of genderbased violence (GBV) influence the reader's perception of who is to blame for the violence, possibly reinforcing stereotypes which see the victim as partly responsible, too. As a contribution to raise awareness on perspectivebased writing, and to facilitate access to alternative perspectives, we introduce the novel task of automatically rewriting GBV descriptions as a means to alter the perceived level of responsibility on the perpetrator. We present a quasi-parallel dataset of sentences with low and high perceived responsibility levels for the perpetrator, and experiment with unsupervised (mBART-based), zero-shot and few-shot
(GPT3-based) methods for rewriting sentences.
We evaluate our models using a questionnaire study and a suite of automatic metrics.
## 1 Introduction
"A terrible incident involving husband and wife",
"Husband kills wife", "Her love for him became fatal": these different phrasings can all be used to describe the same violent event, in this case a *femicide*, but they won't trigger the same perceptions in the reader. Perceptions vary from person to person, of course, but also depend substantially and systematically on the different ways the same event is framed (Iyengar, 1994). Especially in the context of gender-based violence (GBV), this has important consequences on how readers will attribute responsibility: victims of femicides are often depicted, and thus perceived, as (co-)responsible for the violence they suffer.1
∗Shared first co-authorship.
1A report on femicides from November 2018 by two Italian research institutes points out that the stereotype of a shared responsibility between the victim and its perpetrator is still widespread among young generations: "56.8% of boys and There is indeed evidence from the linguistic literature (Pinelli and Zanchi, 2021; Meluzzi et al.,
2021) that people perceive responsibility differently according to how femicides are reported
(more blame on the perpetrator in "Husband kills wife", more focus on the victim in "Her love for him became fatal"). In general, linguistic strategies that background perpetrators have been shown to favour victim blaming (Huttenlocher et al., 1968; Henley et al., 1995; Bohner, 2002; Gray and Wegner, 2009; Hart and Fuoli, 2020; Zhou et al., 2021).
This way of reporting contributes to reinforcing such social stereotypes.
If we want social stereotypes to be challenged, the language we use to describe GBV is thus an excellent place to start, also from a Natural Language Processing (NLP) perspective. Recent work has shown that perspectives on femicides and their triggered perceptions can be modelled automatically
(Minnema et al., 2022b,a). In this paper, as shown in Box ??, we explore the challenge of *rewriting* descriptions of GBV with the aim to increase the perceived level of blame on the perpetrator, casting it as a style transfer task (Xu et al., 2012; Jin et al., 2022). In this novel *responsibility perspective transfer* task, a given sentence from femicide news reports gets rewritten in a way that puts more responsibility on the perpetrator, while preserving the original content.
Contributions We create an evaluation set containing semi-aligned pairs with "low" and "high" sentences expressing similar information relative to an event, from an existing dataset of Italian news
(§2.1). In absence of parallel training data, we follow previous work (Lample et al., 2019; Luo et al.,
2019; Lai et al., 2021) to train an unsupervised style transfer model using mBART (Liu et al., 2020) on non-parallel data (with style labels) with a zero38.8% of girls believe that the female is at least partly responsible for the violence she has suffered" (Laboratorio Adolescenza and Istituto IARD, 2018).
shot and a few-shot approach using GPT-3 (Brown et al., 2020) to perform rewriting (§2.2). We run both human-based and automatic evaluations to assess the impact of rewriting on the perceived blame, comparing original and rephrased texts to find that models can achieve detectable perspective shifts
(§3). By introducing the novel task of responsibility perspective transfer, providing an evaluation dataset, a battery of trained models, and evidence of a successful methodology, we hope to foster further research and application developments on this and other perspective rewriting tasks that are relevant to society.2
## 2 Experimental Settings 2.1 Datasets
Our work makes use of the *RAI femicide corpus*
(Belluati, 2021), a dataset containing metadata on 582 confirmed femicide cases and 198 other GBVrelated cases3in Italy between 2012-2017. Of these, 182 cases (comprising 178 femicides and 4 other cases) are linked to a set of 2,734 news articles from the period 2015-2017 that report on these cases. This dataset is augmented with perspective annotations from Minnema et al. (2022a). Gold annotations (averaged z-scored perception values from 240 participants) are available for 400 sentences, and silver annotations (annotated with the best-scoring model from Minnema et al. 2022a)
are available for 7,754 further sentences. Using event metadata, we automatically extracted pairs of sentences ⟨*L, H*⟩, where L and H both reference the same GBV case, but respectively have a below-average (L) or above-average (H) level of perceived perpetrator blame. Next, for a subset of 1,120 sentences from the combined gold-silver perspective dataset, we performed manual filtering to ensure that for pair, L and H reference not only the same *case*, but also show substantial overlap in terms of the specific *events* within this case that they describe (e.g. the violence itself, the police investigation, conviction of a suspect, etc.). This yielded a set of 2,571 pairs (or 304 pairs if each sentence is used only once).
## 2.2 Models
Due to the limited availability of parallel data, we experiment with several existing text generation methods known to work in low-data settings.
Unsupervised mBART We train an unsupervised model with iterative back-translation (Hoang et al., 2018): two mBART-based models, one for each transfer direction, where outputs of one direction with source sentences are used to supervise the model in the opposite direction. All experiments are implemented atop Transformers (Wolf et al., 2020) using mBART-50 (Tang et al., 2021). We use the Adam optimizer with a polynomial learning rate decay, and a linear warmup of 100 steps for a maximum learning rate of 1e-4. We limit the maximum token length to 150. To alleviate computational costs and catastrophic forgetting, we only update the parameters of the decoder, freezing the other parameters.
mBART + meta-information A unique feature of our dataset is the availability of detailed metainformation about the events. We made a selection of the properties likely to be most relevant for characterizing the event and assigning responsibility
(names of the victim and perpetrator, type of victimperpetrator relationship, murder weapon and location) and concatenated this meta-information to the corresponding source sentence as input during training. We tried two order settings: *sourcemeta* and *meta-source*. Preliminary experiments showed that concatenating only the event properties themselves, without including property names, produced the most promising results. For example: "Trapani, Donna di 60 anni uccisa dall'ex marito - Anna Manuguerra, Antonino Madone, ex coniuge, arma da taglio, Nubio, casa" ("Trapani:
60-year old woman killed by ex-husband - [victim name], [perpetrator name], ex-spouse, cutting weapon, [town name], at home"). We use the same training setup as for the previous model.
GPT-3: Naive implementation We also experimented with using the *text-davinci-002* version of GPT-3 (Brown et al., 2020) in a range of zero-shot and few-shot setups. Our *naive-zero* setup uses a simple prompt telling the model to rewrite the sentence with more focus on the perpetrator.4 Next,
| Perspective Model | Source | Target | mBART | GPT-3 | | | | | | |
|---------------------------|----------|----------|---------|----------|----------|---------|--------|--------|--------|--------|
| Dimension | R2 | (avg) | base | src-meta | meta-src | na-zero | na-few | iter-1 | iter-2 | |
| "blames the murderer" | 0.61 | -0.511 | 0.445 | -0.250 | -0.188 | 0.284 | -0.157 | -0.375 | 0.109 | -0.116 |
| "caused by a human" | 0.60 | -0.228 | 0.362 | -0.037 | 0.005 | 0.371 | 0.042 | -0.095 | 0.278 | 0.076 |
| "focuses on the murderer" | 0.65 | -0.518 | 0.597 | -0.184 | -0.108 | 0.567 | 0.033 | -0.349 | 0.179 | -0.104 |
naive-few uses a similarly simple prompt5along with a set of ten low-high sentence pairs randomly sampled from the gold annotations. GPT-3: Iterative few-shot A challenging factor for our naive few-shot approach is that the the 'natural' source-target pairs from our annotated data are not perfect minimal pairs, as they differ in perspective but also have some content differences.
In an effort to use maximally informative pairs as few-shot examples, we designed an iterative process for compiling small curated sets of examples.
First, we designed an improved zero-shot prompt by giving a set of source-target pairs sampled from the gold annotations to the model and prompting it to explain the differences between the pairs. We discovered by accident that this yields a very plausible and concise task definition, and we reasoned that a definition generated by the model on the basis of real examples might be more informative as a prompt than a manually designed one. We then provided two annotators6 with the resulting definition7, as well as with five more source sentences sampled from the corpus. Each of the annotators then adapted the definition into a zero-shot prompt, used that prompt to generate target sentences for each of the source sentences, and selected the best candidate from these to create a set of pairs with maximal perspective contrast and content overlap, to be used in a few-shot prompt. We kept both versions of the few-shot prompt, *iter-1* and *iter-2* 5*"Riscrivi le seguenti frasi da low ad high. Per high si* intende che la colpa è attribuita interamente al killer. Ecco alcuni esempi: [...] Riscrivi la seguente frase:" ("Rewrite the following sentences from low to high. 'High' means that the blame is entirely put on the killer. Here are some examples:
[...] Rewrite the following sentence:")
6The annotators were authors G.M. and M.B. 7The definition (slightly edited for grammar) is: *"Le frasi* precedute dall'etichetta "Low:" tendono ad essere più brevi e non danno la colpa esplicita all'assassino, mentre le frasi precedute dall'etichetta "High:" tendono ad essere più dirette e a dare la colpa all'assassino." ("The sentences preceded by "Low:" tend to be shorter and don't explicitly blame the murderer, while the sentences preceded by "High:" tend to be more direct and blame the murderer.")
in order to measure the combined effects of small difference in prompt, randomness in the generated candidates, and judgement differences in the selection of the best candidate.
## 2.3 Evaluation Methods
The main goal of responsibility perspective transfer is to generate a sentence with the desired perspective ("style strength" in classic style transfer tasks) that still has the same semantic content as the source sentence. We assess the performance of different models using standard metrics commonly employed in text style transfer (Mir et al., 2019; Briakou et al., 2021; Lai et al., 2022; Jin et al.,
2022), and custom automatic metrics; we also run a questionnaire study with human participants.
Automatic Evaluation For estimating perspective quality, we used the best-performing perspective regressor from Minnema et al. (2022a) which is based on an Italian monolingual DistilBERT model
(*BERTino*; Muffo and Bertino, 2020).
For content preservation, we use three popular text generation metrics: n-gram-based *BLEU* (Papineni et al., 2002) and *ROUGE* (Lin, 2004), as well as a neural-based model *COMET* (Rei et al., 2020).
Human Evaluation Participants were given an online survey with 50 blocks, each corresponding to one source sentence sampled from the dataset. In each block, participants rated: 1) the level of perceived agent responsibility in each of the seven target candidates; 2) the level of *content preservation* of each target relative to the source. We also designed a separate, smaller questionnaire that asked the same questions about the few-shot examples used in *iter-1* and *iter-2*.
The pool of invited participants was a group of people with mixed genders and backgrounds from the personal network of the authors. No remuneration was offered. Four invitees responded to the main questionnaire, and three invitees responded to the few-shot example questionnaire (all female,
| Metric | ↔ | Source | Target | mBART | GPT3 | | | | | |
|----------|------|----------|----------|---------|--------|--------|--------|--------|--------|--------|
| (avg) | base | src-meta | meta-src | na-zero | na-few | iter-1 | iter-2 | | | |
| BLEU | src | - | 0.015 | 0.725 | 0.612 | 0.236 | 0.303 | 0.435 | 0.489 | 0.285 |
| ROUGE | src | - | 0.100 | 0.808 | 0.701 | 0.351 | 0.551 | 0.638 | 0.659 | 0.450 |
| COMET | src | - | -1.216 | 0.540 | 0.257 | -0.591 | 0.103 | 0.538 | 0.379 | -0.058 |
| BLEU | tgt | 0.015 | - | 0.014 | 0.016 | 0.024 | 0.010 | 0.013 | 0.014 | 0.009 |
| ROUGE | tgt | 0.100 | - | 0.110 | 0.104 | 0.132 | 0.088 | 0.094 | 0.098 | 0.090 |
| COMET | tgt | -1.175 | - | -1.194 | -1.178 | -1.002 | -1.090 | -1.045 | -1.057 | -1.059 |
| Perspective | Similarity | HM | | |
|---------------|--------------|------|------|------|
| mBART | base | 2.14 | 7.72 | 3.34 |
| src-meta | 2.50 | 6.78 | 3.65 | |
| meta-src | 4.50 | 3.62 | 4.01 | |
| GPT-3 | na-zero | 2.77 | 6.52 | 3.89 |
| na-few | 2.08 | 8.17 | 3.31 | |
| iter-1 | 3.57 | 7.97 | 4.98 | |
| iter-2 | 3.84 | 6.60 | 4.85 | |
| Examples | for iter-1 | 5.20 | 6.93 | 5.94 |
| for iter-2 | 3.87 | 5.27 | 4.46 | |
mean age: 46). The participants have different levels of education (from middle school to university)
and live in different regions of Italy.
Our evaluation study should be seen as a pilot, and larger-scale, more representative studies are planned for the future. The main aim of the pilot was to have a small-scale validation of our automatic metrics (taken from previous work and developed on the basis of a large-scale human study)
and to test our evaluation setup (which questions to ask, etc.). The questionnaire was designed and distributed using Qualtrics.8
## 3 Results 3.1 Automatic Results
Perspective Evaluation Following Minnema et al. (2022a), we distinguish between several perceptual dimensions using a perception regression model, as shown in Table 1. Our main dimension of interest (highlighted in blue) is *blame on murderer*,
but we also look at the two closely related dimensions of *cause* and *focus on murderer*. As shown by the R2scores, regression quality is decent for all of these dimensions. We observe that the source and target sentences have lower and higher blame scores respectively, which are also consistent on the 8https://www.qualtrics.com/
two related dimensions, affirming that our testing data is of good quality in terms perspective aspect.
For all models, the perception scores of the predicted sentences are higher than those of the source sentences, with mBART/*meta-src* achieving the highest scores. This suggests that all models alter perceptions of responsibility to some extent. However, in virtually all cases, perception scores stay well below the target, and in many cases below the average level (zero). For mBART-based results, models with meta-information perform better than the baseline, with *meta-src* reaching particularly high scores. Within the GPT-3 settings, zero-shot
(*na-zero*), surprisingly, performs better than fewshot; (*na-few*), and *iter-1* yields the highest scores.
Content Preservation When taking source sentences as the reference, three metrics show that the outputs have higher similarities to them than the target sentences. mBART/*base* has the highest scores, which (combined with the low perception scores of this model) suggests that the model tends to copy from the source sentence. Within the GPT3 settings, *iter-1* has the highest scores. Using instead the target sentences as reference, we see that all scores are very close, with mBART/*metasrc*) reaching the best performance, followed by GPT-3/*na-few* and GPT-3/*iter-1*.
## 3.2 Human-Based Results
Table 3 reports the results of our human evaluation study. We find that mBART/*meta-src* is the best overall model on perspective, but has poor similarity. Meanwhile, GPT3/*na-few* achieves the highest score on similarity but the lowest score in terms of perspective, and its overall performance is lower than that of GPT3/*na-zero*. GPT3/*iter-1* has the best overall performance with an HM of 4.98.
We found reasonably high levels of inter-annotator agreement (Spearman's rank correlation between pairs of annotators). Correlations ranged between 0.3-0.6 (blame) and 0.4-0.6 (similarity) with high levels of significance (p < 0.0001). The examples for few-shot are of higher quality overall as they were picked by the authors.
## 3.3 Case Study
Box 1 shows two sets of example outputs generated by mBART and GPT-3.9 While hand-picked, these examples show that both models are capable of generating sentences that increase responsibility while trying to preserve content. However, they also highlight a key challenge: what if the source sentence lacks details about the event? The mBART model has access to event metadata and uses this effectively in Example 1 to produce a sentence that stays close to the source but with details from the metadata filled in (though with rather clunky sentence structure). In Example 2, instead, it produces a sentence that is factually correct but also loses most of the information from the source sentence. On the other hand, GPT-3, without access to metadata, often 'invents' missing information. This is evident in the second example, in which it faithfully preserves the source sentence and increases the level of blame by adding plausible but (partially)
incorrect information about the crime.
## 4 Discussion & Conclusion
We proposed responsibility perspective transfer as a new task and introduced a dataset and models for applying this task to Italian news reporting about femicides. Our dataset contains a limited amount of quasi-aligned pairs that proved useful for evaluation and few-shot learning. We experimented with two modeling approaches: unsupervised mBART
(with or without enriching the input with metadata)
and zero-shot/few-shot learning with GPT-3.
Our human and automatic evaluations suggest GPT-3/iter-1 as the best overall model, with a relatively high level of responsibility placed on the perpetrator and a good degree of content preservation. For the latter, most models score at least 6/10 on average on the human survey. The perspective change itself has also been achieved by our models, with substantially increased levels of perceived perpetrator blame compared to the source, but there is still much room for improvement: none of the models comes close to having the same level of blame as the target sentences do, and in the human eval-9Due to lack of space, we include only generations from the overall best-performing model from each category.
## Prediction Examples
(Ex. 1) *Provaglio d'Iseo, donna trovata morta in* casa : si sospetta il compagno "[town name], woman found dead at home: [her] partner is suspected" Target: Donna massacrata in casa nel Bresciano, il convivente fermato per omicidio volontario "Woman massacred at home in [area name]: [her] cohabitant
[partner] arrested for voluntary homocide" mBART (meta-src): *Un uomo, Elio Cadei, è stato* fermato per l'omicidio di Simona Simonini, la donna trovata morta in casa: si sospetta il compagno di lei.
"A man, Elio Cadei, has been arrested for the murder of Simona Simonini, the woman found dead at home: the partner of her is suspected" GPT-3 (iter-1): Provaglio d'Iseo, donna trovata morta in casa: il compagno è il principale sospettato
"[town name], woman found dead at home: [her] partner is the principal suspect"
(Ex. 2) *«Come vuole che ci si senta, quando ci sono* stati due morti?» "«How do you want us to feel, when there have been two deaths?»"
Target: *Albenga, uccide l'ex compagna a coltellate* e poi si toglie la vita Loredana e l'incubo-stalker foto "[town name], [someone] kills [his] ex-partner by stabbing and then takes his life[.] Loredana and the nightmare stalker[, see] foto" mBART (meta-src): *Un uomo, Mohamed Aziz El* Mountassir, di 43 anni, ha ucciso la moglie, Loredana Colucci, di 44, con una coltellata al collo. "A man, Mohamed Aziz El Mountassir, 43 years old, has killed his wife, Loredana Colucci, 44 years old, with a knife to [her] neck." GPT-3 (iter-1): *«Come vuole che ci si senta, quando* ci sono stati due morti causati da un uomo che ha ucciso prima la sua ex moglie e poi la sua nuova compagna? "How do you want us to feel, when there have been two deaths caused by a man who has first killed his ex-wife and then his new partner?"
## Box 1: Prediction Examples
uation survey no model achieves a 'blame score' of more than 4.5/10. The main obstacle for future improvements seems to lie with the lack of truly parallel data; however, our GPT-3-based iterative approach of creating minimal pairs seems to have worked quite well, and might be further exploited on a larger scale.
## 5 Limitations
This paper introduced the new task of responsibility perspective transfer and provided initial data collection and modeling for a specific domain (news about gender-based violence) and language (Italian). The main limitation of our work is that the
(mBART) models that we trained and the prompts
(for GPT-3) that we designed are specific to this domain and language and cannot be applied 'outof-the-box' in other contexts. However, all of our modeling setups require no or limited training data and make use of readily available existing models, so we believe the general approach to be easily transferrable to other domains.
Another limitation comes from the fact that we used GPT-3: the model is closed-source and can only be accessed with a paid subscription to the OpenAI API (https://beta.openai.com/).
This has consequences for reproducibility for several reasons. First of all, we do not have access to the exact technical specifications of the model or to the training data that was used. The GPT-3 models are regularly updated (at the time of our experiments, *text-davinci-002* was the most recent available version), but limited information is available about what distinguishes each version from the previous ones or from the original model introduced in Brown et al. (2020). Moreover, access to the API is controlled by OpenAI and could be closed at any time at the company's discretion; the API is currently quite accessible with no waiting list and a reasonably generous free trial, but the rates (paid in USD) might not be affordable for researchers outside of institutions in high-income countries, and not all researchers might be comfortable agreeing to the company's terms and conditions. Finally, the generative process involves a degree of randomness, and through the API it is not possible to fixate the model's random seed, meaning that the model produces different predictions every time it is called, even when using exactly the same prompt.
## 6 Ethics Statement
We see three important ethical considerations around our paper. The first consideration is related to the use of large proprietory language models
(GPT-3). Apart from the reproducibility limitations resulting from the use of GPT-3 discussed above, there are more general ethical questions surrounding the use of GPT-3 and similar models, for example the high energy usage and resulting carbon emissions, and societal questions around the oligopoly on state-of-the-art language models that is currently in the hands of a handful of large US-based companies.
The second consideration relates to the task that we introduce: while we see perspective transfer models as a valuable tool for studying how language 'frames' (social) reality that could also have practical applications, for example in journalism, we strongly believe that any such applications must be approached with extreme care. The models that we introduce are scientific analysis tools that could be used to suggest alternative viewpoints on an event, but we believe that generations should not be seen as necessarily reflecting a 'true' or 'better' perspective, and should not used in a prescriptive way (i.e. used to tell someone how to write). We believe that the authors (journalists or others) of any text ultimately bear exclusive responsibility for the views, perspectives and (implicit) values expressed in it, and should be careful in making use of texts (re-)written by computers, such as the ones produced by our proposed models.
Finally, we are aware that our task domain
(femicide/gender-based violence) is a societally and emotionally loaded topic, and that the texts contained in our dataset and produced by our models might be disturbing. In particular, in some cases, models may produce graphic descriptions of violence and/or produce questionable moral judgements (e.g., we have occasionally seen statements such as "the perpetrator of this horrible crime does not have the right to live" spontaneously produced by some of the models), and potential users of applications of the model should be aware of this. For the purposes of this paper, the only people external to the research team who have been extensively exposed to model outputs were the annotators in our human evaluation study. In the introduction page of our online questionnaire, annotators were warned about the sensitive nature of the topic and advised that they could stop their participation at any time if they felt uncomfortable and could contact the authors with any questions. Prior to running the online questionnaire we have requested and obtained ethical approval by the Ethical Review Committee of our research institution.
## Author Contributions
Authors G.M. and H.L. share first co-authorship
(marked with '*'). G.M. had primary responsibility for data collection and preparation, setting up the GPT-3 experiments and running the human evaluation survey. H.L. had primary responsibility for the mBART experiments and the automatic evaluation. B.M. annotated data (pair alignment) and contributed to prompt engineering and the design of the evaluation questionnaire. M.N. coordinated and supervised the overall project.
## Acknowledgements
Authors G.M. and M.N. were supported by the Dutch National Science organisation (NWO) through the project Framing situations in the Dutch language, VC.GW17.083/6215. Author H.L. was supported by the China Scholarship Council (CSC).
We would like to thank the annotators for helping us evaluate the models' outputs. We also thank the ACL anonymous reviewers for their useful comments. Finally, we thank the Center for Information Technology of the University of Groningen for their support and for providing access to the Peregrine high performance computing cluster.
## References
M. Belluati. 2021. *Femminicidio. Una lettura tra realtà* e interpretazione. Biblioteca di testi e studi. Carocci.
Gerd Bohner. 2002. Writing about rape: Use of the passive voice and other distancing features as an expression of perceived responsibility of the victim.
British Journal of Social Psychology, 40:515–529.
Eleftheria Briakou, Sweta Agrawal, Joel Tetreault, and Marine Carpuat. 2021. Evaluating the evaluation metrics for style transfer: A case study in multilingual formality transfer. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 1321–1336, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei.
2020. Language models are few-shot learners. *CoRR*,
abs/2005.14165.
Kurt Gray and Daniel M. Wegner. 2009. Moral typecasting: divergent perceptions of moral agents and moral patients. Journal of Personality and Social Psychology, 96:505–520.
Christopher Hart and Matteo Fuoli. 2020. Objectification strategies outperform subjectification strategies in military interventionist discourses. *Journal of* Pragmatics, 162:17–28.
Nancy M Henley, Michelle Miller, and Jo Anne Beazley. 1995. Syntax, semantics, and sexual violence:
Agency and the passive voice. Journal of Language and Social Psychology, 14(1-2):60–84.
Vu Cong Duy Hoang, Philipp Koehn, Gholamreza Haffari, and Trevor Cohn. 2018. Iterative backtranslation for neural machine translation. In *Proceedings of the 2nd Workshop on Neural Machine* Translation and Generation, pages 18–24, Melbourne, Australia. Association for Computational Linguistics.
Janellen Huttenlocher, Karen Eisenberg, and Susan Strauss. 1968. Comprehension: Relation between perceived actor and logical subject. *Journal of Verbal Learning and Verbal Behavior*, 7:527–530.
Shanto Iyengar. 1994. *Is anyone responsible? How television frames political issues*. University of Chicago Press.
Di Jin, Zhijing Jin, Zhiting Hu, Olga Vechtomova, and Rada Mihalcea. 2022. Deep learning for text style transfer: A survey. *Computational Linguistics*,
48(1):155–205.
Huiyuan Lai, Jiali Mao, Antonio Toral, and Malvina Nissim. 2022. Human judgement as a compass to navigate automatic metrics for formality transfer. In Proceedings of the 2nd Workshop on Human Evaluation of NLP Systems (HumEval), pages 102–115, Dublin, Ireland. Association for Computational Linguistics.
Huiyuan Lai, Antonio Toral, and Malvina Nissim. 2021.
Generic resources are what you need: Style transfer tasks without task-specific parallel training data.
In *Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing*, pages 4241–4254, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Guillaume Lample, Sandeep Subramanian, Eric Smith, Ludovic Denoyer, Marc'Aurelio Ranzato, and Y-Lan Boureau. 2019. Multiple-attribute text rewriting. In Proceedings of Seventh International Conference on Learning Representations.
Chin-Yew Lin. 2004. ROUGE: A package for automatic evaluation of summaries. In *Text Summarization Branches Out*, pages 74–81, Barcelona, Spain.
Association for Computational Linguistics.
Yinhan Liu, Jiatao Gu, Naman Goyal, Xian Li, Sergey Edunov, Marjan Ghazvininejad, Mike Lewis, and Luke Zettlemoyer. 2020. Multilingual denoising pretraining for neural machine translation. *Transactions of the Association for Computational Linguistics*, 8:726–742.
Fuli Luo, Peng Li, Jie Zhou, Pengcheng Yang, Baobao Chang, Zhifang Sui, and Xu Sun. 2019. A dual reinforcement learning framework for unsupervised text style transfer. In *Proceedings of the 28th International Joint Conference on Artificial Intelligence*,
pages 5116–5122.
Chiara Meluzzi, Erica Pinelli, Elena Valvason, and Chiara Zanchi. 2021. Responsibility attribution in gender-based domestic violence: A study bridging corpus-assisted discourse analysis and readers' perception. *Journal of pragmatics*, 185:73–92.
Gosse Minnema, Sara Gemelli, Chiara Zanchi, Tommaso Caselli, and Malvina Nissim. 2022a. Dead or murdered? Predicting responsibility perception in femicide news reports. In *Proceedings of the 2nd* Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 12th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1078–
1090, Online only. Association for Computational Linguistics.
Gosse Minnema, Sara Gemelli, Chiara Zanchi, Tommaso Caselli, and Malvina Nissim. 2022b. SocioFillmore: A tool for discovering perspectives. In *Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics: System Demonstrations*, pages 240–250, Dublin, Ireland. Association for Computational Linguistics.
Remi Mir, Bjarke Felbo, Nick Obradovich, and Iyad Rahwan. 2019. Evaluating style transfer for text. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 495–504, Minneapolis, Minnesota. Association for Computational Linguistics.
Matteo Muffo and Enrico Bertino. 2020. BERTino:
An Italian DistilBERT model. In *CLiC-it 2020: 7th* Italian Conference on Computational Linguistics.
Kishore Papineni, Salim Roukos, Todd Ward, and Wei jing Zhu. 2002. BLEU: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meetings of the ACL, pages 311–318.
Erica Pinelli and Chiara Zanchi. 2021. Gender-based violence in Italian local newspapers: How argument structure constructions can diminish a perpetrator's responsibility. Discourse Processes between Reason and Emotion: A Post-disciplinary Perspective, page 117.
Ricardo Rei, Craig Stewart, Ana C Farinha, and Alon Lavie. 2020. COMET: A neural framework for MT
evaluation. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 2685–2702, Online. Association for Computational Linguistics.
Yuqing Tang, Chau Tran, Xian Li, Peng-Jen Chen, Naman Goyal, Vishrav Chaudhary, Jiatao Gu, and Angela Fan. 2021. Multilingual translation from denoising pre-training. In *Findings of the Association* for Computational Linguistics: ACL-IJCNLP 2021, pages 3450–3466, Online. Association for Computational Linguistics.
Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Transformers: State-of-the-art natural language processing.
In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38–45, Online. Association for Computational Linguistics.
Wei Xu, Alan Ritter, Bill Dolan, Ralph Grishman, and Colin Cherry. 2012. Paraphrasing for style. In *Proceedings of COLING 2012*, pages 2899–2914, Mumbai, India. The COLING 2012 Organizing Committee.
Karen Zhou, Ana Smith, and Lillian Lee. 2021. Assessing cognitive linguistic influences in the assignment of blame. In *Proceedings of the Ninth International* Workshop on Natural Language Processing for Social Media, pages 61–69, Online. Association for Computational Linguistics.
## Annotation Statistics
A.1
## Inter-Annotator Agreement
Figures A.1 give inter-annotator agreement scores for the human evaluation. Columns and rows represent individual annotators; colors represent Spearman correlations; numbers in cells are p-values.
![8_image_0.png](8_image_0.png)
![8_image_1.png](8_image_1.png)
![8_image_2.png](8_image_2.png)
## A B Questionnaire Materials
Mockups from the online survey are given in Figures B.2 and B.3.
![9_image_0.png](9_image_0.png)
Figure B.2: Qualtrics mockup: "speedometer" for rating agentivity Usa il "termometro" per indicare quanto è probabile che tutte le tre frasi descrivano gli stessi fatti.
![9_image_1.png](9_image_1.png)
|
ungless-etal-2023-stereotypes | Stereotypes and Smut: The (Mis)representation of Non-cisgender Identities by Text-to-Image Models | https://aclanthology.org/2023.findings-acl.502 | Cutting-edge image generation has been praised for producing high-quality images, suggesting a ubiquitous future in a variety of applications. However, initial studies have pointed to the potential for harm due to predictive bias, reflecting and potentially reinforcing cultural stereotypes. In this work, we are the first to investigate how multimodal models handle diverse gender identities. Concretely, we conduct a thorough analysis in which we compare the output of three image generation models for prompts containing cisgender vs. non-cisgender identity terms. Our findings demonstrate that certain non-cisgender identities are consistently (mis)represented as less human, more stereotyped and more sexualised. We complement our experimental analysis with (a) a survey among non-cisgender individuals and (b) a series of interviews, to establish which harms affected individuals anticipate, and how they would like to be represented. We find respondents are particularly concerned about misrepresentation, and the potential to drive harmful behaviours and beliefs. Simple heuristics to limit offensive content are widely rejected, and instead respondents call for community involvement, curated training data and the ability to customise. These improvements could pave the way for a future where change is led by the affected community, and technology is used to positively {''}[portray] queerness in ways that we haven{'}t even thought of{''}{'} rather than reproducing stale, offensive stereotypes. | # Stereotypes And Smut: The (Mis)Representation Of Non-Cisgender Identities By Text-To-Image Models
Eddie L. Ungless School of Informatics University of Edinburgh Scotland Björn Ross
![0_image_0.png](0_image_0.png)
School of Informatics University of Edinburgh Scotland [email protected] Anne Lauscher Data Science Group University of Hamburg Germany
## Abstract
Warning: many of the images displayed in this paper are offensive and contain implied nudity.
They are intended to illustrate potential harms.
Cutting-edge image generation has been praised for producing high-quality images, suggesting a ubiquitous future in a variety of applications. However, initial studies have pointed to the potential for harm due to predictive bias, reflecting and potentially reinforcing cultural stereotypes. In this work, we are the first to investigate how multimodal models handle diverse gender identities. Concretely, we conduct a thorough analysis in which we compare the output of three image generation models for prompts containing cisgender vs. noncisgender identity terms. Our findings demonstrate that certain non-cisgender identities are consistently (mis)represented as less human, more stereotyped and more sexualised. We complement our experimental analysis with
(a) a survey among non-cisgender individuals and (b) a series of interviews, to establish which harms affected individuals anticipate, and how they would like to be represented. We find respondents are particularly concerned about misrepresentation, and the potential to drive harmful behaviours and beliefs. Simple heuristics to limit offensive content are widely rejected, and instead respondents call for community involvement, curated training data and the ability to customise. These improvements could pave the way for a future where change is led by the affected community, and technology is used to positively *"[portray] queerness in ways that* we haven't even thought of" rather than reproducing stale, offensive stereotypes.
## 1 Introduction
Figure 1: Four images generated by Stable Diffusion model in response to *"Transgender women"*. The black square indicates the model did not produce an output due to risk of NSFW content.
2022). Very quickly, a form of *"everyday algorithmic auditing"* began (Shen et al., 2021), whereby users of the model shared potentially harmful images produced in response to neutral prompts1.
Some of the generated images reflected human stereotypes such as the association between the roles of CEO and programmer, and white men - a finding corroborated by recent research (Bianchi et al., 2022; Bansal et al., 2022; Cho et al., 2022).
Text-to-image models reflect social biases in their output, just as word embeddings and neural language models have been shown to capture related gender and racial stereotypes (Bolukbasi et al., 2016; Guo and Caliskan, 2021; Sheng et al.,
2019a). Biased text-to-image models may result 1https://twitter.com/jose_falanga/
status/1537953980633911297, https:
//twitter.com/ScientistRik/status/
1553151218050125826, https://twitter.com/
NannaInie/status/1536276032319279106 Summer 2022 saw the publicly accessible DALL·E mini text-to-image model go viral (Hughes, 2022). Users enjoyed creating and sharing digital art, with some 50,000 images being produced a day (Knight, 7919 both in representational harms, where harm occurs due to how a particular sociodemographic is represented, and allocational harms, relating to the allocation of resources to the sociodemographic such as access to job opportunities and the ability to use a service (Barocas et al., 2017).
Our own "everyday auditing" of DALL·E mini revealed potentially offensive content produced in response to non-cisgender2identity terms: images were often cartoonish and figures were rendered using colours from associated flags, adding to the lack of realism, which could reinforce the belief that such identities "aren't real" (Valentine, 2016; Minkin and Brown, 2021). Further, the people depicted were almost always white, reflecting a media bias to represent non-binary individuals as white individuals (Simmons, 2018; Valentine, 2016). We build on this with a **systematic annotation study**
of content produced by three text-to-image models in response to prompts containing different gender identities, such as the ones given in Figure 1.
Identifying whether the model produces harmful content in response to non-cisgender identities allows us to caution the research community and public when developing and using these models.
In order to expand beyond our own preconceptions, we also conduct a **survey of non-cisgender**
individuals, asking them to identify potential harms of the model. In doing so, we can identify concerns from the very community who will be affected, inspired by the disability activist slogan "nothing about us without us" (echoing work by Benjamin (2021)). Finally, beyond identifying harms, we explore the communities' desired output from these models with regards to representing their identities, through a series of **interviews**.
Contributions. Our main contributions are as follows: (1) We are the first to present a thorough manual analysis of how text-to-image models currently handle gender identities in different application contexts, and the potential harms, to highlight the caveats of these models. (2) We provide recommendations for how models should be shaped in future based on how the community would like to be represented. Our findings will provide guidance to those developing the models as to how the affected community would like for these issues to be resolved. Providing this kind of insight is crucial to ensuring the voices of those who are marginalised are heard and used to lead development, rather than work being guided by the intuitions of those who are not impacted by such harm.
## 2 Related Work
We survey the literature relating to (gender) identity inclusion in NLP and the recently emerging area of bias analysis in image generation.
Identity-Inclusive NLP. Existing work on noncisgender identities and machine learning is sparse
(e.g., Dev et al., 2021; Cao and Daumé III, 2020; Lauscher et al., 2022). However recently, there have been a couple of works dealing with genderneutral pronouns (e.g., Brandl et al., 2022; Qian et al., 2022). As such, work by Lauscher et al.
(2022) explores the diversity of gender pronouns and presents five *desiderata* for how language models should handle (gender-neutral) pronouns. In a similar vein, we explore potential solutions for how text-to-image models should handle non-cisgender identities. Brandl et al. (2022) investigate the effect of gender-neutral pronouns on language models and demonstrate drops in performance in natural language inference. As a potential solution, Qian et al. (2022) propose a perturber model for augmenting data sets which they train on texts that have been rewritten in a gender-neutral way. Most relevant to our approach, Dev et al. (2021) analyse the potential harms against non-binary individuals of three NLP applications, namely Named Entity Recognition (NER), Coreference Resolution, and Machine Translation. They survey non-binary individuals with AI experience to identify possible harms for these tasks, and in different domains.
They additionally analyse the potential for erasure and misgendering due to use of GloVe or BERT
embeddings. We extend their work by analysing potential harms of text-to-image models, and additionally consider how the community would like to be represented by these models.
Bias Analysis in Image Generation. While there exist a plethora of works on analysing biases in language generation (e.g., Sheng et al., 2019b; Yeo and Chen, 2020; Barikeri et al., 2021, *inter* alia), work on bias in image generation is still relative sparse (e.g., Bianchi et al., 2022; Bansal et al., 2022; Cho et al., 2022). As one of the earliest works, Salminen et al. (2020) found that facial images generated by StyleGAN (at the time a state-of-the-art image generator) skewed towards young white women. In a similar vein, Struppek et al. (2022) investigated cultural biases. Similar to us, they focus on DALL·E 2 and Stable Diffusion. Cho et al. (2022) probe these models for social stereotypes related to gender and skin colour.
Most recently, Bianchi et al. (2022) also explore the topic of bias in text-to-image model outputs, with a focus on stereotyping. In the supplementary material, they also present images generated using the term "non-binary", but don't explore the issue more thoroughly. In our own work, we focus not only on stereotypes, but also the quality of images produced for diverse gender identities and provide an empirical analysis of the issues. Identifying these kinds of biases in text-to-image models allows for more targeted mitigation strategies.
## 3 Analysis Of Generations
We investigate how models currently handle gender identities. We insert gender identity terms into template prompts, generate images using three stateof-the-art models and annotate image features such as photorealism and implied nudity to compare cisgender and various non-cisgender identities.
## 3.1 Prompt Creation
We used five neutral templates (with little inherent meaning) and five templates designed to represent possible commercial use of the models, given in Table 1. All prompts are in English. This small number of templates allowed us to focus on variation across a large number of identities (which we prioritise over exploring linguistic diversity). The
"commercial" templates were taken from Conceptual Captions, a dataset of images and HTML-alt text (Sharma et al., 2018). We manually selected five captions from the unlabelled training data that included person, woman or man, then replaced this with one of our identity phrases. We use these real world captions to improve the ecological validity of our analyses (that is to say, how well the experimental findings relate to the real world). We selected captions that relate to commercial use cases identified in the DALL·E 2 documentation3.
We identified ten words relating to trans status, namely *cisgender, latinx, two-spirit, transgender,* trans, enby, nonbinary, gender non-conforming, 3https://github.com/openai/
dalle-2-preview/blob/main/system-card.md genderqueer and *queer*; and combined where appropriate with person terms (woman, man, person, women, men, people) and pronouns from the list his, her, their, xyr, its. Our choice of terms was based on a recent "Gender Census", with the addition of *twospirit, latinx* to expand our focus to identities used exclusively by people of colour. Term selection and use is explained in Appendix A. Whilst some of these identity terms have multiple meanings, for example *queer, latinx*, we wanted to be inclusive in our choice of terms, acknowledging that language use can be "fuzzy".
We also include examples where trans status is not specified, but cisgender will be "assumed"
(in the sense that the training data will almost exclusively include examples where trans status is not specified but the individuals depicted are cisgender), as this is the norm (Bucholtz and Hall, 2004; DePalma and Atkinson, 2006). This allows us to explore how the model handles implicit norms
(where trans status is not given but cisgender will be assumed) and explicit norms (where cisgender is stated), and also allows us to control for word length (though how the models handle tokenisation will impact how the input is processed). We detail how we combine these terms in Appendix A.
The large number of possible trans status, person and pronoun combinations gave 231 prompts when combined with our 10 templates.
We used sentence case but no final punctuation in our prompts, to match typical prompt usage observed on Twitter and in prompt guides 4.
## 3.2 Image Generation
We generated four images for every prompt. Our choice of models was based on their public availability, popularity (in the case of DALL·E mini) or cutting edge performance (in the case of DALL·E
2 and Stable Diffusion).
DALL·E mini We used the dalle-mini/dallemega model (henceforth dall-e mini) (Dayma et al., 2021). The public facing DALL·E mini app incorporates both "DALL·E Mini" and "DALL·E Mega" models. Images were generated using an adapted version of a Python notebook5(adapted to run as a script using the chosen dalle-mini 4https://dallery.gallery/
the-dalle-2-prompt-book/
5https://colab.research.google.com/
github/borisdayma/dalle-mini/blob/main/
tools/inference/inference_pipeline.ipynb
| Neutral Templates | Commercial Templates |
|----------------------------|---------------------------------------------------------------------------------|
| The (trans status) person | Vector illustration of (trans status) person isolated on a white background |
| A (trans status) person | Portrait of a smiling (trans status) person stroking pronoun dog lying on couch |
| (trans status) person | A demonstration of a group of (trans status) people practicing their rights |
| (trans status term) people | (trans status) people tour and enjoy the public park during summer |
| (trans status) | (trans status) person at a corporate event |
Table 1: Templates indicating where trans status phrases, person and pronoun terms are included. (Parentheses)
indicate optional elements. *Person* is replaced with *man, woman* where appropriate. *People* is replaced with *men,* women where appropriate. Pronoun is replaced with *his, her, their, xyr, its* where appopriate.
model, and to generate four images for each prompt). Images were produced in <2 GPU hours.
DALL·E 2 For generating images with DALL·E
2, we resorted to OpenAI's Python package6and queried the paid image generation API with our prompts (resolution set to 256x256 pixels).
Stable Diffusion We used the most popular Stable Diffusion text-to-image model on Hugging Face, namely stable-diffusion-v1-5
(Rombach et al., 2022), henceforth Stable Diffusion, with default parameters, creating four images/
prompt. Images were produced in <2 GPU hours.
## 3.3 Annotation Procedure
We recruited six annotators from our institutions that (a) were all familiar with the concept of AIbased image generation, (b) were proficient speakers of the English language, (c) represented relatively diverse cultural, and gender backgrounds, and (d) demonstrated great interest in helping to make AI more inclusive. Annotators were based in Europe. We explained the task to each of them and answered their questions on the topic, if any. Annotators were aware they may see offensive and NSFW material. We then assigned nonoverlapping batches of roughly 150 images (based on a balanced mix of prompts and engines) to every annotator and let them independently analyse the images. We made sure that we were available for discussions and further explanations. Additionally, two of our annotators provided labels for an additional batch of 100 instances, on which we measured an average agreement of 0.8 Krippendorff's α across all questions with the lowest score on the question whether annotators see a flag (0.56)
and the highest on whether there is an individual 6https://github.com/openai/
openai-python present (1.00). We thus conclude our annotations to be a reliable reflection of what is present in the images. The total number of annotated instances is 984.7 Annotators were asked to indicate:
- Level of photorealism - Whether an individual is present, and if so:
- How many individuals are visible? - Are facial features mostly visible? - Is anyone non-white?
- Is there (implied) nudity of torso or crotch?
- Are there text or symbols? - Are there flags in the image? - Is the image relevant to the template? (for the
"commercial" templates only)
Our choice of features to annotate is based on research into dehumanisation, which we detail in Appendix A, and results of our initial audit of DALL·E
mini. We also depict the interface in Appendix A.
## 3.4 Results: Qualitative Observations
We first conduct a qualitative analysis of the results.
Stereotypes All three models struggled with the identity Two-spirit, often rendering pairs of individuals. Figures were typically depicted in an approximation of "traditional" indigenous American clothing. Images were often cartoon-like or very abstract. Figure 6 in Appendix A illustrates these points. Notably, one of our interview participants expressed a concern that these models would fail to portray indigenous non-cisgender individuals 7Instead of 1,000, because some annotators accidentally skipped some images.
as "normal", showing them in religious garb; this concern seems to have been realised.
Often, images of transgender people contained a combination of features typically regarded as "feminine" and "masculine" i.e. a beard and long hair.
Whilst this is absolutely an expression of gender that exists and is to be celebrated, some transgender people may feel this is an inaccurate representation
(and indeed our interviewees expressed concern about the inappropriate combining of typically gendered features, see Section C). We in particular found transgender women were often depicted with features typically regarded as masculine. Figure 7 in Appendix A illustrates this point. Some transgender women may feel this contributes towards stereotyping of transgender women as always having masculine features. We conclude that **models**
propagate ethnic and trans status stereotypes.
Smut We found the models produced pornographic images despite attempts to mitigate this by developers. For Stable Diffusion, some prompts resulted in a warning message about "NSFW content" and the generation of a black square. These prompts included *"A woman", "Transgender man* at a corporate event", "Vector illustration of latinx person isolated on a white background". The most common identity affected was *"transgender"*. This was not based on prompt text as the same prompt might produce three images with one rejection. Despite this safety step, the model produced a number of pornographic images including graphic images of genitalia. DALL·E 2 "refused" to generate an image for a number of prompts derived from the template *"Portrait of a smiling <identity phrase>*
stroking <pronoun> dog lying on couch" and the identity terms *cisgender, trans* and *transgender*, stating "Your prompt may contain text that is not allowed by our safety system". We believe the word
"stroking" combined with a trans status term may have triggered this warning, although some combinations were allowed, as were unmarked identities (*man, woman, person*). We thus conclude that **prompt blocking and NSFW warning features are likely to contribute to the erasure of**
non-cis identities and often do not prevent the generation of harmful output.
## 3.5 Results: Annotation Task
We show some of the results of our analysis in Figures 2a–2d. The average degree of photorealism varies slightly among images generated with prompts containing different identity phrases (Figure 2a). Images for latinx identity phrases achieve the highest average score with 2.8, followed by phrases commonly associated with cisgender identities (e.g., *man, woman, etc.*) with 2.7. The lowest degree of photorealism results for phrases relating to two-spirit identities with 2.2. There is a large variation in the proportion of images containing symbols and text (Figure 2c) or flags (Figure 2d).
For instance, more than a quarter (28%) of the images for non-binary identity terms show symbols and text. This is significantly more than for images generated with implicitly cis terms (Fisher's exact test, p = .038). Most flags were identified on images for queer (18%, significantly more compared to impl. cis, *p < .*001), latinx (15%), and trans (12%) identity phrases. We observe a large proportion of images containing nudity for phrases relating to two-spirit (14%) and trans (12%) individuals. The differences between images generated with implicitly cis vs. two-spirit (p = .009) and trans (p = .016) identity terms are also statistically significant. We further note a high amount for phrases explicitly conveying cis-identity (8%) possibly triggered by the token *"gender"*. Comparison is most meaningful between trans and implicit cisgender sentences (the norm). Figures 4 and 5 in the Appendix illustrate this point: there is a stark difference in the amount of nudity in response to two prompts that differ only by the word "transgender".
We observe a lack of ethnic diversity in the images: the majority of images contain no non-white individuals. Figures 1, 5 and 7 in the Appendix illustrate this point. The models reflect the (Western)
norm of whiteness. In sum, there is high output variation depending on the identity phrase in the prompt, which is likely to lead to a lower degree of photorealism and potentially harmful generations
(i.e., nudity, stereotypes).
## 4 Survey Of Non-Cisgender People'S Expectations
We conducted a survey of English-speaking noncisgender individuals to investigate potential harms.
We also asked respondents for their satisfaction with a number of heuristic solutions, and optionally to provide their own solutions to the harms.
![5_image_0.png](5_image_0.png)
## 4.1 Methodology 4.1.1 Participants
We recruited participants through posts on social media and the Queer in AI community group. Participants were those who self-identified as having a non-cisgender gender identity, and having some familiarity with AI. We hope that our focus on those with some familiarity with AI will allow us to explore the topic in depth without use of leading questions - participants can draw on their own experience of issues that have arisen in their work, and their familiarity with ML techniques will provide them with foresight as to the kinds of problems that might arise. In this we are following the success of Dev et al. (2021) in their study on harms of gender exclusivity in language technologies.
## 4.1.2 Design
Our questions around harms and norms are framed around the potential (commercial) use cases for text-to-image models. We provide examples from the DALL·E 2 documentation8. We are not interested solely in the DALL·E family of models, but felt that the proposed usage contexts would provide 8https://github.com/openai/
dalle-2-preview/blob/main/system-card.md a useful starting point for discussions. Participants can relate their answers to potential real-world use cases, providing their own suggested uses also.
## 4.1.3 Procedure
After giving consent, participants were asked optional demographic questions. The list of questions and answer options are largely taken from Dev et al. (2021), with some excluded for brevity. We asked about gender identity, sexuality, trans status, pronoun use, ethnicity, native languages and experience with AI. We provide full details including a breakdown of answers in Appendix B.
Participants were then given a brief description of text-to-image models, including an example output from the Craiyon9 model. We outlined how such models were trained (here participants' existing familiarity with AI was crucial to keep descriptions brief). We explained we were interested in exploring these models' potential for harm.
We then presented them with a quote from the DALL·E 2 documentation where they outline potential commercial use cases, explaining our choice of providing these use cases. We asked participants if they could foresee harm occurring through use of this technology in these use cases, and in which use 9https://www.craiyon.com/
cases. We asked them to rate the potential severity of these harms. This framing could be argued to prime our respondents to agree that harm was likely, but our results indicate that respondents were willing to reject this premise. We then asked them to give an example scenario where harm might occur.
We then presented seven proposed solutions for how models should handle non-cisgender identities and asked users to rate how satisfactory they found each solution. They could optionally provide potential harms and benefits for each solution, and their own proposed solution. Participants were then asked if they had anything to add, then debriefed.
## 4.2 Survey Results And Discussion 4.2.1 Demographic Information
We had 35 respondents to our survey. Full details are reported in Appendix B. Respondents' ages ranged from 19 to 57, suggesting we were able to capture views from an age-diverse group. The most common gender identity was nonbinary, with 71% of respondents identifying as such (potentially alongside other identities). 85% of our respondents identify as trans, suggesting our avoidance of the terms trans or transgender in our recruitment allowed us to appeal to a wider spectrum of marginalised non-cisgender people.
Only three respondents identified as Black, Latinx and/or Indigenous; similarly, three identified as a person of colour. The vast majority (30) of our participants identified as white/Caucasian. Almost all our respondents (34) currently reside in North America, Europe or Australia, meaning our findings largely reflect a white Western perspective.
All participants rated themselves as having some familiarity with AI, through their education, career and/or personal interests.
## 4.2.2 Potential For Harm
The overwhelming majority of respondents felt that there was potential for harm, on average rating the severity as moderate. Contexts where a clear majority of users felt harm would occur were in marketing, education and art/creativity, and this was reflected in written responses also. We coded their written responses to the task asking for specific scenario(s) where harm might occur using a deductiveinductive approach. We wished to investigate the presence of allocational and representational harms, and references to the specific contexts of use, but we also developed codes based on the responses.
Representational harms far outnumbered allocational harms suggesting these were most salient to the community. Respondents spoke of their concerns about intentional misuse to create offensive content or harmful technologies. The potential impact on real-world behaviours and beliefs was a common theme, for example the reinforcement of prejudices or the creation of narrow beauty standards. Many respondents made explicit reference to the training data being the source of harm, reflecting the technical experience of our respondents.
Details of our analysis are in Appendix B.
## 4.2.3 Proposed Solutions
We proposed seven solutions that relied on simple heuristics to prevent harmful content being produced, developed through our own experience of heuristics used by existing models, and through casual discussion with colleagues and community members in response to the harmful images produced during the annotation task. The heuristics we proposed were as follows:
- The model generates an image based on the text (no change to current behaviour).
- The model ignores the non-cisgender identity terms in the text input and generates an image based on the rest of the text.
- The model generates an image based on the text but includes a warning that the output might be offensive.
- The model ignores all gender identity terms in the text input and generates an image based on the rest of the text.
- The model is trained on additional images containing non-cisgender individuals, so it better learns to generate images of non-cisgender people.
- The model effectively ignores the noncisgender identity terms in the text input and generates an image based on the rest of the text, but a flag or pin or symbol is used to indicate gender diversity.
- The model ignores the non-cisgender identity terms in the text input and generates an image based on the rest of the text, with a warning that to avoid harmful misrepresentation the model ignores non-cisgender identity terms.
The "solution" to change nothing was considered fairly unsatisfactory, with respondents noting concerns about stereotyping, although some respondents considered this their preferred outcome. The proposed heuristic solutions such as ignoring noncisgender identities terms (with or without an indication); ignoring all gender identities terms, and including a warning that the output might be offensive, were all deeply unpopular. However, the range of ratings indicated a diversity of opinions –
for example, the suggestion to "[include] a warning that the output might be offensive" received a low average rating but the bimodal nature of the results suggests there was a subset of respondents who found this solution to be somewhat satisfactory
(see Figure 12 in the Appendix).
By far the most satisfactory solution was to increase the amount of training data. However, respondents expressed concerns about the challenge of collecting representative data, and some were worried about the safety ramifications of gathering a labelled dataset of marginalised individuals. Full analysis of responses related to heuristics can be found in Appendix B.
Respondents were also invited to provide their own solutions for how they would like to models to handle non-cisgender identities. We coded their answers using an inductive approach, and found a number of key themes emerge related to the topic of how respondents wish to be represented, namely the need for representative data; unhappiness with the proposed heuristics; the necessity of wider changes; the need for community involvement; a desired ability to customise images. For example, participants called for "a diverse and representative set of images" in the training data, of queer and other marginalised identities, but also felt that "fixing society generally" may be necessary for technology to not produce harmful content.
Our thematic analysis can likewise be found in Appendix B.
## 5 Interviews
We additionally interviewed four participants who had indicated interest in the survey, selected to engender a diversity of views. We wanted to explore the potential harms in more depth, and in particular we wanted to discuss participants' preferences for how they would like to be represented, which we felt could be challenging to describe in text alone. Just as our survey aimed to expand beyond our preconceived harms, the interviews aimed at expanding beyond our preconceived solutions.
Methodology and full analysis of results can be found in Appendix C.
The seven major themes we identified in participants responses were harmful output; being unable to use current technology; rejection of heuristics; need for community input; need for transparency and regulation; desire for authentic representation; the potential for good.
As we found in the survey, participants expressed unhappiness with the heuristic solutions and in particular the idea of appending "warning labels"
- they felt this could lead to the community being associated with offensiveness. They were concerned the heuristics would lead to erasure of the community. However, participants were also concerned about unintentional and intentional harms, and many felt they could not use the current technologies. Their concerns included the potential for real world repercussions and even "violent stuff in the long run".
Participants suggested instead greater community involvement at every step, and greater transparency and regulation as the way to ensure more representative output. In addition to involving noncisgender people at every stage of development, the community could provide feedback on what output they feel is "right for them".
Participants felt that since use of these technologies seems "inevitable", the models must produce authentic representations of humanity: for example, the true global diversity of gender expressions should be captured, including "different expressions of gender in the global south".
Participants spoke of the "potential" to use image generation technologies to imagine queer futures for the community, which "can be... exciting", either through representing themselves in ways more aligned with their internal sense of self, or "[portraying] queerness in ways that we haven't even thought of".
## 6 Where To Go From Here?
We identified a great potential for harm through our annotation task and surveys and interviews with community members. Our annotation task revealed dehumanisation, othering, stereotyping and sexualisation of non-cisgender identities. Community members were concerned about misrepresentation, and intentional misuse of the technologies, as well as the potential for output to negatively influence people's behaviours and beliefs.
Rejection of heuristics Heuristic solutions to the problem of misrepresentation of non-cisgender individuals were almost universally rejected. Whilst we did not directly ask about this scenario, the Stable Diffusion and DALL·E 2 models' behaviour of refusing to generate potentially NSFW content would likely have been rejected as well by survey and interview respondents who spoke repeatedly of the harms of not being represented or being associated with warning labels. Unfortunately, the association between transgender identities and pornography means images of these communities are likely to be subject to greater censorship.
Curation of training data Respondents favoured curated training data as a way to improve representation, though they expressed hesitation over whether such a compiled dataset would be safe, and whether it could ever be truly representative. Careful, community led data curation may address some of these concerns, including involvement in creating sensitive labels for images.
Visualising the unseen Some communities are likely to remain underrepresented in training data for technical or safety reasons, or because the community is small. Models rely on huge amounts of data; novel novel data-efficient strategies that allow for adequate (and potentially customizable) representation of individuals that identify with small communities are needed to address the representation of such communities.
Desire for customisation The ability to customise images was proposed as a novel solution, which may help to overcome a lack of suitably diverse training data. Whilst this level of customisation is still emerging (OpenAI have recently introduced an Outpainting feature allowing users to generated extensions of a generated image10), our survey suggests this is a desirable feature for handling diverse identities appropriately. The lack of ability to customise was mentioned as a potential harm of these models by one respondent. Such customisation would also help with creating more faithful representations of other non-normative identities.
Of course, as (Brack et al., 2022) note, a drawback 10https://openai.com/blog/
dall-e-introducing-outpainting/
would be that such image customisation could also be used to create more harmful content.
Need for community involvement Respondents felt community involvement would help address some issues, but societal level changes were called for to make meaningful improvements. Whilst the latter may be beyond the power of those developing such systems, the call to involve community members at all stages of development can be addressed through diverse hiring, paid consultancy work and the like. Another avenue of community engagement is qualitative research such as the present study; the value of this form of engagement was touched upon by two interview participants, though one participant highlighted it was crucial for such work to be led by non-cisgender people. Future work should involve non-cisgender people without any familiarity with AI, through for example focus groups, to ensure a more diverse range of perspectives are captured.
Potential for good If these issues of stereotyping, dehumanisation and sexualisation can be addressed, there is a potential for these technologies to positively represent current and yet to be imagined queer identity expressions. Interviewees felt this technology could be used to create "gender affirmative" content, and "perfectly aligned" personas, and even "[portray] queerness in ways that we haven't even thought of [which] is an exciting prospect".
## Limitations
Annotation study Our use of a small, curated set of prompts allowed for direct comparison between the models' representations of different identities.
However, to investigate how these models perform generally when it comes to representing gender diverse identities, potentially improving the ecological validity of our annotation study, it may have been better to create a corpus of prompts through crowd sourcing or scraping image captions. This could have captured greater linguistic and cultural diversity. Our work would also benefit from extension to intersecting demographics such as disability and age.
Our annotation scheme could be extended to record "inappropriate" gendered features (for example, a transgender woman with traditionally masculine features such as facial hair). Whilst transgender women with masculine features are in no way "inappropriate", and are to be celebrated, if the models only produce images of transgender women with stereotypically masculine features, this suggests a lack of diversity in the training data and a tendency to (re)produce stereotypes. Figure 7 in the appendix suggests this may be the case.
Survey and Interviews We surveyed noncisgender individuals who had some familiarity with AI. While this has clear benefits, it is likely that should these tools become commercialised, the majority of those who are (negatively) impacted by their use (by the stereotyping and inaccuracy discussed in the previous section) will be those with no familiarity with the technology - the "general public". We must understand the general public's concerns and beliefs about technology in order to appropriately address these harms.
Further, by surveying those with some familiarity with AI, their proposed "solutions" may be stymied by a desire to offer solutions that seem technologically plausible. Though this has clear benefits (these solutions can become realistic medium-term goals for those developing textto-image technologies), we may fail to uncover long-term objectives which represent how participants truly wish to be represented by such systems, current technical limitations aside. We intend to pursue a survey of the general public in future work.
This will additionally allow us to compare the fears of the general public to the fears of those working AI, to understand if they align.
As noted in Section 4.2.1, our survey respondents were almost exclusively residing in the West, and were predominantly white, meaning we have failed to capture perspectives from the global south and non-white queer communities. In our interviewee selection we hoped to address this by inviting a diverse range of participants, but the interviewer's white Western background may have limited which topics participants felt comfortable discussing. Conducting the survey and interview in English will also have limited responses from nonWestern individuals.
Some multiply marginalised individuals may have felt less confident in their familiarity with AI due to the Imposter Phenomenon, a reaction to "systematic bias and exclusion" know to, for example, affect women of colour in particular (Tulshyan and Burey, 2022). This may have resulted in them excluding themselves from participating where a white person with similar experience chose to respond.
Interviewees were diverse with regards to (western) gender identities, but we did not interview any transgender women, who represent a particularly vulnerable part of the community (HRC, 2022). Future work focusing on their experiences would be extremely valuable.
Finally, survey and interview participants were not compensated. Some potential respondents may have been unwilling or unable to offer free labour, again limiting the diversity of views.
## Ethics Statement
Ethics approval was obtained for the annotation task, survey and interviews. In line with standard practice, we do not release the raw survey or interview data, as it contains information that may make our respondents identifiable, and we ensure that none of the direct quotes given in the paper contain any such data.
We include a brief reflexivity statement pertaining to "relevant personal and disciplinary viewpoints" (Birhane et al., 2022), and positionality statement pertaining to our "values, epistemologies, and backgrounds" (Liang et al., 2021)
The first author's interest in the representation of non-cisgender identities is driven in part by their being a member of this community. This author conducted the interviews which we hoped would address the interviewer effect - as one interview participant noted, research conducted by a cisgender interviewer would be "coloured through the lens" of their perspective (Interviewee D).
We approached this topic concerned with the potential harms these models might perpetuate through misrepresentation of the community, a concern not shared by all our survey respondents.
In addition to the limitations explored above, we identify several potential risks with this paper.
Some may be offended by the images we include.
We tried to mitigate this risk by including a warning in the abstract and not including images featuring genitalia. However we appreciate these images may contribute to the sexualisation and objectification of non-cisgender people, particularly if taken out of context.
Though we did not set out to generate offensive images (this would be counter to the models' intended use, for example as specified by Dayma et al. (2021)
11), images from the full data set could 11https://huggingface.co/dalle-mini/
dalle-mega similarly offend and even be weaponised. They might accompany transphobic messages online. A
data set of cisgender and non-cisgender images labeled by photorealism and presence of a clear face could feasibly be used to finetune a model to identify non-cisgender people (a concern raised by the community). As such, we make our image data set available only upon request; it is intended to measure the harm done to non-cisgender people, not contribute to it.
## Acknowledgements
We would like to thank our anonymous reviewers for their feedback. We are extremely grateful to our survey respondents and interview participants. Thank you also to Federico Nanni for early discussions. Eddie L. Ungless is supported by the UKRI Centre for Doctoral Training in Natural Language Processing, funded by the UKRI
(grant EP/S022481/1) and the University of Edinburgh, School of Informatics. Anne Lauscher's work is funded under the Excellence Strategy of the Federal Government and the Länder.
## References
Annalisa Anzani, Louis Lindley, Giacomo Tognasso, M. Paz Galupo, and Antonio Prunas. 2021. "being talked to like i was a sex toy, like being transgender was simply for the enjoyment of someone else":
Fetishization and sexualization of transgender and nonbinary individuals. *Archives of Sexual Behavior*, 50(3):897–911.
Hritik Bansal, Da Yin, Masoud Monajatipoor, and KaiWei Chang. 2022. How well can text-to-image generative models understand ethical natural language interventions?
Soumya Barikeri, Anne Lauscher, Ivan Vulic, and Goran ´
Glavaš. 2021. RedditBias: A real-world resource for bias evaluation and debiasing of conversational language models. In *Proceedings of the 59th Annual* Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1941–1955, Online. Association for Computational Linguistics.
Solon Barocas, Kate Crawford, Aaron Shapiro, and Hanna Wallach. 2017. The problem with bias: from allocative to representational harms in machine learning. special interest group for computing. *Information and Society (SIGCIS)*, 2.
Garfield Benjamin. 2021. What we do with data: a performative critique of data "collection". Internet Policy Review, 10(4).
Federico Bianchi, Pratyusha Kalluri, Esin Durmus, Faisal Ladhak, Myra Cheng, Debora Nozza, Tatsunori Hashimoto, Dan Jurafsky, James Zou, and Aylin Caliskan. 2022. Easily accessible text-to-image generation amplifies demographic stereotypes at large scale. (arXiv:2211.03759). ArXiv:2211.03759
[cs].
Thomas J Billard. 2019. (no) shame in the game: The influence of pornography viewing on attitudes toward transgender people. Communication research reports, 36(1):45–56.
Abeba Birhane, Pratyusha Kalluri, Dallas Card, William Agnew, Ravit Dotan, and Michelle Bao. 2022. The values encoded in machine learning research. In 2022 ACM Conference on Fairness, Accountability, and Transparency, page 173–184, Seoul Republic of Korea. ACM.
Roland Bleiker, David Campbell, Emma Hutchison, and Xzarina Nicholson. 2013. The visual dehumanisation of refugees. *Australian Journal of Political Science*,
48(4):398–416.
Tolga Bolukbasi, Kai-Wei Chang, James Zou, Venkatesh Saligrama, and Adam Kalai. 2016. Man is to computer programmer as woman is to homemaker?
debiasing word embeddings. *arXiv:1607.06520 [cs,* stat]. ArXiv: 1607.06520.
Manuel Brack, Patrick Schramowski, Felix Friedrich, Dominik Hintersdorf, and Kristian Kersting. 2022.
The stable artist: Steering semantics in diffusion latent space.
Stephanie Brandl, Ruixiang Cui, and Anders Søgaard.
2022. How conservative are language models? adapting to the introduction of gender-neutral pronouns.
In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 3624–3630, Seattle, United States. Association for Computational Linguistics.
Mary Bucholtz and Kira Hall. 2004. *Language and* Identity, page 369–394. John Wiley & Sons, Incorporated.
Yang Trista Cao and Hal Daumé III. 2020. Toward gender-inclusive coreference resolution. In *Proceedings of the 58th Annual Meeting of the Association for* Computational Linguistics, page 4568–4595, Online. Association for Computational Linguistics.
Jaemin Cho, Abhay Zala, and Mohit Bansal. 2022. Dalleval: Probing the reasoning skills and social biases of text-to-image generative transformers. *arXiv preprint* arXiv:2202.04053.
Boris Dayma, Suraj Patil, Pedro Cuenca, Khalid Saifullah, Tanishq Abraham, Phúc Lê Khac, Luke Melas, and Ritobrata Ghosh. 2021. Dall·e mini.
R DePalma and E Atkinson. 2006. The sound of silence: Talking about sexual orientation and schooling.
6(4):333–349.
Sunipa Dev, Masoud Monajatipoor, Anaelia Ovalle, Arjun Subramonian, Jeff Phillips, and Kai-Wei Chang.
2021. Harms of gender exclusivity and challenges in non-binary representation in language technologies.
In *Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing*, page 1968–1994, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Wei Guo and Aylin Caliskan. 2021. Detecting emergent intersectional biases: Contextualized word embeddings contain a distribution of human-like biases. In Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society, page 122–133, Virtual Event USA. ACM.
Nick Haslam. 2006. Dehumanization: An integrative review. *Personality and Social Psychology Review*,
10(3):252–264.
HRC. 2022. An epidemic of violence 2022.
Alex Hughes. 2022. Dall-e mini: Creator explains blurred faces, going viral and the future of the project.
Johanna Kantola, Anna Elomäki, Barbara Gaweda, Cherry Miller, Petra Ahrens, and Valentine Berthet.
2022. "it's like shouting to a brick wall": Normative whiteness and racism in the european parliament.
American Political Science Review, page 1–16.
Will Knight. 2022. Dall-e mini is the internet's favorite ai meme machine.
Anne Lauscher, Archie Crowley, and Dirk Hovy. 2022.
Welcome to the modern world of pronouns: Identityinclusive natural language processing beyond gender.
arXiv:2202.11923 [cs]. ArXiv: 2202.11923.
Calvin A. Liang, Sean A. Munson, and Julie A.
Kientz. 2021. Embracing four tensions in humancomputer interaction research with marginalized people. *ACM Transactions on Computer-Human Interaction*, 28(2):1–47.
Rachel Minkin and Anna Brown. 2021. Rising shares of u.s. adults know someone who is transgender or goes by gender-neutral pronouns.
Rebecca Qian, Candace Ross, Jude Fernandes, Eric Smith, Douwe Kiela, and Adina Williams. 2022. Perturbation augmentation for fairer nlp. *arXiv preprint* arXiv:2205.12586.
Alexander Robertson, Walid Magdy, and Sharon Goldwater. 2021. Black or white but never neutral: How readers perceive identity from yellow or skin-toned emoji. Proceedings of the ACM on Human-Computer Interaction, 5(CSCW2):1–23.
Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. 2022. Highresolution image synthesis with latent diffusion models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR),
pages 10684–10695.
Joni Salminen, Soon-gyo Jung, Shammur Chowdhury, and Bernard J. Jansen. 2020. Analyzing demographic bias in artificially generated facial pictures. In Extended Abstracts of the 2020 CHI Conference on Human Factors in Computing Systems, CHI EA '20, page 1–8, New York, NY, USA. Association for Computing Machinery.
Piyush Sharma, Nan Ding, Sebastian Goodman, and Radu Soricut. 2018. Conceptual captions: A cleaned, hypernymed, image alt-text dataset for automatic image captioning. In *Proceedings of ACL*.
Hong Shen, Alicia DeVos, Motahhare Eslami, and Kenneth Holstein. 2021. Everyday algorithm auditing: Understanding the power of everyday users in surfacing harmful algorithmic behaviors. *Proceedings of the ACM on Human-Computer Interaction*,
5(CSCW2):1–29. ArXiv:2105.02980 [cs].
Emily Sheng, Kai-Wei Chang, Premkumar Natarajan, and Nanyun Peng. 2019a. The woman worked as a babysitter: On biases in language generation.
In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), page 3405–3410, Hong Kong, China. Association for Computational Linguistics.
Emily Sheng, Kai-Wei Chang, Premkumar Natarajan, and Nanyun Peng. 2019b. The woman worked as a babysitter: On biases in language generation. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3407–
3412, Hong Kong, China. Association for Computational Linguistics.
Treavian Simmons. 2018. Gender isn't a haircut: How representation of nonbinary people of color requires more than white androgyny.
Lukas Struppek, Dominik Hintersdorf, and Kristian Kersting. 2022. The biased artist: Exploiting cultural biases via homoglyphs in text-guided image generation models. *arXiv preprint arXiv:2209.08891*.
Ruchika Tulshyan and Jodi-Ann Burey. 2022. Stop telling women they have imposter syndrome.
Vic Valentine. 2016. *Non-binary people's experiences* in the UK.
Catherine Yeo and Alyssa Chen. 2020. Defining and evaluating fair natural language generation. In *Proceedings of the The Fourth Widening Natural Language Processing Workshop*, pages 107–109, Seattle, USA. Association for Computational Linguistics.
## A Annotation Task A.1 Term Selection
We include *trans, transgender* to capture both binary and non-binary transgender identities. We include *enby, nonbinary, gender non-conforming,*
genderqueer, queer as the five most common nonbinary identities (other than trans and transgender),
according to the 2022 Gender Census12 (an annual survey conducted online by a nonbinary activist).
We include *two-spirit, latinx* in order to expand our focus to identities used exclusively by people of colour.
For binary identities, we combined the trans status word with *woman, man, person* and with the pronoun sets *she/her, he/him, they/them*, respectively. For nonbinary identities, we used the term *person*, with the pronouns *she/her, he/him,* they/them (it is common for nonbinary people to use both gendered and gender-neutral pronouns13
(Dev et al., 2021)). For *two-spirit* we also combined the term with *woman, man* as we found extensive evidence online of individuals identifying as two-spirit(ed) women or men14.
For the nonbinary identities, except *latinx* and two-spirit we also used the pronouns *it/it* and xe/xem which were the next two most common pronoun sets in the Gender Census15. We exclude latinx,two-spirit for a number of reasons: they are not well represented in the Gender Census so we felt the findings did not apply; we found no evidence of widespread use of these pronouns in either community; we felt using a potentially dehumanising pronoun such as it to refer to a marginalised community we did not belong to, without evidence of community use, could be harmful.
## A.2 Annotation Scheme Development
We anticipate there will be less training data for non-cisgender identities and so the images will be of a poorer quality in terms of photorealism; as such we ask annotators to rate photorealism on a 4 point scale from "totally photorealistic" to "No photorealistic parts". This will also capture if images are cartoonlike: our initial experiments indicated many non-cisgender identity prompts returned cartoon-like images, which could contribute to the belief such identities are not real (Valentine, 2016; Minkin and Brown, 2021).
12https://www.gendercensus.com/results/
2022-worldwide/
13https://www.gendercensus.com/results/
2022-worldwide/
14see for example https://www.
nativeyouthsexualhealth.com/
two-spirit-mentors-support-circle 15https://www.gendercensus.com/results/
2022-worldwide/
We ask whether an individual is present - we argue identities being depicted without a human may contribute to the "mechanistic dehumanisation"
(portrayal as nonhuman and inanimate) (Haslam, 2006) of these communities.
We also ask how many humans are present. Our early experimentation on queer identities showed certain identities were often represented by the presence of two individuals.
Representations including people's faces with recognisable features are more "personal" and humanising (Bleiker et al., 2013) so we ask if a face is clearly depicted.
We anticipate that due to a lack of multiply marginalised identities being represented in the training data, most of the training data for noncisgender identities that are not specifically associated with an ethnic minority (e.g. *latinx, two-spirit*)
will be images of white individuals, and as such the models will produce mostly white figures. We ask annotators to indicate if a person of colour is present. This will be a highly subjective judgement, but we felt it was important to attempt to capture this dimension. We wished to measure whether the systems were producing clearly diverse output with regards to subject ethnicity, so we record only if the human annotator is confident that someone non-white was present. Humans tend to interpret ambiguous input as the default or norm - for example, yellow emojis are interpreted as having white ethnicity (Robertson et al., 2021).
Due to the fetishization of transgender and nonbinary individuals (Anzani et al., 2021) and the proliferation of "trans porn" online (Billard, 2019),
we anticipate there will be more sexual content for prompts including non-cisgender identity terms, and so we ask annotators to indicate implied nudity. Whilst nudity is not inherently sexual, we feel greater implied nudity is a reasonable proxy for sexualisation of non-cisgender identities, and we supplement a quantitative analysis with examination of a sample of images.
Our initial experiments showed queer identities were often represented by flags instead of or in combination with people. We ask annotators to indicate if something resembling a flag is present and to describe it. Similarly, we found symbols represented often in the queer images. We also anticipate that images of non-normative identities may often be labelled, resulting in text in the image.
Both of these relate to the idea of non-normative identities being marked - that is to say, their deviation from the norm is indicated explicitly (Bucholtz and Hall, 2004). We combine these two concepts as often it is hard to distinguish computer generated letters from symbols.
One could argue a difference between cisgender and non-cisgender identity predictions as being an indicator of bias. However, we must also consider whether certain outcomes are desirable at all, even if equal e.g. should the model produce any images with implied nudity of either cisgender or non-cisgender individuals.
The annotation interface (built in the Amazon Turk sandbox) is depicted in Figure 3.
## A.3 Example Output
Comparing Figures 4 and 5, there is a stark difference in the amount of nudity in response to two prompts that differ only by the word "transgender". In general we found "transgender" to elicit a lot of
(partial) nudity for the Stable Diffusion model.
Also noteworthy is the absence of people of colour in both images. The model reflects the
(Western) norm of whiteness.
Figure 6 demonstrates a number of "failures" –
the figures rendered seem subhuman, and the model interprets Two-spirit to mean two individuals.
Figure 7 demonstrates a lack of diversity: only transgender women with features typically regarded as "masculine", such as a muscular frame or facial hair, are depicted. All the women are white. This is despite significant efforts by OpenAI to diversify DALL·E 2's output with regards to ethnicity.
## B Survey B.1 Demographic Information Q: What Is Your Age? Please Answer In Years.
Responses: range 19-57, mode 25, mean 30.
## Q1: What Is Your Gender Identity?
Options: male, female, nonbinary, genderqueer, third-gender, genderfluid, gender non-conforming, pangender, two-spirit, agender, questioning, prefer not to answer, other.
Note: A transcription error resulted in the options "male, female" in place of "man, woman" from Dev et al. (2021). Typically "male, female" are more associated with "biological sex" than
"man, woman" which may have influenced respondents' answers, although the question explicitly asked about gender.
| Gender | % of total responses |
|-----------------------------|------------------------|
| Male | 2.9% |
| Female | 22.9% |
| Nonbinary | 71.4% |
| Genderqueer | 20% |
| Genderfluid | 8.6% |
| Gender non-conforming | 14.3% |
| Agender | 17.1% |
| Questioning | 11.4% |
| Prefer not to answer | 2.9% |
| Other - "trans" | 2.9% |
| Other - "I'm also intersex" | 2.9% |
| Other - "Woman" | 2.9% |
Table 2: Table of selected gender identities. Respondents could select multiple gender terms.
| Sexual orientation | % of total responses |
|--------------------------------------|------------------------|
| Lesbian | 17.1% |
| Gay | 8.6% |
| Bisexual | 34.3% |
| Asexual | 5.7% |
| Pansexual | 17.1% |
| Queer | 42.9% |
| Straight | 2.9% |
| Prefer not to answer | 2.9% |
| Other - "i try not to label myself " | 2.9% |
| Other - "Bottom" | 2.9% |
Table 3: Table of selected sexual orientations. Respondents could select multiple terms.
## Responses Given In Table 2. Q2: What Is Your Sexual Orientation?
Options: lesbian, gay, bisexual, asexual, pansexual, queer, straight, questioning, prefer not to answer, other.
Responses given in Table 3
## Q3: What Pronouns Do You Use?
Options: he/him, they/them, she/her, xe/xem, e/em, ze/hir, any pronouns, I don't use pronouns, I am questioning my pronouns, prefer not to answer, other.
Responses given in Table 4 Q4: Are you trans?
Options: yes, no, I am questioning my gender, prefer not to answer.
Responses given in Table 5 Q5: In a few words, how would you describe your ethnicity? Options: text response
![14_image_0.png](14_image_0.png)
Submit
![14_image_1.png](14_image_1.png)
Figure 3: Images demonstrating the annotation interface before and after (above, below) "Do you see at least one individual" has been selected. For the commercial prompts annotators were additionally asked whether the image was relevant to the template.
![15_image_1.png](15_image_1.png)
Table 4: Table of selected pronouns. Respondents could select multiple terms.
| Pronoun set | % of total responses |
|------------------------------|-------|
| He/him | 17.1% |
| They/them | 68.6% |
| She/her | 34.3% |
| E/em | 2.9% |
| Any pronouns | 11.4% |
| I am questioning my pronouns | 17.1% |
| Other - "Elle/le" | 2.9% |
| Other - "Ey/Em" | 2.9% |
| Other - "xey/xem" | 2.9% |
| Other - "fae/faer" | 2.9% |
The majority of respondents (26) described themselves as explicitly white or Caucasian. Four named a European origin (none of these identified as Black, Latinx and/or Indigenous or as a person of colour); as white is the norm in Europe (Kantola et al., 2022), this suggests 30 of our 35 participants are white/Caucasian.
Q6: Are you Black, Latinx and/or Indigenous ?
Options: yes, no, prefer not to answer. Responses given in Table 6 Q7: Are you a person of color?
Options: yes, no, prefer not to answer.
Notes: Not all respondents who identified as Black, Latinx and/or Indigenous also identified as a person of colour and vice versa.
![15_image_0.png](15_image_0.png)
Response **% of total responses**
Yes 85.7%
No 2.9% I am questioning my gender 5.7% Prefer not to answer 5.7%
Table 5: Table of responses about trans status.
Responses given in Table 7
## Q8: What Is/Are Your Native Language(S)?
Options: text response The vast majority of participants (27) had English as a native language. Other native languages include German, French, and BSL. Q9: Which country do you live in now?
Options: text response Responses are summarised in Table 8. The vast majority of participants (34) are from Western countries, namely North America, Europe or Australia.
Q10: Briefly, how would you describe your occupation?
Options: text response Table 6: Table of responses to question about identifying as Black, Latinx and/or Indigenous.
| Response | % of total responses |
|------------|------------------------|
| Yes | 8.6% |
| No | 91.4% |
![16_image_1.png](16_image_1.png)
| Response | % of total responses |
|------------|------------------------|
| Yes | 8.6% |
| No | 91.4% |
Table 7: Table of responses to question about identifying as a person of colour.
Ten respondents described themselves as students. The next most common occupation was software engineer. Other occupations include photographer, creative professional, UX designer and therapist, suggesting we were able to capture the diverse perspectives of those working outside the field but with an interest in AI.
Q11: Briefly, how would you describe your familiarity with AI?
Options: text response The majority of respondents referenced work or education as being the source of their familiarity, though some named an interest in the topic for example as a "science magazine reader". One re-
| Region | % of total responses |
|-----------------|------------------------|
| US | 31.4% |
| UK | 34.3% |
| Europe excl. UK | 22.9% |
| Canada | 5.7% |
| Australia | 2.9% |
| Colombia | 2.9% |
Table 8: Table of responses to question about current country of residence.
![16_image_0.png](16_image_0.png)
spondent answered none but rated themselves as 2/5 in terms of familiarity with AI.
Q12: How would you rate your familiarity with AI?
Options: Likert scale 1-5 from "Very little knowledge" to "Expertise (I work in AI)".
Responses are summarised in Figure 8. All respondents considered themselves to have greater than "very little knowledge". The mean rating was 3.8.
How would you rate your familiarity with AI?
![16_image_2.png](16_image_2.png)
![16_image_3.png](16_image_3.png)
## B.2 Potential For Harm. Q13: Have You Tried Out One Of These Systems
before, including during this survey?
Options: yes, no The vast majority of respondents (28) answered yes.
Q14: Can you think of scenarios where use of
| Context | % of total responses |
|-----------------------------------|-------|
| Education | 91.4% |
| Art/creativity | 85.7% |
| Marketing | 94.3% |
| Architecture/ real estate/ design | 37.1% |
| Research | 71.4% |
text-to-image models could have undesirable outcomes for non-cisgender people, due to their application in the above or other use cases?
Options: yes, no The overwhelming majority of respondents (33)
answered yes.
Q15: Please select in which of these use cases harms might occur.
Options: education, arts/creativity, marketing, architecture/ real estate/ design, research, other.
Notes: These options are derived from DALL·E
2 documentation detailing possible future commercial use of the model. A flaw in the study design meant this question was mandatory even for those who answered "no" to the previous question. Of the two participants who answered no, one wrote
"none" in the "other" option and the other selected Education, but neither provided a description of a scenario (below).
Responses are summarised in Table 9. Two respondents provided "other" contexts of use - one referenced religious and political channels, and the philosophical, psychological and sociological fields, and the other wrote that they were concerned about the "reinforcement of heteronomativity in any context". The majority of respondents could imagine harm in each of the contexts except "Architecture/ real estate/ design". In particular respondents were concerned about "Marketing", "Education" and "Art/creativity" (over 3/4 of respondents felt harm might occur in these contexts).
Q16: Please select how severe you think these harms might be.
Options: Likert scale 1-5 from "No impact on lives" to "Significantly hinders lives".
Responses are summarised in Figure 9. The average rating was 3.3. Almost all participants (33)
felt that the harms would have some impact on non-cisgender individuals' lives.
![17_image_0.png](17_image_0.png)
## Q17: Please Describe A Specific Scenario(S) Where Harm Might Occur Against Non-Cisgender People. Options: Text Response
In contrast to Dev et al. (2021) we do not ask survey participants to distinguish between representational and allocational harms, in order to reduce their work load. We label which category of harm they describe, whether it relates to how a group is represented or which services a group has access to, or both. We also identify which use cases are relevant to the harm they mention, again to reduce work load. Using a deductive-inductive approach, we also develop codes and establish themes based on the responses. One author was lead coder, developing the codebook of 17 codes. Both the lead coder and a second author applied this codebook to the responses. The coders refined the codebook through discussion, leading to a final inter-coder reliability of κ = 0.74. Themes were identified by the lead coder and discussed and finalised through discussion between all authors.
Loosely reflecting the responses to Q15, the contexts of use mentioned by respondents were education, art/creativity, marketing, and less frequently research. A high number of representational harms were identified, and very few allocational harms.
A prominent theme was the potential impact on real world behaviours and beliefs that content produced by the models might have. Frequently, respondents spoke of the output not just reflecting but *reinforcing* stereotypes and prejudices. Some felt the tools could create new beauty standards and lead to emotional harm.
Several respondents expressed concern about intentionally abusive use of these systems. They felt they might be used to create propaganda or transphobic material, or the training data needed to create a trans recognition system. Explicit references to unintentional harms were far outnumbered by these examples.
A number of respondents explicitly referenced the role that training data played in bringing about harm, reflecting the knowledge of our respondents.
## B.3 Proposed Solutions
Respondents were asked to rate on a likert scale of 1-7 ("Extremely dissatisfied (I would not like to see this solution implemented)" to "Extremely satisfied
(I would like to see this solution implemented)"). A
rating of 4 indicates neither satisfied or dissatisfied.
They were also invited to optionally respond to the question "Can you foresee any potential harms or benefits to this solution?" for each one.
Solution 1: The model generates an image based on the text (no change to current behaviour.)
Responses are given in Figure 10. Most respondents were unsatisfied with this "solution" (to change nothing), with a mode of 3 and a mean of 3.5 (both below 4). However the spread of responses indicates this is not universally disliked.
Text responses in particular highlighted concerns about stereotyping,
No change to current behaviour
![18_image_1.png](18_image_1.png)
## Solution 2: The Model Ignores The Non-Cisgender Identity Terms In The Text Input And Generates An
image based on the rest of the text.
Responses are given in Figure 11. This solution was the least popular, with a mode of 1 and a mean of 2. No respondents were clearly satisfied with this solution. Many respondents wrote this would lead to erasure and othering. A respondent identified it would be hard to "keep up" with queer slang, or handle ambiguous words.
A simple heuristic like ignoring minority identity terms to avoid producing stereotyped content is clearly not satisfactory to the community.
Solution 3: The model generates an image based on the text but includes a warning that the out-
Ignore non-cisgender identity terms
![18_image_0.png](18_image_0.png)
## Put Might Be Offensive.
Responses are given in Figure 12. This solution was also fairly unpopular, with a mode of 2 and a mean of 3.0, although the bimodal results suggest some users would be slightly satisfied by this solution. Several respondents expressed that they felt this was not a "real" solution to the issue.
Some felt strongly that appending this warning to every image suggested transness itself was offensive. However, as suggested by the second "peak",
some respondents felt a warning offered an okay interim solution.
![18_image_2.png](18_image_2.png)
Warn that output might be offensive
## Solution 4: The Model Ignores All Gender Identity Terms In The Text Input And Generates An Image Based On The Rest Of The Text.
Responses are given in Figure 13. This solution was very unpopular, though less so than ignoring only non-cisgender identity terms, with a mode of 1 and a mean of 2.5. Some respondents expressed concern about the model "defaulting" to represent only a single gender rather than diverse results.
Respondents again mentioned erasure. Several respondents mentioned compromised functionality.
Some felt it would be difficult to implement.
![19_image_0.png](19_image_0.png)
Solution 5: The model is trained on additional images containing non-cisgender individuals, so it better learns to generate images of noncisgender people.
Responses are given in Figure 14. This solution was by far the most popular, with a mean of 5.3 and a mode of 7. However, as Figure 14 demonstrates, this solution is not universally popular, and in text responses respondents expressed concern about the challenge of gathering truly representative data, and the risk of reinforcing stereotypes.
Some expressed concern about the risks of gathering images of marginalised people.
Train on more gender diverse data
![19_image_2.png](19_image_2.png)
Solution 6: The model effectively ignores the non-cisgender identity terms in the text input and generates an image based on the rest of the text, but a flag or pin or symbol is used to indicate gender diversity.
Responses are given in Figure 15. This solution had a mode of 1 and a mean of 2.8, suggesting it was largely unpopular (though a small number were satisfied with this solution). Some respondents expressed that this solution had potential, because it no longer required using how a person looks to capture their identity. Others felt it was a "cop out", and some were concerned about the othering or stigmatising effect of explicitly labelling queer individuals.
![19_image_1.png](19_image_1.png)
Ignore non-cisgender identity terms but include symbol Solution 7: The model ignores the non-cisgender identity terms in the text input and generates an image based on the rest of the text, with a warning that to avoid harmful misrepresentation the model ignores non-cisgender identity terms.
Responses are given in Figure 16. This solution was largely but not universally unpopular, with a mode of 1 and a mean of 2.7. Respondents expressed a preference for ignoring the terms alongside an explicit warning over simply ignoring the terms in their text responses, but many argued the same issues of erasure and compromised functionality were at play. A few saw it as a short-term solution, but many argued it was again a "cop out".
Ignore non-cisgender identity terms but include ![19_image_3.png](19_image_3.png)
## Other Solutions
Respondents were then asked "Can you think of any other solutions to how models should handle non-cisgender identities? (Optional)". The majority (22) of respondents provided their thoughts. We conducted a qualitative analysis of these answers, using an inductive approach. One author developed the codebook of 22 codes using a "bottom-up" approach (driven by the data), which was then applied to the responses by a second author to establish inter-coder reliability, as a measure of code reliability. The lead coder established themes based on these codes and these themes were discussed and finalised between the authors. The major themes we established were the need for representative data; unhappiness with the proposed heuristics; the necessity of wider changes; community involvement; a desired ability to customise images.
One theme we established was the need for representative training data, echoing the most popular proposed solutions. Many respondents emphasised the need for additional data, others focused on the need to curate the training data to ensure "a diverse and representative set of images" (white, queer, nonbinary + gender nonconforming, 23).
A second theme that emerged was that of unhappiness with the proposes heuristics, with respondents seeing these as outright unsuitable or suitable only as temporary solutions.
A broad theme in the responses was the need for wider changes, encompassing both extensive changes to the model, as well as societal changes –
"may require uhhh fixing society generally" (white, bisexual, genderqueer + questioning, 30). Respondents all mentioned the need to improve outcomes for other marginalised identities.
Another theme to emerge was the need for community involvement - respondents discussed the general need for non-cisgender people to be involved in the development of such models, and two suggested involving non-cisgender individuals as part of a reinforcement learning approach to improve the models' representation of the community.
The final theme represents a novel solution, which is to allow for post-hoc modification of the generated images. This would mean users could tweak the gender presentation and/or include symbols and pins to signify identity.
## C Interviews C.1 Selecting Interviewees
We selected respondents who, from their survey answers, spanned a range of gender identities, sexualities, ethnicities, occupations and countries of residence, as well as a range of attitudes towards our proposed solutions. We hoped in doing so we could ensure a diversity of opinions in our interviews over and above a random selection of interviewees.
Four of the six invited responded to our request. Our interviewees were (by their own selfreporting):
A - a white 43 year old bisexual who identifies as nonbinary (in mixed groups) and either genderfluid or agender within the queer community B - a 33 year old pansexual nonbinary person, who identifies as "mixed race" (part Black and South American indigenous, and part Middle Eastern and white (Italian, Spanish)) C - a white Bulgarian, 30 year old bisexual genderqueer person D - a hispanic 38 year old agender trans nonbinary person who identifies as "borderline asexual/demisexual"
## C.2 Interview Format
Participants were first asked a number of demographic questions about your age, gender identity, sexuality and ethnicity. Whilst we had this data already from the survey, some aspects of identity are subject to change and we wanted to ensure interview data was presented with the most appropriate descriptors.
The remainder of the interview was unstructured, with the interviewer generating questions in response to participants' answers. Participants were asked about the potential harms that could occur due to text-to-image models' handling of non-cisgender identity terms, and how participants would like such identities to be handled by these models. Participants were invited to expand on any issues raised when completing the survey.
## C.3 Thematic Analysis
We conducted a qualitative analysis of these answers, using an inductive approach. The coder developed an initial codebook of 41 codes using a "bottom-up" approach, then established 7 major themes based on these codes. These themes were discussed and finalised between the authors: harmful output; being unable to use current technology; rejection of heuristics; need for community input; need for transparency and regulation; desire for authentic representation; the potential for good.
Within the theme of "harmful output", interview participants explored a range of concerns. They spoke of both unintentional harm, and deliberate weaponisation of the technology. Inaccurate representation, for example through mixing and matching of features or the enforcement of gender norms was a common topic. Participants were concerned that this misrepresentation may "set off, you know, violent stuff in the long run" (Interviewee D).
A related theme was that of being unable to use the technology in its current form: participants felt the models would not work for them easily and produce representative output as they do for cisgender people. One participant felt the technology should not be used at all.
The theme of rejecting the heuristic solutions came up in the interviews as in the survey: in particular, participants were concerned about the public associating non-cisgender identities with a offensiveness warning or maturity level label as they felt this would impact how the community is seen. Participants were also concerned about erasure due to these heuristics - " not being represented is a way to quash us right as a way to try to drive us out of existence" (Interviewee A).
As in the survey, interviewees spoke of the need for community input "at every step" (Interviewee D). They felt the greater involvement from noncisgender and other marginalised identities there were, the more representative the output would be.
One participant suggested integrating community feedback on output to capture "what that community feels is right for them" (Interviewee A). One raised the concern that these models might soon produce images "of people out of nothing without involving the people" (Interviewee C).
Another way participants suggested representation might be improved is through greater transparency and regulation. This seems particularly pertinent as several participants expressed that use of these technologies seemed inevitable. Greater transparency of training material sourcing was raised - one participant said "right now it's like we aren't acknowledge at all that humans are part of [generating training data]" (Interviewee B). Two participants were in particular concerned about the impact on artists and the need for transparency and regulation in the area of art.
A very frequent topic was a desire for authentic representation, not just of the non-cisgender community but "more representative of humanity"
(Interviewee D) in general. Participants felt the training data did not reflect the reality of diversity, for example the huge global diversity of gender expressions. One participant was concerned the models would fail to represent the "different expression of gender in the global south" (Interviewee B). Respondents referenced the challenge of authentically representing communities with few members, or communities who for social, historical and technical reasons are less photographed.
Despite a number of concerns, participants did see a potential for good in these technologies. They expressed seeing both pros and cons to the technologies - "I understand that there's difficulty there, but there is also potential there" (Interviewee A);
"a lot of the places where there's risks... I can see how this can be excited, exciting for another person to use" (Interviewee C). Participants saw the potential for image generation technology to be used to create "gender affirmative" output (Interviewee B), to perhaps create a persona "perfectly aligned with what you want" (Interviewee A). One participant said that "portraying queerness in ways that we haven't even thought of is an exciting prospect"
(Interviewee A).
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
✓ A1. Did you describe the limitations of your work?
7
✓ A2. Did you discuss any potential risks of your work?
8
✓ A3. Do the abstract and introduction summarize the paper's main claims?
1
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✓ **Did You Use Or Create Scientific Artifacts?** 3
B1. Did you cite the creators of artifacts you used?
No response.
B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
Not applicable. Left blank.
✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? 8
✓ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? 8
✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
3.1
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
3
## C ✓ **Did You Run Computational Experiments?** 3.2
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
3.2 The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance.
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
3.2
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
3.5 C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
Not applicable. Left blank.
## D ✓ **Did You Use Human Annotators (E.G., Crowdworkers) Or Research With Human Participants?** 3.3, 4, 5
✓ D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
3.3, 4.13, A.2, B, C.2
✓ D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)?
3.3, 4.1.1, 5, 7
✓ D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
3.3, 4.1.3
✓ D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
8
✓ D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
3.3 |
zhou-etal-2023-fine | Fine-grained Artificial Neurons in Audio-transformers for Disentangling Neural Auditory Encoding | https://aclanthology.org/2023.findings-acl.503 | The Wav2Vec and its variants have achieved unprecedented success in computational auditory and speech processing. Meanwhile, neural encoding studies that integrate the superb representation capability of Wav2Vec and link those representations to brain activities have provided novel insights into a fundamental question of how auditory and speech processing unfold in the human brain. Without an explicit definition, most existing studies treat each transformer encoding layer in Wav2Vec as a single artificial neuron (AN). That is, the layer-level embeddings are used to predict neural responses. However, the comprehensive layer-level embedding aggregates multiple types of contextual attention captured by multi-head self-attention (MSA) modules. Thus, the layer-level ANs lack fine-granularity for neural encoding. To address this limitation, we define the elementary units, i.e., each hidden dimension, as neuron-level ANs in Wav2Vec2.0, quantify their temporal responses, and couple those ANs with their biological-neuron (BN) counterparts in the human brain. Our experimental results demonstrated that: 1) The proposed neuron-level ANs carry meaningful neurolinguistic information; 2) Those ANs anchor to their BN signatures; 3) The AN-BN anchoring patterns are interpretable from a neurolinguistic perspective. More importantly, our results suggest an intermediate stage in both the computational representation in Wav2Vec2.0 and the cortical representation in the brain. Our study validates the fine-grained ANs in Wav2Vec2.0, which may serve as a novel and general strategy to link transformer-based deep learning models to neural responses for probing the sensory processing in the brain. |
## Fine-Grained Artificial Neurons In Audio-Transformers For Disentangling Neural Auditory Encoding
Mengyue Zhou1, Xu Liu1, David Liu2, Zihao Wu3, Zhengliang Liu3**, Lin Zhao**3 Dajiang Zhu4, Lei Guo1, Junwei Han1, Tianming Liu3, **Xintao Hu**1∗
1 School of Automation, Northwestern Polytechnical University 2 Athens Academy 3School of Computing, University of Georgia 4Department of Computer Science and Engineering,University of Texas at Arlington
{zhou_my,liu_xu}@email.nwpu.edu.cn [email protected]
{zw63397,zl18864,lin.zhao,tliu}@uga.edu [email protected]
{lguo,jhan,xhu}@nwpu.edu.cn
## Abstract
The Wav2Vec and its variants have achieved unprecedented success in computational auditory and speech processing. Meanwhile, neural encoding studies that link representations of Wav2Vec to brain activities have provided novel insights into how auditory and speech processing unfold in the human brain. Most existing neural encoding studies treat each transformer encoding layer in Wav2Vec as a single artificial neuron (AN). That is, the layerlevel embeddings are used to predict neural responses. The layer-level embedding aggregates multiple types of contextual attention captured by multi-head self-attention (MSA). Thus, the layer-level ANs lack fine-granularity for neural encoding. To address this limitation, we define the elementary units, i.e., each hidden dimension, as neuron-level ANs in Wav2Vec2.0, quantify their temporal responses, and couple those ANs with their biological-neuron (BN)
counterparts in the human brain. Our experimental results demonstrated that: 1) The proposed neuron-level ANs carry meaningful neurolinguistic information; 2) Those ANs anchor to their BN signatures; 3) The AN-BN anchoring patterns are interpretable from a neurolinguistic perspective. More importantly, our results suggest an intermediate stage in both the computational representation in Wav2Vec2.0 and the cortical representation in the brain. Our study validates the fine-grained ANs in Wav2Vec2.0, which may serve as a novel and general strategy to link transformer-based deep learning models to neural responses for probing sensory processing in the brain.
## 1 Introduction
The Wav2Vec model and its variants (Schneider et al., 2019; Baevski et al., 2020) have achieved superb performance in learning acoustic information representations and on a variety of downstream
∗The corresponding author tasks such as automatic speech recognition. Meanwhile, recent studies that link the computational representations in Wav2Vec to neural responses recorded by functional brain imaging techniques have provided novel insights into the model's interpretability and neural sensory perception of acoustic information(Li et al., 2022; Millet et al., 2022; Tuckute et al., 2022; Millet and Dunbar, 2022).
Such studies can be formulated as a general framework of brain encoding and decoding (Naselaris et al., 2011; Huth et al., 2016; Yamins and DiCarlo, 2016). In brief, a predictive model is trained to build a mapping between the computational feature representation (the feature space, referred to artificial neurons, ANs) of the input stimuli and the brain activities (the brain activity space, referred to biological neurons, BNs) evoked by the same set of stimuli. The fitness of the predictive model, also known as the "brain score", is used to infer the correspondence between specific features and the underlying brain regions.
In most existing studies that link audiotransformers to brain responses, the layer-level contextual embeddings in the transformer encoding layers are used as the feature space (Li et al.,
2022; Millet et al., 2022; Tuckute et al., 2022).
The layer-level representations aggregate multiple types of attentional relationships among the input sequences captured by multi-head self-attention
(MSA) modules (Vaswani et al., 2017). The aggregation operation results in comprehensive representations. However, these representations lack specificity. Thus, treating each encoding layer as a single AN is relatively coarse and consequently degenerates the capability of audio-transformers in brain encoding and decoding studies.
Multi-level visualizations of transformer attentions (Vig, 2019b; Clark et al., 2019; Aken et al.,
2020) may provide some inspirations to address this problem. For example, BertViz visualizes the attention at the model-level, head-level and neuronlevel(Vig, 2019a). More specifically, the neuronlevel visualization factorizes the attention score matrix in each head into a set of element-wise product matrices corresponding to the hidden dimensions. The neuron-level visualization enables computational interpretation of transformers with fine granularity. However, whether each hidden dimension can be defined as a fine-grained AN for neural encoding and decoding study is not clear.
Do those ANs carry meaningful linguistic information? Do those ANs anchor to their BN signatures in the human brain? Are the coupled AN-BN pairs interpretable from a neurolinguistic perspective?
We sought to answer these questions in this study. To this end, we propose a general framework for coupling the fine-grained ANs in Wav2Vec2.0
(Baevski et al., 2020) and the BNs in the human brain. We adopt the pre-trained Wav2Vec2.0 to embed the spoken story stimuli in the Narratives functional magnetic resonance imaging (fMRI)
dataset (Nastase et al., 2021). The temporal response of an AN is then quantified according to the element-wise product of the queries and keys.
Functional brain networks (FBNs) are identified from the fMRI data and each FBN is regarded as a single BN. Afterwards, the coupling relationship between ANs and BNs are built by maximizing the synchronizations between their temporal responses.
Our experimental results show that those finegrained ANs carry meaningful linguistic information and well synchronize to their BN signatures, and the anchored AN-BN pairs are interpretable.
More importantly, our results suggest an intermediate stage in both the computational representation in Wav2Vec2.0 and the cortical representation in the brain. The proposed fine-grained ANs may also serve as a general strategy to link transformerbased deep learning models to neural responses for probing the sensory processing in the brain.
## 2 Related Works
Features from computational models have long been used to model the feature space for exploring the auditory neural encoding. Conventional hand-crafted features that capture low-level acoustic properties (e.g., sound intensity, timbre, rhythm, pitch, and spectrograms) have been found to be closely correlated to brain responses (Potes et al.,
2012; Daube et al., 2019; Alluri et al., 2012; Cong et al., 2013; Santoro et al., 2014; Toiviainen et al.,
2014; Hu et al., 2017; Pasley et al., 2012; Leaver and Rauschecker, 2010; Berezutskaya et al., 2017; Ylipaavalniemi et al., 2009; Norman-Haignere et al., 2015). Some studies replicate similar findings for the combinations of those low-level features optimized for specific tasks such as auditory attention (Bordier et al., 2013) and melodic pitch expectations (Pearce et al., 2010).
The deep neural networks (DNNs) developed for auditory and speech processing bring new opportunities to model the feature space. The model architecture and the training objective are two basic ingredients of DNNs. Existing studies have investigated the similarity between brain responses and DNNs in different architectures including convolutional neural network (CNN) (Saddler et al., 2021; Francl and McDermott, 2022; Kell et al., 2018; Güçlü et al., 2016; Huang et al., 2018; Thompson et al., 2021), convolutional auto-encoder (CAE)
(Wang et al., 2022), generative adversarial network
(GAN) (Beguš et al., 2022), CNN followed by recurrent neural network (RNN) (Li et al., 2022; Tuckute et al., 2022; Vaidya et al., 2022; Millet and King, 2021), spiking neural networks (Khatami and Escabí, 2020), and transformers (Li et al., 2022; Millet et al., 2022; Tuckute et al., 2022; Vaidya et al., 2022). The training objectives include unsupervised, self-supervised and supervised by various tasks such as musical genre prediction, acoustic scene classification, and speech recognition.
These studies have provided fruitful insights into neural auditory encoding, model interpretation, and brain-like model development. For example, correlating the hierarchical representations derived from CNN-based models for automatic music tagging have revealed the representational gradients in the superior temporal gyrus (STG). The anterior STG (aSTG) and posterior STG (pSTG) have been shown to be more sensitive to low-level and highlevel features encoded in shallow and deep layers, respectively (Güçlü et al., 2016). By optimizing a CNN-based model for dual-task of word and music genre classification, Kell et al. showed that the bestperforming network may resemble the hierarchical organization of the human auditory cortex. That is, brain responses in the primary and non-primary auditory cortices are most well predicted by middle and late CNN layers, respectively (Kell et al., 2018). By modeling the feature space via CNNRNN-based DeepSpeech2 (Amodei et al., 2016) optimized for acoustic scene classification and speechto-text with different types of inputs (i.e., English, Dutch and Bengali), Millet et al. replicated such a hierarchy and suggested that the brain utilizes sound-generic representations in the first processing stage of its hierarchy, and then builds speechspecific representations in higher-level processing stages (Millet and King, 2021).
More recently, the transformer based on multihead self-attention (MSA) has emerged as a powerful DNN architecture to learn comprehensive contextual representations (Vaswani et al., 2017). In this context, audio-transformers such as Wav2Vec 2.0 have also been used to model the feature space
(Millet et al., 2022; Li et al., 2022; Tuckute et al.,
2022; Vaidya et al., 2022). For example, Millet et al. compared Wav2Vec 2.0 to neural activities in a large cohort, and found that the representational hierarchy of Wav2Vec 2.0 aligns with the cortical hierarchy of speech processing. More specifically, Wav2Vec2.0 learns sound-generic, speech-specific and language-specific representations that are analogous to those of the temporal and prefrontal cortices (Millet et al., 2022). Li et al. compared the representational similarity of HuBERT (Hsu et al.,
2021), Wav2Vec 2.0 (Baevski et al., 2020) and DeepSpeech2 (Amodei et al., 2016) with different training objectives to the human auditory pathway.
They showed that the representational hierarchy in the DNNs correlates well to the ascending auditory pathway, and unsupervised models achieve optimal neural correlations (Li et al., 2022). Tuckute et al. examined brain-DNN similarities within the auditory cortex for a large set of models based on various architectures and trained on different tasks.
They found that most DNNs predicted brain responses in the auditory cortex better than the filterbank models as a baseline, and the models trained on multiple tasks produced the best overall predictions. More importantly, they showed that most of the DNNs exhibited a correspondence between model stages and brain regions, for example, the neural responses in lateral, anterior and posterior non-primary auditory cortices were better predicted by deeper layers (Tuckute et al., 2022).
Despite those fruitful findings, the feature space defined in existing studies that assess the representational similarity between Wav2Vec2.0 and brain responses relies on layer-level embeddings. That is, these studies implicitly treat each layer as a single artificial neuron. Considering the heterogeneity of the attentional heads, this operation may lose the specificity of each head, which is designed to capture different types of contextual attention. As argued in the field of natural language processing
(NLP), a fine decomposition of a model's components into elementary units is among the keys for mapping computational models to their neurobiological counterparts (Hale et al., 2022; Poeppel, 2012). This demand also applies to audiotransformers. Meanwhile, our previous study has shown the validity of fine-grained ANs defined as the hidden dimensions of the pre-trained BERT
model (Liu et al., 2023). However, whether those fine-grained ANs hold similar premises in audiotransformers is unknown. Thus, the key objective of this work is to validate those fine-grained ANs in Wav2Vec 2.0 for neural encoding studies.
## 3 Methods 3.1 Synchronization Between Ans And Bns
Similar to that in our previous study (Liu et al.,
2023), the bridge that connects ANs in Wav2Vec2.0 and BNs in brain responses is defined as the synchronization between their temporal responses to the same set of external stimuli. Let F : X→Ya represent ANs, and fi(X) represent the temporal response of AN fito stimuli X. Similarly, let G : X
→ Yb represent BNs, and gj (X) denote the temporal response of BN gj to X. The best synchronized BN for an AN fiis identified according to Eq.1.
$$\operatorname{Sync}(f_{i},G)=\operatorname*{arg\,max}_{g_{j}\in G}\;\delta(f_{i},g_{j})\qquad(1)$$
where δ(·) measures the synchronization between the two responses. Similarly, the best synchronized AN for a BN giis identified according to Eq.2.
$$\mathrm{Sync}(g_{i},F)=\operatorname*{arg\,max}_{f_{i}\in G}\;\delta(g_{i},f_{j})\qquad(2)$$
In this study, we adopt the Pearson correlation coefficient (PCC) as δ(·) to measure the temporal synchronization. The ANs and BNs, as well as their temporal responses to the inputs are detailed in the following sections.
## 3.2 Ans And Their Temporal Responses
The transformer aggregates multiple attentional relationships captured by the MSA module. The attention score in a head is formulated as A = *sof tmax*(QTK/
√d) (Fig. 1a), where Q = {q1, q2, · · · , qn} is the query set, K =
7945
{k1, k2, · · · , kn} is the key set, d is the hidden dimension in a head, and n is the number of tokens in the input sequence. After removing the softmax operation for simplification, a single entry in the attention matrix is formulated as aij =
qi· kj =Pd 1 qi· ×kj (Fig. 1b), where ·× denotes element-wise product. This means that the attention matrix can be factorized into d element-wise product (EP) matrices (Fig. 1c). Each EP matrix characterizes how the query-key interactions in a hidden dimension contribute to the attention matrix. Thus, an intuitive idea is to define each hidden dimension as a single AN, which largely increases the granularity of ANs. For example, we can define NL × NH × d (e.g., 9216 in Wav2Vec2.0) ANs in audio-transformers, where NH and NH are the numbers of layers and heads, respectively.
We then quantify the temporal response of an AN. It is notable that the ANs respond to the input tokens (25ms per token with 5ms overlap) but the fMRI observes the brain in the temporal resolution of repetition time (TR, 1.5s in the Narratives fMRI
dataset). Thus, it is a prerequisite to temporally align the ANs' responses to fMRI volumes to measure the synchronization between them. To this end, the input audio stories are tokenized via the convolutional layers in Wav2vec2.0, and partitioned into subsets according to the TR. Let {t1, t2, · · · , tm}
denote the m tokens (m=75 in this study) in the j-th subset (corresponding to the j-th time point in fMRI), Q
l,h j = {q l,h 1, q l,h 2, · · · , q l,h m } and K
l,h j =
{k l,h 1
, kl,h 2
, · · · , kl,h m } denote the queries and keys in the h-th head and l-th layer in Wav2Vec2.0, respectively. The i-th dimension of the corresponding element-wise product EP*l,h,i* j ∈ Rm×m (Fig.
1c) measures how a single AN selectively responds to all the m queries and m keys. Thus, we define the response of a single AN at time point j as the mean of the entries in EP*l,h,i* j(Fig. 1d). The temporal response of an AN to the entire input sequence is derived by iterating through all the token subsets
(time points). Afterwards, it is convoluted with a canonical hemodynamic response function (HRF)
implemented in SPM1to count for compensation for hemodynamic latency in fMRI.
## 3.3 Bns And Their Temporal Responses
The human brain is intrinsically organized as a complex networked system, and brain functions essentially rely on functional interactions among 1https://www.fil.ion.ucl.ac.uk/spm/
![3_image_0.png](3_image_0.png)
functional brain networks (FBNs) (Park and Friston, 2013). Compared to the isolated voxels (an elementary structural unit in fMRI) that are used to quantify the brain activity space in most existing neural encoding studies (Tuckute et al., 2022; Millet et al., 2022; Vaidya et al., 2022), FBNs capture inter-regional functional interactions. Thus, we define each FBN as a single BN in neural recordings.
Various methods have been developed to identify FBNs in fMRI. Here, we adopt an open access model, the volumetric sparse deep belief networks
(VS-DBN) 2to identify FBNs (Dong et al., 2019).
In brief, the VS-DBN learns a set of latent variables embedded in fMRI. Each latent variable consists of voxels exhibiting similar fluctuation patterns over time and represents the spatial map of an FBN.
The VS-DBN consists of an input layer and three layers of restricted Boltzmann machines (RBMs).
It takes an fMRI volume as a feature and each time frame as a sample. The first RBM is with N visible units, where N is the number of voxels in a volume.
The number of hidden units (m) in the third RBM
determines the number of FBNs. The weights in RBMs are trained layer-wisely. The linear combination that performs successive multiplication of weights from the third to the first RBM is used to generate the global latent variables W. Each column in W represents an FBN's spatial map. The responses of a single hidden unit in the third RBM
to the entire input fMRI sequence are the corresponding time series of an FBN and are regarded as the temporal response of an FBN.
2https://github.com/QinglinDong/vsDBN
## 4 Experiments 4.1 Dataset And Preprocessing
We use the open source "Narratives" fMRI dataset
(Nastase et al., 2021) in the experiments. The "Narratives" fMRI data were acquired while human subjects listened to 27 diverse spoken stories. We select two sessions with moderate duration, the "Pie man" (Pieman) and "The Man Who Forgot Ray Bradbury" (Forgot). The Pieman is a story about a journalist writing reports of a man with supernatural abilities (duration 422s, word count 957). FMRI
data were acquired for 82 subjects (282 volumes, spatial resolution 3 × 3 × 4mm3, TR=1.5s). The Forgot is about a man confronting a gradual loss of memory (duration 837s, word count 2135). FMRI
data were acquired for 46 subjects (558 volumes, spatial resolution 2.5 × 2.5 × 2.5mm3, TR=1.5s).
The "Narratives" fMRI data were released with various preprocessed versions and we use the AFNIsmooth version. The spoken story was released with time-stamped word-level transcripts and the onset and duration of each phoneme in a word. We use this information to temporally align phonemes and fMRI volumes. In addition, we tag phonemes in the audio-story with typical categories (vowel, mixed, fricative, affricate, nasal and stop) defined previously that cover the phonetic inventory of 38 unique phonemes (Hamooni and Mueen, 2014).
## 4.2 Implementation Details
We use the pre-trained Wav2Vec2.0-base maintained by HuggingFace 3in the experiments. We partition the stories into short segments by balancing the token capacity of Wav2Vec2.0 and sentence integrity. It is notable that the story Forgot is much longer than Pieman. Thus, we crop the Forgot from the beginning to have the same number of TRs as that in Pieman to facilitate a cross-validation. As such, both spoken stories are partitioned into 25 segments (duration: 16.62±6.74s in Pieman and 15.30±3.50s in Forgot).
We train the VS-DBN model to extract FBNs for each fMRI session independently. The fMRI
volumes of multiple subjects (randomly selected 75 subjects in Pieman, and all the 46 subjects in Forgot) are aggregated as samples (20775/25668 in Pieman/Forgot). The parameters are set as follows: 512/256/128 hidden units in the 1st/2nd/3rd RBM layer, Gaussian initialization with zero-mean 3https://huggingface.co/docs/transformers/
main/en/model_doc/wav2vec2 and a standard deviation of 0.01, learning rate 0.001/0.0005/0.0005, batch-size 20, L1 weightdecay rate 0.001/0.00005/0.00005, 100 training epochs, batch normalization. In each session, the resulted FBNs in all the subjects share the same set of spatial maps but have subject-specific temporal responses. The subject-specific temporal responses are averaged over subjects to characterize the temporal responses of FBNs in population.
## 5 Results 5.1 Synchronization Between Ans And Bns
We first assess the intra-session synchronization between ANs and BNs. The distributions of the AN's maximum PCC to BNs for Pieman (Pieman-Pieman, 0.3305±0.0042) and Forgot
(Forgot-Forgot, 0.3142±0.0049) are shown in Fig.
2(a). Permutation tests with 5000 randomizations show that the PCCs are significant (p < 0.01, FDR
corrected) for 9203/9192(99.86%/99.74%) ANs in Pieman/Forgot. In both sessions, the average PCC in each layer (Fig. 2b) is relatively stable in the first ten layers but increases sharply in the last two layers, indicating that the ANs in the last two layers better synchronize to BNs. We then evaluate the inter-session synchronization between ANs and BNs, which may server as a stronger baseline control. We identify the best correlated AN in one session and the BN in the other. The inter-session PCCs are significantly (p < 10−10)
lower compared to the intra-session one in both sessions (Pieman-Forgot, 0.1912±0.0016; ForgotPieman, 0.2097±0.0032; Fig. 2a). The AN that is anchored by an BN is identified according to Eq. 2. The PCCs (0.4262±0.0027 in Pieman and 0.4199±0.0029 in Forgot) are statistically significant (p < 10−10) for all the 128 BNs in both sessions. These observations show that the temporal responses of ANs and BNs are well synchronized.
## 5.2 The Global Bn Anchored By Ans
We identify the global BN as the one that is the most frequently anchored by ANs after applying a PCC threshold of 0.25 (Fig. 3a). The spatial distributions of the global BNs in Pieman (BN\#47) and Forgot (BN\#42) are similar. They mainly encompass the Heschl's gyrus (HG) and nearby superior temporal gyrus (STG), posterior superior temporal sulcus (pSTS), posterior inferior temporal gyrus
(pITG), temporal pole (TP), temporo-parietal junction (TPJ), Broca's and Wernicke's areas in the in-
![5_image_0.png](5_image_0.png)
ferior frontal gyrus (IFG), pre-central gyrus (PrG),
and post-central gyrus (PoG) (Fig. 3b-c). These brain regions well match the cortical anatomy of the dual-stream language model (Hickok and Poeppel, 2007). The earliest stage of neural speech processing involves spectrotemporal analysis in HG
and STG, followed by phonological-level representation in STS. Subsequently, the system diverges into two streams, a dorsal steam (TPJ, IFG and sensorymotor cortex of PrG and PoG) and a ventral stream (pITG and TP) map sensory or phonological representations onto articulatory motor representations and lexical conceptual representations, respectively (Hickok and Poeppel, 2007).
Intriguingly, the ANs that synchronize with the global BN are widely distributed across layers 1-10, but predominantly located on lower layers (i.e., 1-4, Fig. 3d). We then assess the phonemic patterns of query-key pairs that those ANs selectively respond. In each of the 25 audio segments we select 1500 query-key pairs that have top values in the EP matrix corresponding to each of the ANs, and construct a 38 × 38 phoneme distribution matrix
(PDM), in which rows are queries and columns are keys. Each entry in the PDM is the proportion of the query-key pairs falling into the entry. The average PDMs over all the ANs are quite homogeneous in both sessions (Fig. 3e), showing that the global BN responds to general attentional relationships among phonemes. Meanwhile, some ANs are selective to phonemic relationships (e.g., vowelvowel, see details in section 5.4, Fig. 8). Taken together, these observations reinforce the prevalent functional interactions (Cloutman, 2013; BhayaGrossman and Chang, 2022) among the brain regions covered by the global BN and suggest that the lower layers in Wav2Vec2.0 are responsive to learn phonemic relationships.
![5_image_1.png](5_image_1.png)
## 5.3 The Local Bns In Each Layer
We then identify the frequently anchored BN in each layer (local BNs). We define the anchoring frequency of a BN in a layer as the ratio between the number of ANs it anchors in that layer and the total number of anchored AN-BN pairs in that layer. The BNs' anchoring frequency shows distinctive patterns across layers (Fig. 4). In lower layers 1-5 and upper layers 11-12, the local BNs are very sparse and limited to one or two predominant BNs. For example, the anchoring frequency of the BN\#47 is much higher compared to those of the rest in layers 1-5. Meanwhile, the local BNs in layers 6-10 are widely spread. That is, the ANs in those layers tend to anchor to different BNs. We highlight the local BNs in each layer by circles and show their spatial maps in Fig. 5 for Pieman.
The BN\#47 is referred Fig. 3(b). The anchoring frequency and the local BNs in Forgot are shown in Fig. A.1-A.2, respectively.
The global BN (BN\#47) is identified as the local BN in layers 1-5, however, its anchoring frequency decreases as the layer goes deeper (Fig. 6a). The BN\#54 encompasses the working memory network
(WM, retains short-term temporal memory) and
![6_image_0.png](6_image_0.png)
the language network (Broca's and Wernicke's areas), reflecting the functional interactions between them. It is identified as the local BNs in the intermediate layers 6-10 and its anchoring frequency increases and reaches the peak in layer 10 (Fig.
6b). The BN\#67 involves the activations in PrG
and the deactivations in PrG and HG/STG, reflecting the functional competition between them. It is identified as the local BNs in layers 7-9. Its anchoring frequency indicates that it is widely anchored by ANs in layers 1-9 but fades out sharply in the last three layers (Fig. 6c). The BN\#83, which is one of the local BNs in layer 10, exhibits activations in precuneus cortex (PcC, which is considered as part of the brain's semantic system (Binder et al., 2009)) and frontal pole (FP), as well as deactivations in PrG and PoG. The BN\#42 reflects functional interactions among the primary auditory cortex (HG/STG), the language network and visual cortex (intracalcarine cortex, IcC, and PcC) and it predominates the local BNs in layer 11. There are two local BNs in layer 12, BN\#30 and BN\#37.
The BN\#30 shows complex co-activations in the ventral and dorsal streams in speech processing, and semantic related regions including the angular gyrus (AG), posterior supramarginal gyrus (SmGp),
and lateral occipital cortex (LOCs). The BN\#37 mainly covers the ventral and dorsal streams in speech processing.
Using cumulative attention diagonality (CAD)
applied to head-level attention score, Shim et al. have shown distinctive global and local attention patterns in lower (1-8) and upper (9-16) layers in Wav2Vec2.0, respectively. The former integrates long-range attention to form phonetic localization, while the latter focuses on short-range diagonal attention for language identification (Shim et al.,
2021). We apply the same metric to the EP matrix of ANs rather than the head-level attention score.
Fig. 7 shows the average CAD over top 1% and top 2% ANs in each layer for a randomly selected segment in the two sessions. We identify a transient stage (layers 6-10) between global (layers 1-4) and local (layers 11-12) ones. Combined with the fadeout of BN\#47 (layers 1-5) and the fade-in of BN\#54
(layers 6-10) along the layer depth (Fig. 6), we suggest that there is an intermediate level between the global and local ones in Wav2Vec2.0. That is, the layers 6-10 may gradually integrate global phonetic localization encoded in the early stages of cortical speech hierarchy (BN\#47) through the functional interactions between WM and the language network (BN\#54) to form local language localization.
In addition, the good predictive performance in WM has rarely been reported in exiting neural encoding studies of Wav2Vec2.0 (Li et al., 2022; Millet et al., 2022; Tuckute et al., 2022; Vaidya et al.,
2022), which may be partly due to the relatively coarse layer-level ANs used in these studies. Thus, the fine-grained ANs defined in this study enable us to preliminarily reveal this intermediate-level representation in Wav2Vec2.0 and map it to its neurobiological counterparts.
![6_image_1.png](6_image_1.png)
Figure 6: The anchoring frequency of three local BNs
![7_image_0.png](7_image_0.png)
![7_image_2.png](7_image_2.png)
in different layers. (a) BN\#47. (b) BN\#54. (c) BN\#67.
## 5.4 Phoneme-Selective An-Bn Paris
We identify some ANs that are selective to different categories of phonemes, as shown in Fig. 8 for some examples. In each example, we show the phoneme distribution matrix (PDM) of the AN,
and the BN anchored by the AN. It is notable that the ANs in the two sessions are identical and the corresponding BNs are similar, showing good reproducibility across sessions. Brain regions including HG/STG, STS and sensorimotor areas are frequently observed in those BNs, which is partly in line with previous studies (Kim et al., 2021).
Despite some interesting findings in computational interpretation of audio-transformers (Shim et al., 2021; Yang et al., 2020), the neural basis of phoneme-selectivity in the brain is still under debate (Mesgarani et al., 2014; Gwilliams et al.,
2022; Bhaya-Grossman and Chang, 2022; Sohoglu, 2019). What we intend to convey here is that the fine-grained ANs defined in this study, applied in a neural encoding framework, may provide an alternative strategy to probe this problem.
## 6 Discussion And Conclusion
We proposed to define fine-grained artificial neurons (ANs) in the audio-transformer Wav2Vec2.0 and map them to their neurobiological counterparts. Our experimental results showed that the fine-grained ANs carried meaningful linguistic information and well synchronized to their BN sig-
![7_image_1.png](7_image_1.png)
natures. Moreover, the anchored AN-BN pairs are partly interpretable in a neurolinguistic view.
Although a comprehensive mapping of the cortical speech hierarchy is out of the scope of this study, we observed some interesting results, facilitated by the fine-grained ANs. First, the alignment between the computational hierarchy in Wav2Vec2.0 and the cortical speech hierarchy is largely in line with existing studies (Li et al., 2022; Millet et al., 2022; Tuckute et al., 2022; Vaidya et al., 2022). Second, and more importantly, we preliminarily discovered an intermediate stage in both the computational representation in Wav2Vec2.0 and the cortical representation in the brain. It gradually integrates global phonetic localization encoded in the early stages of neural speech hierarchy through the functional interactions between the working memory and language networks to form local language localization.
In comparison, a good predictive performance from computational representation in audio-transformers to brain activities has rarely been reported previously. Third, we observed phoneme-selective neural-level ANs in Wav2Vec2.0, and the associated BNs are partly in line with existing studies
(Kim et al., 2021). Thus, the fine-grained ANs defined here may potentially provide an alternative approach to explore whether there are phonemeselective neural activities in the brain.
The fine-grained ANs defined in this study may also serve as the brain-based test-bed to evaluate and interpret audio-transformers, and provide neurolinguistic support for better understanding of the role of self-attention for efficient speech computation. For example, after interpreting distinctive attentional profiles in different attention heads, Shim et al. applied a layer-wise attention map reuse strategy to improve model performance (Shim et al.,
2021). A similar but with more fine-grained strategy could further improve model performance.
In conclusion, we defined and validated neuronlevel ANs in Wav2Vec2.0. This definition may serve as a general strategy to link transformer-based deep learning models to neural responses for probing the sensory processing in the brain.
## 7 Limitation
The current study is has some limitations. First, we used a single audio-transformer model, the pretrained Wav2Vec2.0-base, as a test bed to validate the fine-grained ANs and couple them to their BN signatures. On the one hand, various audiotransformers have been proposed in the literature.
On the other hand, the parameters of a pre-trained model are fine-tuned by downstream tasks and previous studies have shown that fine-tuning may lead DNNs to increase their brain similarity (Millet and King, 2021; Tuckute et al., 2022). Thus, it would be interesting to explore whether there are consistent AN-BN coupling patterns across different models, either pre-trained or fine-tuned. In addition, it is necessary to investigate these patterns across different languages (e.g., English VS Mandarin).
Second, existing studies have shown that audiotransformers are able to learn sound-generic, speech-specific and language-specific representations and those hierarchical representations are akin to the cortex (Li et al., 2022; Millet et al., 2022; Vaidya et al., 2022). Thus, it would be interesting to explore whether the fine-grained ANs carry such multi-level representations, and link them to brain responses.
Third, the reproducibilty between the two sessions was high regarding to most of the results (e.g.,
the global BNs and the phoneme-selective AN-BN
pairs), but it was relatively low in some results
(e.g., the local ANs in some layers). We speculate that this is the consequence of relatively smaller fMRI training samples but much larger amount of VS-DBN model parameters in the session of Forgot, in which the number of subjects is smaller but the fMRI spatial resolution are higher. Higher spatial resolution results in much larger number of valid voxels (120,506) compared to that in Pieman
(50,065) and consequently more visible units in the VS-DBN model.
Last but not least, the analyses presented in this study are intrinsically limited by the coarseness of spatial (voxels in millimeters) and temporal resolution (volumes in seconds) of fMRI data. Mapping from sound to an interpretable representation involves integrating neural activities on different spatial-scales down to sub-millimeters and on different timescales down to milliseconds. Thus, it would be of great interest in the future to apply the fine-grained ANs to auditory magnetoencephalogram (MEG) dataset to disentangle the symbiosis of model computation and brain responses in both space and time (Bhaya-Grossman and Chang, 2022; Gwilliams et al., 2022).
## 8 Acknowledgements
This work was partly supported by National Key R&D Program of China (2020AAA0105701),
National Natural Science Foundation of China
(62076205, 61936007 and 61836006).
## References
Betty van Aken, Benjamin Winter, Alexander Löser, and Felix A Gers. 2020. Visbert: Hidden-state visualizations for transformers. In *Companion Proceedings* of the Web Conference 2020, pages 207–211.
Vinoo Alluri, Petri Toiviainen, Iiro P Jääskeläinen, Enrico Glerean, Mikko Sams, and Elvira Brattico. 2012.
Large-scale brain networks emerge from dynamic processing of musical timbre, key and rhythm. *Neuroimage*, 59(4):3677–3689.
Dario Amodei, Sundaram Ananthanarayanan, Rishita Anubhai, Jingliang Bai, Eric Battenberg, Carl Case, Jared Casper, Bryan Catanzaro, Qiang Cheng, Guoliang Chen, et al. 2016. Deep speech 2: End-to-end speech recognition in english and mandarin. In *International conference on machine learning*, pages 173–182. PMLR.
Alexei Baevski, Yuhao Zhou, Abdelrahman Mohamed, and Michael Auli. 2020. wav2vec 2.0: A framework for self-supervised learning of speech representations.
Advances in Neural Information Processing Systems, 33:12449–12460.
Gašper Beguš, Alan Zhou, and T Christina Zhao. 2022.
Encoding of speech in convolutional layers and the brain stem based on language experience. *bioRxiv*.
Julia Berezutskaya, Zachary V Freudenburg, Umut Güçlü, Marcel AJ van Gerven, and Nick F Ramsey. 2017. Neural tuning to low-level features of
speech throughout the perisylvian cortex. *Journal of* Neuroscience, 37(33):7906–7920.
Ilina Bhaya-Grossman and Edward F. Chang. 2022.
Speech computations of the human superior temporal gyrus. *Annual Review of Psychology*, 73(1):79–102.
Jeffrey R. Binder, Rutvik H. Desai, William W. Graves, and Lisa L. Conant. 2009. Where Is the Semantic System? A Critical Review and Meta-Analysis of 120 Functional Neuroimaging Studies. *Cerebral Cortex*, 19(12):2767–2796.
Cecile Bordier, Francesco Puja, and Emiliano Macaluso.
2013. Sensory processing during viewing of cinematographic material: Computational modeling and functional neuroimaging. *Neuroimage*, 67:213–226.
Kevin Clark, Urvashi Khandelwal, Omer Levy, and Christopher D Manning. 2019. What does bert look at? an analysis of bert's attention. *arXiv preprint* arXiv:1906.04341.
Lauren L. Cloutman. 2013. Interaction between dorsal and ventral processing streams: Where, when and how? *Brain and Language*, 127(2):251–263.
Fengyu Cong, Vinoo Alluri, Asoke K Nandi, Petri Toiviainen, Rui Fa, Basel Abu-Jamous, Liyun Gong, Bart GW Craenen, Hanna Poikonen, Minna Huotilainen, et al. 2013. Linking brain responses to naturalistic music through analysis of ongoing eeg and stimulus features. *IEEE Transactions on Multimedia*,
15(5):1060–1069.
Christoph Daube, Robin A.A. Ince, and Joachim Gross.
2019. Simple acoustic features can explain phonemebased predictions of cortical responses to speech.
Current Biology, 29(12):1924–1937.e9.
Qinglin Dong, Fangfei Ge, Qiang Ning, Yu Zhao, Jinglei Lv, Heng Huang, Jing Yuan, Xi Jiang, Dinggang Shen, and Tianming Liu. 2019. Modeling hierarchical brain networks via volumetric sparse deep belief network. *IEEE transactions on biomedical engineering*, 67(6):1739–1748.
Andrew Francl and Josh H McDermott. 2022. Deep neural network models of sound localization reveal how perception is adapted to real-world environments.
Nature Human Behaviour, 6(1):111–133.
Umut Güçlü, Jordy Thielen, Michael Hanke, and Marcel Van Gerven. 2016. Brains on beats. *Advances in* Neural Information Processing Systems, 29.
Laura Gwilliams, Jean-Remi King, Alec Marantz, and David Poeppel. 2022. Neural dynamics of phoneme sequences reveal position-invariant code for content and order. *Nature communications*, 13(1):1–14.
John T Hale, Luca Campanelli, Jixing Li, Shohini Bhattasali, Christophe Pallier, and Jonathan R Brennan.
2022. Neurocomputational models of language processing. *Annual Review of Linguistics*, 8:427–446.
Hossein Hamooni and Abdullah Mueen. 2014. Dualdomain hierarchical classification of phonetic time series. In 2014 IEEE international conference on data mining, pages 160–169. IEEE.
Gregory Hickok and David Poeppel. 2007. The cortical organization of speech processing. Nature reviews neuroscience, 8(5):393–402.
Wei-Ning Hsu, Yao-Hung Hubert Tsai, Benjamin Bolte, Ruslan Salakhutdinov, and Abdelrahman Mohamed.
2021. Hubert: How much can a bad teacher benefit asr pre-training? In *ICASSP 2021-2021 IEEE International Conference on Acoustics, Speech and Signal* Processing (ICASSP), pages 6533–6537. IEEE.
Xintao Hu, Lei Guo, Junwei Han, and Tianming Liu.
2017. Decoding power-spectral profiles from fmri brain activities during naturalistic auditory experience. *Brain imaging and behavior*, 11(1):253–263.
Nicholas Huang, Malcolm Slaney, and Mounya Elhilali.
2018. Connecting deep neural networks to physical, perceptual, and electrophysiological auditory signals.
Frontiers in neuroscience, 12:532.
Alexander G Huth, Wendy A De Heer, Thomas L Griffiths, Frédéric E Theunissen, and Jack L Gallant.
2016. Natural speech reveals the semantic maps that tile human cerebral cortex. *Nature*, 532(7600):453–
458.
Alexander JE Kell, Daniel LK Yamins, Erica N Shook, Sam V Norman-Haignere, and Josh H McDermott.
2018. A task-optimized neural network replicates human auditory behavior, predicts brain responses, and reveals a cortical processing hierarchy. *Neuron*,
98(3):630–644.
Fatemeh Khatami and Monty A Escabí. 2020. Spiking network optimized for word recognition in noise predicts auditory system hierarchy. *PLOS Computational Biology*, 16(6):e1007558.
Seung-Goo Kim, Federico De Martino, and Tobias Overath. 2021. Linguistic modulation of the neural encoding of phonemes. *bioRxiv*.
Amber M Leaver and Josef P Rauschecker. 2010. Cortical representation of natural complex sounds: effects of acoustic features and auditory object category.
Journal of Neuroscience, 30(22):7604–7612.
Yuanning Li, Gopala K Anumanchipalli, Abdelrahman Mohamed, Junfeng Lu, Jinsong Wu, and Edward F
Chang. 2022. Dissecting neural computations of the human auditory pathway using deep neural networks for speech. *bioRxiv*.
Xu Liu, Mengyue Zhou, Gaosheng Shi, Yu Du, Lin Zhao, Zihao Wu, David Liu, Tianming Liu, and Xintao Hu. 2023. Coupling artificial neurons in bert and biological neurons in the human brain. In AAAI
2023.
Nima Mesgarani, Connie Cheung, Keith Johnson, and Edward F Chang. 2014. Phonetic feature encoding in human superior temporal gyrus. *Science*,
343(6174):1006–1010.
Juliette Millet, Charlotte Caucheteux, Pierre Orhan, Yves Boubenec, Alexandre Gramfort, Ewan Dunbar, Christophe Pallier, and Jean-Remi King. 2022.
Toward a realistic model of speech processing in the brain with self-supervised learning. arXiv preprint arXiv:2206.01685.
Juliette Millet and Ewan Dunbar. 2022. Do selfsupervised speech models develop human-like perception biases? *arXiv preprint arXiv:2205.15819*.
Juliette Millet and Jean-Remi King. 2021. Inductive biases, pretraining and fine-tuning jointly account for brain responses to speech. arXiv preprint arXiv:2103.01032.
Thomas Naselaris, Kendrick N Kay, Shinji Nishimoto, and Jack L Gallant. 2011. Encoding and decoding in fmri. *Neuroimage*, 56(2):400–410.
Samuel A Nastase, Yun-Fei Liu, Hanna Hillman, Asieh Zadbood, Liat Hasenfratz, Neggin Keshavarzian, Janice Chen, Christopher J Honey, Yaara Yeshurun, Mor Regev, et al. 2021. The "narratives" fmri dataset for evaluating models of naturalistic language comprehension. *Scientific data*, 8(1):1–22.
Sam Norman-Haignere, Nancy G. Kanwisher, and McDermott Josh H. 2015. Distinct cortical pathways for music and speech revealed by hypothesis-free voxel decomposition. *Neuron*, 88(6):1281–1296.
Hae-Jeong Park and Karl Friston. 2013. Structural and functional brain networks: from connections to cognition. *Science*, 342(6158):1238411.
Brian N Pasley, Stephen V David, Nima Mesgarani, Adeen Flinker, Shihab A Shamma, Nathan E Crone, Robert T Knight, and Edward F Chang. 2012. Reconstructing speech from human auditory cortex. *PLoS*
biology, 10(1):e1001251.
Marcus T Pearce, María Herrojo Ruiz, Selina Kapasi, Geraint A Wiggins, and Joydeep Bhattacharya. 2010.
Unsupervised statistical learning underpins computational, behavioural, and neural manifestations of musical expectation. *NeuroImage*, 50(1):302–313.
David Poeppel. 2012. The maps problem and the mapping problem: two challenges for a cognitive neuroscience of speech and language. *Cognitive neuropsychology*, 29(1-2):34–55.
Cristhian Potes, Aysegul Gunduz, Peter Brunner, and Gerwin Schalk. 2012. Dynamics of electrocorticographic (ecog) activity in human temporal and frontal cortical areas during music listening. *NeuroImage*,
61(4):841–848.
Mark R Saddler, Ray Gonzalez, and Josh H McDermott.
2021. Deep neural network models reveal interplay of peripheral coding and stimulus statistics in pitch perception. *Nature communications*, 12(1):1–25.
Roberta Santoro, Michelle Moerel, Federico De Martino, Rainer Goebel, Kamil Ugurbil, Essa Yacoub, and Elia Formisano. 2014. Encoding of natural sounds at multiple spectral and temporal resolutions in the human auditory cortex. PLoS computational biology, 10(1):e1003412.
Steffen Schneider, Alexei Baevski, Ronan Collobert, and Michael Auli. 2019. wav2vec: Unsupervised pre-training for speech recognition. arXiv preprint arXiv:1904.05862.
Kyuhong Shim, Jungwook Choi, and Wonyong Sung.
2021. Understanding the role of self attention for efficient speech recognition. In *International Conference on Learning Representations*.
Ediz Sohoglu. 2019. Auditory neuroscience: sounding out the brain basis of speech perception. Current Biology, 29(12):R582–R584.
Jessica AF Thompson, Yoshua Bengio, Elia Formisano, and Marc Schönwiesner. 2021. Training neural networks to recognize speech increased their correspondence to the human auditory pathway but did not yield a shared hierarchy of acoustic features. *bioRxiv*.
Petri Toiviainen, Vinoo Alluri, Elvira Brattico, Mikkel Wallentin, and Peter Vuust. 2014. Capturing the musical brain with lasso: Dynamic decoding of musical features from fmri data. *Neuroimage*, 88:170–180.
Greta Tuckute, Jenelle Feather, Dana Boebinger, and Josh H McDermott. 2022. Many but not all deep neural network audio models capture brain responses and exhibit hierarchical region correspondence. *bioRxiv*.
Aditya R Vaidya, Shailee Jain, and Alexander G Huth.
2022. Self-supervised models of audio effectively explain human cortical responses to speech. *arXiv* preprint arXiv:2205.14252.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. *Advances in neural information processing* systems, 30.
Jesse Vig. 2019a. Bertviz: A tool for visualizing multihead self-attention in the bert model. In *ICLR Workshop: Debugging Machine Learning Models*.
Jesse Vig. 2019b. Visualizing attention in transformerbased language representation models. *arXiv* preprint arXiv:1904.02679.
Liting Wang, Huan Liu, Xin Zhang, Shijie Zhao, Lei Guo, Junwei Han, and Xintao Hu. 2022. Exploring hierarchical auditory representation via a neural encoding model. *Frontiers in neuroscience*, 16.
Daniel LK Yamins and James J DiCarlo. 2016. Using goal-driven deep learning models to understand sensory cortex. Nature neuroscience , 19(3):356–365.
Shu-wen Yang, Andy T Liu, and Hung-yi Lee. 2020.
Understanding self-attention of self-supervised audio transformers. arXiv preprint arXiv:2006.03265 .
Jarkko Ylipaavalniemi, Eerika Savia, Sanna Malinen, Riitta Hari, Ricardo Vigário, and Samuel Kaski. 2009.
Dependencies between stimuli and spatially independent fmri sources: Towards brain correlates of natural stimuli. NeuroImage , 48(1):176–185.
## Appendix A
![11_image_0.png](11_image_0.png)
Figure A.1: The anchoring frequency of BNs in each layer in Forgot. In each subplot, the x -axis is BN index and the y -axis is anchoring frequency.Circles highlight the indices of local BNs.
![11_image_1.png](11_image_1.png)
Figure A.2: The local BNs in Forgot. HG: Heschl's gyrus; STG: superior temporal gyrus; STS: superior temporal sulcus; pITG: posterior inferior temporal gyrus; terior supramarginal gyrus; OcP: occipital pole; LOC: lateral occipital cortex; AG: angular gyrus.
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
✓ A1. Did you describe the limitations of your work?
7
✓ A2. Did you discuss any potential risks of your work?
6
✓ A3. Do the abstract and introduction summarize the paper's main claims?
abstract and section 1
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✓ **Did You Use Or Create Scientific Artifacts?** 3, 4, 5
✓ B1. Did you cite the creators of artifacts you used?
3, 4, 5
✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
3, 4, 5
✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
3, 4, 5 B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
Not applicable. Left blank.
B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
Not applicable. Left blank.
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
4
## C ✓ **Did You Run Computational Experiments?** 4
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
4 The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance.
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
4, 5
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
4, 5
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
4
## D ✗ **Did You Use Human Annotators (E.G., Crowdworkers) Or Research With Human Participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
Not applicable. Left blank.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? Not applicable. Left blank.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
Not applicable. Left blank.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
Not applicable. Left blank.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
Not applicable. Left blank. |
liu-etal-2023-deeply | Deeply Coupled Cross-Modal Prompt Learning | https://aclanthology.org/2023.findings-acl.504 | Recent advancements in multimodal foundation models (e.g., CLIP) have excelled in zero-shot generalization. Prompt tuning involved in the knowledge transfer from foundation models to downstream tasks has gained significant attention recently. Existing prompt-tuning methods in cross-modal learning, however, either solely focus on language branch, or learn vision-language interaction in a shallow mechanism. In this context, we propose a Deeply coupled Cross-modal Prompt learning (DCP) method based on CLIP. DCP flexibly accommodates the interplay between vision and language with a Cross-Modal Prompt Attention (CMPA) mechanism, which enables the mutual exchange of respective representation through a well-connected multi-head attention progressively and strongly. We then conduct comprehensive few-shot learning experiments on 11 image classification datasets and analyze the robustness to domain shift as well. Thorough experimental analysis evidently demonstrates the superb few-shot generalization and compelling domain adaption capacity of a well-executed DCP. | # Deeply Coupled Cross-Modal Prompt Learning
Xuejing Liu 1, Wei Tang+ 2, Jinghui Lu 1, Rui Zhao 1 Zhaojun Guo+ 3 **Fei Tan**∗ 1 1 SenseTime Research 2 Nanjing University of Science and Technology 3 Fudan University
{liuxuejing, lujinghui1, zhaorui, tanfei}@sensetime.com [email protected], [email protected]
## Abstract
Recent advancements in multimodal foundation models (e.g., CLIP) have excelled in zeroshot generalization. Prompt tuning involved in the knowledge transfer from foundation models to downstream tasks has gained significant attention recently. Existing prompttuning methods in cross-modal learning, however, either solely focus on language branch, or learn vision-language interaction in a shallow mechanism. In this context, we propose a Deeply coupled Cross-modal Prompt learning (DCP) method based on CLIP. DCP flexibly accommodates the interplay between vision and language with a Cross-Modal Prompt Attention (CMPA) mechanism, which enables the mutual exchange of respective representation through a well-connected multi-head attention module progressively and strongly. We then conduct comprehensive few-shot learning experiments on 11 image classification datasets and analyze the robustness to domain shift as well. Thorough experimental analysis evidently demonstrates the superb few-shot generalization and compelling domain adaption capacity of a well-executed DCP. The code can be found at https://github.com/GingL/CMPA.
## 1 Introduction
Large foundation models pre-trained on web-scale image-text pairs such as CLIP (Radford et al., 2021)
and ALIGN (Jia et al., 2021) have shown promising performance on zero-shot image classification.
Research has repeatedly shown that the general knowledge learned by the foundation models can also be transferred to diverse downstream tasks, such as few-shot image classification (Zhou et al.,
2022b,a), visual grounding (Subramanian et al.,
2022), visual question answering (Liu et al., 2022)
and so on. They have exhibited a significant potential in open-vocabulary scenarios. Thus, the challenge associated with how to efficiently and effectively adapt large pre-trained models to downstream tasks has garnered increasing attention especially in low-resource training scenarios.
Directly fine-tuning the foundation model is infeasible due to the massive training parameters and the catastrophic forgetting caused by overfitting (Kirkpatrick et al., 2016). In contrast, the parameter-efficient *prompt tuning* approach explored in natural language processing has yielded significant success (Lester et al., 2021),
leading to an increased examination of this technique within the realm of multi-modality, especially in the language-branch of CLIP. For example, CoOp (Zhou et al., 2022b) and ProDA (Lu et al., 2022b) explore the vanilla few-shot learning based on CLIP by adjusting the embedding or distribution of the text prompt. CoCoOp (Zhou et al., 2022a) and ProGrad (Zhu et al., 2022) focus more on the unseen classes. They contextualize the text prompt either under the supervision of visual clues or tweak gradient direction to improve the generalization ability of the model.
The aforementioned approaches, however, only adjust the text embedding of CLIP and neglect the visual branch. The success of VPT (Jia et al., 2022)
demonstrates the effectiveness of visual prompt learning. Inspired by this work, UPT (Zang et al.,
2022) and MaPLe (Khattak et al., 2022) synergize the visual and textual prompts. Specifically, UPT
improves the few-shot learning ability by generating visual and text prompts initially. MaPLe achieves better performance in the classification of unseen classes. They uncover the underlying rationale and limitations of dual-branch prompt tuning.
Concretely, the dual-branch CLIP learns the visual and language synergy only based on contrastive learning, whereas both branches lack mutual communication at the early stage of the network. Multi-modal prompt learning techniques,
+Work was done during internship at SenseTime Research
*Corresponding author such as MaPLe and UPT, incorporate languagevision interactions of the network and achieve substantially improved performance, highlighting the significance of the cross-modal interactions. However, previous studies have leveraged languagevision interactions at a superficial level. For example, UPT generates visual and text prompts before they are fed into the corresponding encoders. MaPLe generates visual prompts conditioned on language counterparts by a mapping function. Many studies (Dosovitskiy et al., 2021; Wang et al., 2022a) have shown that neural networks, especially transformer-based models, can leverage the deep fusion of information from multiple views to improve their performance. It remains less explored in the thread of multi-modal few-shot learning. To this end, we design Deeply coupled Cross-modal Prompt learning (DCP) enhancing the language-vision interaction. Specifically, DCP is built upon CLIP, with additional text and visual prompts across multiple layers. Different from previous methods with deep prompt tuning (Jia et al., 2022; Zang et al., 2022; Khattak et al., 2022), DCP only initializes the first layer of visual and text prompt randomly. The subsequent prompts are generated by Cross-Modal Prompt Attention (CMPA) module, which elegantly integrates the prompts from the preceding cross-modal layer. CMPA is characterized with stronger connection in two folds, i.e., *Depth* and *Breadth*. 1)
Depth means that CMPA intensifies the correlation of the prompts among different layers. 2) *Breadth* refers to that CMPA amplifies the interaction between visual and language modalities. CMPA is the core module to realize the deep coupling between two modalities. Essentially, DCP empowered by CMPA amalgamates uni-branch and dual-branch multi-modal pre-training paradigms in a favorable way in an attempt to bridge the discrepancy between visual and textual knowledge without introducing too much overhead.
To conclude, the contributions of this work are as follows:
- We develop a deeply coupled cross-modal prompt learning (DCP) method with a core module cross-modal prompt attention
(CMPA). CMPA can reinforce the interaction between visual and language modals across different layers.
- We benchmark our method on 11 image classification datasets consisting of generic objects,
scenes, actions and fine-grained categories.
Our method surpasses visual prompt tuning, text prompt tuning and existing competitive multi-modal prompt tuning methods under the few-shot setting.
- We conduct experiments on domain adaptation tasks. Our method achieves comparable performance to the state-of-the-art methods, indicating the robustness of our method to domain shift.
## 2 Related Work 2.1 Vision-Language Pre-Trained Models
The advent of Transformer (Vaswani et al., 2017)
has accelerated the development of large-scale pretraining. The application of Transformer in the multi-modal is divided into two schools of thought:
one is the single-stream model, in which language and vision information are fused at the beginning and fed directly into the encoder together; the other is the dual-stream model, in which language and vision information first pass through two separate encoder modules at the beginning, and then the different modal information is fused through the cross Transformer.
At the outset, the basic architecture of some contemporaneous work is BERT. The images are detected with Faster-RCNN (Ren et al., 2015) for region features, and these image region features are fed into BERT along with text information to align the text and image information. Following the same process as BERT, these methods first pretrain and then fine-tune on the corresponding tasks.
Single-stream networks (Li et al., 2019; Alberti et al., 2019; Chen et al., 2019; Li et al., 2020; Su et al., 2020; Zhou et al., 2020; Qi et al., 2020; Lu et al., 2020) fuse information from different modalities directly through an encoder. The dual-stream models (Lu et al., 2019; Tan and Bansal, 2019) integrate different modal information through cross modal transformer. Empirically single-stream networks are more sufficient for information fusion, while dual-stream networks can be more efficient for training due to fewer training parameters. In the design of our method, we aim to combine the advantages of the single-stream and dual-stream, so as to enhance the cross-modal integration without introducing many training parameters.
Recent cross-modal large-scale pre-training models have made greater breakthroughs in training data scale and tasks by devising various model architectures and training objectives, and have achieved impressive performance in many downstream tasks. CLIP (Radford et al., 2021)
and ALIGN (Jia et al., 2021) got remarkable zero-shot results after being pre-trained on millions or billions of (image, text) pairs collected from the internet. Coca (Yu et al., 2022) combined the advantages of the contrast learning method (Radford et al., 2021) and the generative model SiMVLM (Wang et al., 2022b) by adding caption loss to the contrast loss of CLIP.
OFA (Wang et al., 2022a), Unified-IO (Lu et al.,
2022a) and Florence (Yuan et al., 2021) unified vision, language and multi-modal tasked by pretraining on both cross-modal and uni-modal data.
These methods have achieved state-of-the-art results in many downstream tasks. Some methods are dedicated to improving the performance of certain specific tasks. UniTAB (Yang et al.,
2022) focused on grounded vision-language tasks such as grounded captioning and visual grounding. GLIP (Li et al., 2022) unified object detection and phrase grounding for pre-training. Pretraining models have opened up a situation where deep learning models scale and perform in tandem, becoming a revolutionary breakthrough in artificial intelligence and deep learning.
## 2.2 Prompt Learning
For a long time, first pre-training then fine-tuning was the dominant approach to apply large foundation models to downstream tasks. However, fine-tuning for large models is inefficient and may cause catastrophic forgetting (Kirkpatrick et al.,
2016). Prompt learning is proposed to address the above problems. The prompt is usually a series of trainable parameters inserted into the input. The success of prompt learning in NLP (Lester et al.,
2021) has inspired its application in other modalities. VPT (Jia et al., 2022) is a typical successful application of prompt learning on computer vision.
Prompt learning has generated more attention and made great progress in cross-modal learning.
SoftCPT (Ding et al., 2022) and CPL (He et al.,
2022) applied prompt tuning to different vision and language tasks and outperformed single-task prompt tuning method. CoOp (Zhou et al., 2022b),
ProDA (Lu et al., 2022b) and UPT (Zang et al.,
2022) adapted prompt learning to traditional fewshot visual recognition with CLIP as the backbone.
CoCoOp (Zhou et al., 2022a), ProGrad (Zhu et al.,
2022) and MaPLe (Khattak et al., 2022) improved the classification performance of pre-trained models on novel categories by prompt learning. Different from previous methods, our approach brings stronger connection between modalities and layers with proposed cross-modal prompt attention.
The stronger interaction between vision and language enables our method to get state-of-the-art performance in the few-shot learning.
## 3 Method
In this section, we first introduce the preliminaries, including CLIP (Radford et al., 2021), CoOp (Zhou et al., 2022b) and VPT (Jia et al., 2022). Then, we describe our deeply coupled prompt learning (DCP) and detail its underlying module CMPA.
## 3.1 Preliminaries
CLIP is a dual-encoder pre-trained model which consists of a text encoder and an image encoder.
The text and image are independently encoded by the corresponding encoder, then projected to the same embedding space by a projection layer.
Specifically, the backbone of the image encoder is ResNet (He et al., 2016) (d=256) or ViT (d=512),
which can map the high-dimension image into a low-dimension embedding. The text encoder is built based on the decoder of Transformer (Vaswani et al., 2017), which is also known as GPT (Brown et al., 2020), to generate a vectorized representation for a sequence of words. The model uses a contrastive loss to align the two modalities during training stage. The training objective is to maximize the cosine similarity for the match image-text pairs and minimize the unmatched ones.
In zero-shot image recognition, the image encoder of CLIP encodes the image into a feature representation x. The input text is usually in the form of "a photo of a {class}." (*discrete prompt*),
where the "{class}" token is the name of each category. For each dataset containing K categories, a set of text prompts {wi}
K
i=1 are generated by the text encoder. The prediction probability is computed as
$$p(y\mid\mathbf{x})={\frac{\exp\left(\cos\left(\mathbf{x},\mathbf{w}_{y}\right)/\tau\right)}{\sum_{i=1}^{K}\exp\left(\cos\left(\mathbf{x},\mathbf{w}_{i}\right)/\tau\right)}},\quad{\mathrm{(1)}}$$
$$\tau{\mathrm{~is~a~temp}}$$
where τ is a temperature parameter.
CoOp adapts CLIP to downstream tasks with prompt tuning. Specifically, CoOp tries to learn
![3_image_0.png](3_image_0.png)
prompt embedding *(continuous prompt)* during few-shot training to avoid manual prompts. The prompt fed in the text encoder is designed as t = [V ]1[V ]2...[V ]M[*CLASS*], where [V ]m (m ∈ {1*, ..., M*}) is initialized with the same dimension as word embeddings. The parameters of the CLIP
model is frozen while the prompt is trainable. The prediction probability of CoOp is
$$p(y\mid\mathbf{x})={\frac{\exp\left(\cos\left(\mathbf{x},g(\mathbf{t}_{y})\right)/\tau\right)}{\sum_{i=1}^{K}\exp\left(\cos\left(\mathbf{x},g(\mathbf{t}_{i})\right)/\tau\right)}},\quad{\mathrm{(2)}}$$
## Where G(·) Denotes The Text Encoder.
VPT is an efficient and effective way to adapt large-scale Transformer models in vision with only a small amount of trainable parameters. The backbone of VPT is ViT, which is the same as the image encoder of CLIP. There are two variants of VPT: VPT-Shallow and VPT-Deep. VPT-Shallow only inserts prompts into the first layer of the Transformer. The visual prompt can be defined as p = [P]1[P]2...[P]N , where [P]n (n ∈ {1*, ..., N*})
keeps the same dimension as the image embedding.
The input of VPT-shallow is [xcls*, p, x*], where xcls is the classification token [CLS]. VPT-Deep introduces visual prompts at every Transformer layer.
The deep VPT can be formulated as
$$\left[\mathbf{x}_{cls}^{i},\ldots,\mathbf{x}^{i}\right]=L^{i}\left(\left[\mathbf{x}_{cls}^{i-1},\mathbf{p}^{i-1},\mathbf{x}^{i-1}\right]\right)$$ $$i=1,2,...,L\tag{3}$$ $$\mathbf{y}=\text{Head}\left(\mathbf{x}_{cls}^{L}\right),$$
where L denotes the number of Transformer layers and *Head* is the classification head. Only the prompts and classification head is learnt during training. VPT achieves impressive performance on 24 downstream recognition tasks.
## 3.2 Cross-Modal Prompt Attention
Inspired by the advance of prompt learning in vision and language, recent studies start to explore multi-modal prompt learning (Zang et al., 2022; Khattak et al., 2022). These methods update the visual and text prompt simultaneously to achieve balance in the learning of visual and text embedding. Although the visual and text embedding are adapted to the few-shot data, the interaction between visual and text is still insufficient. Hence we propose deeply coupled cross-modal prompt learning (DCP), which can enhance the communication between prompts across different layers and modalities. The essential module of DCP is cross-modal prompt attention, which fuses visual and text with multi-head cross-modal attention. Figure 1 depicts the pipeline of DCP and the detailed architecture of cross-modal prompt attention (CMPA).
Our method follows the implementation of CLIP,
which is also a dual-encoder model. Differently, we add prompts to every branch, and enable information fusion between vision and language during training through CMPA. Specifically, CMPA is a multi-head attention with visual and text prompts as inputs. The language prompts of the first layer are initialized with the pre-trained CLIP word embeddings of the template 'a photo of a <class>',
whereas the visual prompts inserted into the first layer are randomly initialized from a normal distribution. Then, the prompts of the next layer are generated by CMPA based on the prompts from the preceding layer. Formally, CMPA can be formulated as
$$\mathbf{P}_{t}^{l+1}=\mathrm{softmax}\left({\frac{P_{v}^{l}(P_{t}^{l})^{T}}{\sqrt{d_{k}}}}\right)P_{t}^{l}\qquad\qquad(4)$$ $$\mathbf{P}_{v}^{l+1}=\mathrm{softmax}\left({\frac{P_{t}^{l}(P_{v}^{l})^{T}}{\sqrt{d_{k}}}}\right)Pv^{l}\qquad\qquad(5)$$ $$l=1,2,...,N-1,\qquad\qquad\qquad(6)$$
where P
l t and P
lv denote the text prompt and visual prompt the the l layer of each encoder, respectively.
N is the depth of CMPA, which is smaller than the length of text and visual encoder. dk is the dimension of keys.
Different from previous methods, only the prompts from the first layer are randomly generated.
The subsequent prompts condition on the prompts from both visual and language modal. CMPA enables information communication between vision and text through corresponding prompts. Totally, CMPA brings stronger feature fusion from two aspects: layers and modalities. Note that CMPA
shares parameters from different layers, and the additional trainable parameters is only in a small amount.
## 4 Experiments
In this section, we conduct experiments to evaluate the effectiveness of our method under two settings.
One is few-shot visual recognition including 11 different datasets covering generic objects, scenes, actions and fine-grained categories. The other is domain adaptation, where we train our model on ImageNet and evaluate it on other four datasets.
## 4.1 Few-Shot Learning 4.1.1 Datasets
Following CoOp (Lester et al., 2021), we evaluate our method on 11 public visual recognition datasets: ImageNet (Deng et al., 2009), Caltech101 (Fei-Fei et al., 2004), OxfordPets (Parkhi et al., 2012), StanfordCars (Krause et al., 2013), Flowers102 (Nilsback and Zisserman, 2008),
Food101 (Bossard et al., 2014), FGVCAircraft (Maji et al., 2013), SUN397 (Xiao et al.,
2010), DTD (Cimpoi et al., 2014), EuroSAT (Helber et al., 2019) and UCF101 (Soomro et al., 2012).
We also use the same 1, 2, 4, 8 and 16 shots as CoOp for training and the full test set for evaluation purpose. The reported results are the average over three runs with different random seeds.
## 4.1.2 Implementation Details
We use the pre-trained ViT-B/16 CLIP model as our backbone. The length of prompt tokens for visual and textual context are both 16. The prompt depth is 9 as a trade-off between accuracy and training efficiency. We set the batch-size to 4 with a learning rate of 0.0035 via SGD optimizer.
We use 20 epochs for most datasets, except ImageNet, SUN397 and Food101. Also, 5-epoch setting works for diverse shots of Food101, 1/2/4shot of ImageNet, and 1/2-shot of SUN397, respectively.
## 4.1.3 Main Results
Baseline Methods. We compare our method with the original zero-shot CLIP, text prompt learning
(CoOp), visual prompt learning (VPT) and multimodal prompt learning (MaPLe), which all have ViT-B/16 as visual backbone. Basically, we follow the implementation of MaPLe (Khattak et al.,
2022). The prompt length of CoOp is set to 16.
VPT uses a prompt length of 8 and the visual and text prompt length of MaPLe is 2. The training epoch of CoOp is defined as 10, and that of VPT
and MaPLe is 5. We use the deep variant of VPT in few-shot experiments. The prompt depth of MaPLe is 9 as their original setting.
Performance Analysis. Figure 2 demonstrates our results comparison with other methods. The top left sub-figure shows the average performance of four methods. We can have the following findings. 1) Overall, cross-modal prompt learning (DCP and MaPLe) gets a large performance gain compared with single-modal prompt learning methods (VPT and CoOp). VPT and CoOp achieve comparable performance on different shots.
These results demonstrate the superiority of crossmodal prompt learning over uni-modal prompt learning. 2) Although both belong to multi-modal prompt learning methods, our method still outperforms MaPLe on 1/2/4/8/16 shots settings by 1.72/3.18/3.19/2.20/2.76(%). MaPLe utilized a linear layer to generate visual prompts from text prompts. Our proposed DCP enhances the interaction between vision and language with a cross-
![5_image_0.png](5_image_0.png)
![5_image_1.png](5_image_1.png)
modal prompt attention, which can not only guide visual embedding learning through text prompts, but also influence the language embedding with visual prompts. 3) Compared with 2/4/8/16 shots, our approach achieves a lower performance gain on one shot. We can also find that on separate datasets, our method achieves the best performance in almost all 16-shot cases (except for Food101).
This phenomenon indicates that our method is more effective in cases where the number of shots is relatively large. This is probably because the alignment between different modals is more challenging due to the small number of samples per category.
For individual datasets, we find that our approach has significant performance improvements on Flowers102, StanfordCars, FGVCAircraft, and EuroSAT. However, on the datasets of general categories such as ImageNet and Caltech101, our method does not achieve satisfactory performance when the number of shots is less than 16. We can
![6_image_0.png](6_image_0.png)
![6_image_1.png](6_image_1.png)
conclude that our method is more robust for finegrained classification datasets, and we need more shots for general category classification. On the dataset of Food101, our method performs slightly lower than MaPLe. We also find that all methods underperform zero-shot on 1-shot setting. We suppose this phenomenon comes from the noisy training data of Food101 (Bossard et al., 2014).
## 4.1.4 Ablation Study
The are two important settings in CMPA: the feature fusion method in different prompts and parameter sharing of CMPA across different layers. We conduct corresponding ablation experiments in this section to find the optimal setting.
Feature Fusion in Prompts. Before the visual and text prompts are fed into the CMPA, the dimension of the batch size is supposed to be consistent.
| Variant | 2 | 4 | 6 | 8 | 16 |
|-----------|-------|-------|-------|-------|-------|
| w/ PS | 68.99 | 72.56 | 75.69 | 78.42 | 80.55 |
| w/o PS | 67.42 | 71.34 | 75.27 | 78.49 | 80.53 |
The defined batch size only affects visual prompt while the batch size of text prompts is actually the number of the dataset due to the implementation of CLIP. The dimension transformation of visual and text prompts is shown in Figure 3. The batch size of text prompt is actually the number of categories in the dataset. We experiment with three settings to align the batch size of visual and text prompts. Figure 4 reports the average accuracy over three runs on different shots (1/2/4/8/16) of 10 datasets (without ImageNet for time efficiency).
'Avg' means that we use the average of visual and text prompts across the dimension of batch. 'Max' stands for using the features with the highest response across the batch dimension as the visual and text prompt. 'First' represents that we select the first embedding across the batch dimension of visual and text prompts to feed into CMPA. Overall, the 'avg' setting of feature fusion can achieve better performance compared with 'max' and 'first'.
Parameter Sharing. We intend to learn as few parameters as possible to achieve a transfer of largescale pre-trained models in downstream tasks. Setting the prompt depth to 9 means that there are 9 CMPA modules, which greatly increases the number of trainable parameters for the model. Hence we conduct the experiment in which the parameters of CMPA are shared across different layers. Table 1 shows the average results of different shots on 11 datasets. 'PS' is short for 'parameter sharing'. It can be observed that on most shots (except for 8 shots) the performance of parameter sharing is higher than non-sharing setting.
## 4.2 Domain Generalization
After prompt tuning on specific datasets, we do not want to lose the general knowledge of the pretrained large model. In this section, we conduct domain adaptation experiments to evaluate the generalization ability of our model DCP.
| Method | Source | Target | Average | OOD Average | | | |
|------------|----------|----------|-----------|---------------|-------|-------|-------|
| ImageNet | -V2 | -S | -A | -R | | | |
| CLIP | 66.73 | 60.83 | 46.15 | 47.77 | 73.96 | 59.09 | 57.18 |
| CoOp | 71.53 | 64.20 | 47.99 | 49.71 | 75.21 | 61.73 | 59.28 |
| CoCoOp | 71.02 | 64.07 | 48.75 | 50.63 | 76.18 | 62.13 | 59.91 |
| VPT-Deep | 70.57 | 63.67 | 47.66 | 43.85 | 74.42 | 60.03 | 57.40 |
| MaPLe | 71.02 | 64.07 | 49.15 | 50.90 | 76.98 | 62.42 | 60.28 |
| UPT | 72.63 | 64.35 | 48.66 | 50.66 | 76.24 | 62.51 | 59.98 |
| DCP (ours) | 71.53 | 64.50 | 48.77 | 49.40 | 76.50 | 62.14 | 59.79 |
## 4.2.1 Datasets And Implementation Details
Following (Zhou et al., 2022b), we use ImageNet (Deng et al., 2009) as source domain, and ImageNet V2 (Recht et al., 2019),
ImageNet-Sketch (Wang et al., 2019), ImageNetA (Hendrycks et al., 2021b) and ImageNetR (Hendrycks et al., 2021a) as target domains. We train our model on the 16 shots of ImageNet, and test it on other four datasets. Different from the settings in few-shot task, the training epoch on 16shot ImageNet in cross domain task is set to 5. We also decrease the prompt length to 8.
## 4.2.2 Main Results
Table 2 compares our method DCP with other prompt learning methods on cross-domain tasks.
The compared methods include zero-shot CLIP,
unimodal prompt learning methods (CoOp, CoCoOp and VPT-Deep) and multi-modal prompt learning methods (MaPLe and UPT). The best results on different datasets are in bold, and the second best results are underlined. We can observe that 1) prompt learning does not corrupt the generalization ability of pre-trained large models; 2)
multi-modal prompt learning methods outperform unimodal prompt learning methods in generalization performance; 3) our method can get comparable performance as the state-of-the-art methods.
## 5 Discussion And Conclusion
This paper proposes a deeply coupled cross-modal prompt learning method, with a core module crossmodal prompt attention. Our method focuses on optimizing the interaction across different models and layers to address the alignment between vision and language. Experiments on few-shot image classification and domain adaptation evidence that our method can transfer the general knowledge learned by pre-trained foundation models to downstream tasks without penalty of the original generalization ability. Our method provides a strong baseline on few-shot image classification. The deep fusion between visual and language information may enable our approach to have greater potential for complex cross-modal tasks, such as referring expression comprehension (Subramanian et al., 2022), image retrieval (Baldrati et al., 2022) and visual question answering (Liu et al., 2022). We will apply our method to such complicated cross-modal tasks to evaluate its effectiveness in our future work.
![7_image_0.png](7_image_0.png)
## 6 Limitations
We discover that for datasets with a relatively large number of categories, our method requires a more delicate setting of epoch under different shots. Figure 5 shows the average results on Sun397 and ImageNet of different epochs. It can be observed that for datasets with a large number of categories
(such as Sun397 and ImageNet), as the number of shots decreases, the performance deteriorates with an increase in the number of epochs, which is not evident on the datasets with a small number of categories. We will delve further into this problem to find the reason and solution.
## 7 Acknowledgement
We would like to thank anonymous reviewers for their insightful comments to help improve the paper. This publication has emanated from research conducted with the support of SenseTime Research and Hetao Shenzhen-Hong Kong Science and Technology Innovation Cooperation Zone
(HZQB-KCZYZ-2021045.
## References
Chris Alberti, Jeffrey Ling, Michael Collins, and David Reitter. 2019. Fusion of detected objects in text for visual question answering. In *Proceedings of* the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, EMNLP-IJCNLP 2019, Hong Kong, China, November 3-7, 2019, pages 2131–2140. Association for Computational Linguistics.
Alberto Baldrati, Marco Bertini, Tiberio Uricchio, and Alberto Del Bimbo. 2022. Effective conditioned and composed image retrieval combining clip-based features. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2022, New Orleans, LA, USA, June 18-24, 2022, pages 21434–21442.
IEEE.
Lukas Bossard, Matthieu Guillaumin, and Luc Van Gool.
2014. Food-101 - mining discriminative components with random forests. In *Computer Vision - ECCV*
2014 - 13th European Conference, Zurich, Switzerland, September 6-12, 2014, Proceedings, Part VI,
volume 8694 of *Lecture Notes in Computer Science*,
pages 446–461. Springer.
Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei.
2020. Language models are few-shot learners. In Advances in Neural Information Processing Systems 33:
Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual.
Yen-Chun Chen, Linjie Li, Licheng Yu, Ahmed El Kholy, Faisal Ahmed, Zhe Gan, Yu Cheng, and Jingjing Liu. 2019. UNITER: learning universal image-text representations. *CoRR*, abs/1909.11740.
Mircea Cimpoi, Subhransu Maji, Iasonas Kokkinos, Sammy Mohamed, and Andrea Vedaldi. 2014. Describing textures in the wild. In 2014 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2014, Columbus, OH, USA, June 23-28, 2014, pages 3606–3613. IEEE Computer Society.
Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. 2009. Imagenet: A large-scale hierarchical image database. In 2009 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2009), 20-25 June 2009, Miami, Florida, USA, pages 248–255. IEEE Computer Society.
Kun Ding, Ying Wang, Pengzhang Liu, Qiang Yu, Haojian Zhang, Shiming Xiang, and Chunhong Pan. 2022.
Prompt tuning with soft context sharing for visionlanguage models. *CoRR*, abs/2208.13474.
Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, and Neil Houlsby. 2021. An image is worth 16x16 words: Transformers for image recognition at scale. In *9th International Conference* on Learning Representations, ICLR 2021, Virtual Event, Austria, May 3-7, 2021. OpenReview.net.
Li Fei-Fei, Rob Fergus, and Pietro Perona. 2004. Learning generative visual models from few training examples: An incremental bayesian approach tested on 101 object categories. In *IEEE Conference on* Computer Vision and Pattern Recognition Workshops, CVPR Workshops 2004, Washington, DC, USA, June 27 - July 2, 2004, page 178. IEEE Computer Society.
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep residual learning for image recognition. In 2016 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2016, Las Vegas, NV, USA, June 27-30, 2016, pages 770–778. IEEE
Computer Society.
Xuehai He, Diji Yang, Weixi Feng, Tsu-Jui Fu, Arjun R.
Akula, Varun Jampani, Pradyumna Narayana, Sugato Basu, William Yang Wang, and Xin Eric Wang. 2022.
CPL: counterfactual prompt learning for vision and language models. *CoRR*, abs/2210.10362.
Patrick Helber, Benjamin Bischke, Andreas Dengel, and Damian Borth. 2019. Eurosat: A novel dataset and deep learning benchmark for land use and land cover classification. *IEEE J. Sel. Top. Appl. Earth Obs.*
Remote. Sens., 12(7):2217–2226.
Dan Hendrycks, Steven Basart, Norman Mu, Saurav Kadavath, Frank Wang, Evan Dorundo, Rahul Desai, Tyler Zhu, Samyak Parajuli, Mike Guo, Dawn Song, Jacob Steinhardt, and Justin Gilmer. 2021a. The many faces of robustness: A critical analysis of outof-distribution generalization. In 2021 IEEE/CVF
International Conference on Computer Vision, ICCV
2021, Montreal, QC, Canada, October 10-17, 2021, pages 8320–8329. IEEE.
Dan Hendrycks, Kevin Zhao, Steven Basart, Jacob Steinhardt, and Dawn Song. 2021b. Natural adversarial examples. In *IEEE Conference on Computer Vision* and Pattern Recognition, CVPR 2021, virtual, June 19-25, 2021, pages 15262–15271. Computer Vision Foundation / IEEE.
Chao Jia, Yinfei Yang, Ye Xia, Yi-Ting Chen, Zarana Parekh, Hieu Pham, Quoc V. Le, Yun-Hsuan Sung, Zhen Li, and Tom Duerig. 2021. Scaling up visual and vision-language representation learning with noisy text supervision. In *Proceedings of the 38th International Conference on Machine Learning, ICML*
2021, 18-24 July 2021, Virtual Event, volume 139 of Proceedings of Machine Learning Research, pages 4904–4916. PMLR.
Menglin Jia, Luming Tang, Bor-Chun Chen, Claire Cardie, Serge J. Belongie, Bharath Hariharan, and Ser-Nam Lim. 2022. Visual prompt tuning. In *Computer Vision - ECCV 2022 - 17th European Conference, Tel Aviv, Israel, October 23-27, 2022, Proceedings, Part XXXIII*, volume 13693 of Lecture Notes in Computer Science, pages 709–727. Springer.
Muhammad Uzair Khattak, Hanoona Abdul Rasheed, Muhammad Maaz, Salman Khan, and Fahad Shahbaz Khan. 2022. Maple: Multi-modal prompt learning.
CoRR, abs/2210.03117.
James Kirkpatrick, Razvan Pascanu, Neil C. Rabinowitz, Joel Veness, Guillaume Desjardins, Andrei A.
Rusu, Kieran Milan, John Quan, Tiago Ramalho, Agnieszka Grabska-Barwinska, Demis Hassabis, Claudia Clopath, Dharshan Kumaran, and Raia Hadsell.
2016. Overcoming catastrophic forgetting in neural networks. *CoRR*, abs/1612.00796.
Jonathan Krause, Michael Stark, Jia Deng, and Li FeiFei. 2013. 3d object representations for fine-grained categorization. In *2013 IEEE International Conference on Computer Vision Workshops, ICCV Workshops 2013, Sydney, Australia, December 1-8, 2013*,
pages 554–561. IEEE Computer Society.
Brian Lester, Rami Al-Rfou, and Noah Constant. 2021.
The power of scale for parameter-efficient prompt tuning. In *Proceedings of the 2021 Conference on* Empirical Methods in Natural Language Processing, EMNLP 2021, Virtual Event / Punta Cana, Dominican Republic, 7-11 November, 2021, pages 3045–
3059. Association for Computational Linguistics.
Gen Li, Nan Duan, Yuejian Fang, Ming Gong, and Daxin Jiang. 2020. Unicoder-vl: A universal encoder for vision and language by cross-modal pre-training.
In *The Thirty-Fourth AAAI Conference on Artificial* Intelligence, AAAI 2020, The Thirty-Second Innovative Applications of Artificial Intelligence Conference, IAAI 2020, The Tenth AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2020, New York, NY, USA, February 7-12, 2020, pages 11336–11344. AAAI Press.
Liunian Harold Li, Mark Yatskar, Da Yin, Cho-Jui Hsieh, and Kai-Wei Chang. 2019. Visualbert: A simple and performant baseline for vision and language.
CoRR, abs/1908.03557.
Liunian Harold Li, Pengchuan Zhang, Haotian Zhang, Jianwei Yang, Chunyuan Li, Yiwu Zhong, Lijuan Wang, Lu Yuan, Lei Zhang, Jenq-Neng Hwang, KaiWei Chang, and Jianfeng Gao. 2022. Grounded language-image pre-training. In *IEEE/CVF Conference on Computer Vision and Pattern Recognition,*
CVPR 2022, New Orleans, LA, USA, June 18-24, 2022, pages 10955–10965. IEEE.
Yuhang Liu, Wei Wei, Daowan Peng, and Feida Zhu.
2022. Declaration-based prompt tuning for visual question answering. In Proceedings of the ThirtyFirst International Joint Conference on Artificial Intelligence, IJCAI 2022, Vienna, Austria, 23-29 July 2022, pages 3264–3270. ijcai.org.
Jiasen Lu, Dhruv Batra, Devi Parikh, and Stefan Lee.
2019. Vilbert: Pretraining task-agnostic visiolinguistic representations for vision-and-language tasks. In Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, December 814, 2019, Vancouver, BC, Canada, pages 13–23.
Jiasen Lu, Christopher Clark, Rowan Zellers, Roozbeh Mottaghi, and Aniruddha Kembhavi. 2022a. Unifiedio: A unified model for vision, language, and multimodal tasks. *CoRR*, abs/2206.08916.
Jiasen Lu, Vedanuj Goswami, Marcus Rohrbach, Devi Parikh, and Stefan Lee. 2020. 12-in-1: Multi-task vision and language representation learning. In 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2020, Seattle, WA, USA,
June 13-19, 2020, pages 10434–10443. Computer Vision Foundation / IEEE.
Yuning Lu, Jianzhuang Liu, Yonggang Zhang, Yajing Liu, and Xinmei Tian. 2022b. Prompt distribution learning. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2022, New Orleans, LA, USA, June 18-24, 2022, pages 5196–
5205. IEEE.
Subhransu Maji, Esa Rahtu, Juho Kannala, Matthew B.
Blaschko, and Andrea Vedaldi. 2013. Finegrained visual classification of aircraft. *CoRR*,
abs/1306.5151.
Maria-Elena Nilsback and Andrew Zisserman. 2008.
Automated flower classification over a large number of classes. In Sixth Indian Conference on Computer Vision, Graphics & Image Processing, ICVGIP 2008, Bhubaneswar, India, 16-19 December 2008, pages 722–729. IEEE Computer Society.
Omkar M. Parkhi, Andrea Vedaldi, Andrew Zisserman, and C. V. Jawahar. 2012. Cats and dogs. In *2012* IEEE Conference on Computer Vision and Pattern Recognition, Providence, RI, USA, June 16-21, 2012, pages 3498–3505. IEEE Computer Society.
Di Qi, Lin Su, Jia Song, Edward Cui, Taroon Bharti, and Arun Sacheti. 2020. Imagebert: Cross-modal pre-training with large-scale weak-supervised imagetext data. *CoRR*, abs/2001.07966.
Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, and Ilya Sutskever. 2021. Learning transferable visual models from natural language supervision. In *Proceedings of the 38th International* Conference on Machine Learning, ICML 2021, 18-24 July 2021, Virtual Event, volume 139 of Proceedings of Machine Learning Research, pages 8748–8763.
PMLR.
Benjamin Recht, Rebecca Roelofs, Ludwig Schmidt, and Vaishaal Shankar. 2019. Do imagenet classifiers generalize to imagenet? In *Proceedings of the* 36th International Conference on Machine Learning, ICML 2019, 9-15 June 2019, Long Beach, California, USA, volume 97 of Proceedings of Machine Learning Research, pages 5389–5400. PMLR.
Shaoqing Ren, Kaiming He, Ross B. Girshick, and Jian Sun. 2015. Faster R-CNN: towards real-time object detection with region proposal networks. In Advances in Neural Information Processing Systems 28:
Annual Conference on Neural Information Processing Systems 2015, December 7-12, 2015, Montreal, Quebec, Canada, pages 91–99.
Khurram Soomro, Amir Roshan Zamir, and Mubarak Shah. 2012. UCF101: A dataset of 101 human actions classes from videos in the wild. *CoRR*,
abs/1212.0402.
Weijie Su, Xizhou Zhu, Yue Cao, Bin Li, Lewei Lu, Furu Wei, and Jifeng Dai. 2020. VL-BERT: pretraining of generic visual-linguistic representations.
In *8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April* 26-30, 2020. OpenReview.net.
Sanjay Subramanian, William Merrill, Trevor Darrell, Matt Gardner, Sameer Singh, and Anna Rohrbach.
2022. Reclip: A strong zero-shot baseline for referring expression comprehension. In *Proceedings* of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers),
ACL 2022, Dublin, Ireland, May 22-27, 2022, pages 5198–5215. Association for Computational Linguistics.
Hao Tan and Mohit Bansal. 2019. LXMERT: learning cross-modality encoder representations from transformers. In *Proceedings of the 2019 Conference on* Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, EMNLP-IJCNLP 2019, Hong Kong, China, November 3-7, 2019, pages 5099–
5110. Association for Computational Linguistics.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In *Advances in Neural Information Processing Systems 30: Annual Conference on Neural* Information Processing Systems 2017, December 4-9, 2017, Long Beach, CA, USA, pages 5998–6008.
Haohan Wang, Songwei Ge, Zachary C. Lipton, and Eric P. Xing. 2019. Learning robust global representations by penalizing local predictive power. In *Advances in Neural Information Processing Systems 32:*
Annual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, December 8-14, 2019, Vancouver, BC, Canada, pages 10506–10518.
Peng Wang, An Yang, Rui Men, Junyang Lin, Shuai Bai, Zhikang Li, Jianxin Ma, Chang Zhou, Jingren Zhou, and Hongxia Yang. 2022a. OFA: unifying architectures, tasks, and modalities through a simple sequence-to-sequence learning framework. In International Conference on Machine Learning, ICML
2022, 17-23 July 2022, Baltimore, Maryland, USA, volume 162 of Proceedings of Machine Learning Research, pages 23318–23340. PMLR.
Zirui Wang, Jiahui Yu, Adams Wei Yu, Zihang Dai, Yulia Tsvetkov, and Yuan Cao. 2022b. Simvlm: Simple visual language model pretraining with weak supervision. In The Tenth International Conference on Learning Representations, ICLR 2022, Virtual Event, April 25-29, 2022. OpenReview.net.
Jianxiong Xiao, James Hays, Krista A. Ehinger, Aude Oliva, and Antonio Torralba. 2010. SUN database:
Large-scale scene recognition from abbey to zoo. In The Twenty-Third IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2010, San Francisco, CA, USA, 13-18 June 2010, pages 3485–
3492. IEEE Computer Society.
Zhengyuan Yang, Zhe Gan, Jianfeng Wang, Xiaowei Hu, Faisal Ahmed, Zicheng Liu, Yumao Lu, and Lijuan Wang. 2022. Unitab: Unifying text and box outputs for grounded vision-language modeling. In *Computer Vision - ECCV 2022 - 17th European Conference, Tel Aviv, Israel, October 23-27, 2022, Proceedings, Part XXXVI*, volume 13696 of *Lecture Notes in* Computer Science, pages 521–539. Springer.
Jiahui Yu, Zirui Wang, Vijay Vasudevan, Legg Yeung, Mojtaba Seyedhosseini, and Yonghui Wu. 2022.
Coca: Contrastive captioners are image-text foundation models. *CoRR*, abs/2205.01917.
Lu Yuan, Dongdong Chen, Yi-Ling Chen, Noel Codella, Xiyang Dai, Jianfeng Gao, Houdong Hu, Xuedong Huang, Boxin Li, Chunyuan Li, Ce Liu, Mengchen Liu, Zicheng Liu, Yumao Lu, Yu Shi, Lijuan Wang, Jianfeng Wang, Bin Xiao, Zhen Xiao, Jianwei Yang, Michael Zeng, Luowei Zhou, and Pengchuan Zhang.
2021. Florence: A new foundation model for computer vision. *CoRR*, abs/2111.11432.
Yuhang Zang, Wei Li, Kaiyang Zhou, Chen Huang, and Chen Change Loy. 2022. Unified vision and language prompt learning. *CoRR*, abs/2210.07225.
Kaiyang Zhou, Jingkang Yang, Chen Change Loy, and Ziwei Liu. 2022a. Conditional prompt learning for vision-language models. In *IEEE/CVF Conference* on Computer Vision and Pattern Recognition, CVPR
2022, New Orleans, LA, USA, June 18-24, 2022, pages 16795–16804. IEEE.
Kaiyang Zhou, Jingkang Yang, Chen Change Loy, and Ziwei Liu. 2022b. Learning to prompt for visionlanguage models. *Int. J. Comput. Vis.*, 130(9):2337–
2348.
Luowei Zhou, Hamid Palangi, Lei Zhang, Houdong Hu, Jason J. Corso, and Jianfeng Gao. 2020. Unified vision-language pre-training for image captioning and VQA. In *The Thirty-Fourth AAAI Conference on* Artificial Intelligence, AAAI 2020, The Thirty-Second Innovative Applications of Artificial Intelligence Conference, IAAI 2020, The Tenth AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI
2020, New York, NY, USA, February 7-12, 2020, pages 13041–13049. AAAI Press.
Beier Zhu, Yulei Niu, Yucheng Han, Yue Wu, and Hanwang Zhang. 2022. Prompt-aligned gradient for prompt tuning. *CoRR*, abs/2205.14865.
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
✓ A1. Did you describe the limitations of your work?
Section 6
✓ A2. Did you discuss any potential risks of your work?
Section 6
✓ A3. Do the abstract and introduction summarize the paper's main claims?
Section 1
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✗ **Did You Use Or Create Scientific Artifacts?**
Left blank.
B1. Did you cite the creators of artifacts you used?
No response.
B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
No response.
B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
No response.
B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
No response.
B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
No response.
B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
No response.
## C ✓ **Did You Run Computational Experiments?** Section 4
✗ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
Left blank.
The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance.
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? section 4.1.2
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
section 4.1.1
✗ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
Left blank.
D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
No response.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? No response.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
No response.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
No response.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
No response. |
bao-etal-2023-opinion | Opinion Tree Parsing for Aspect-based Sentiment Analysis | https://aclanthology.org/2023.findings-acl.505 | Extracting sentiment elements using pre-trained generative models has recently led to large improvements in aspect-based sentiment analysis benchmarks. These models avoid explicit modeling of structure between sentiment elements, which are succinct yet lack desirable properties such as structure well-formedness guarantees or built-in elements alignments. In this study, we propose an opinion tree parsing model, aiming to parse all the sentiment elements from an opinion tree, which can explicitly reveal a more comprehensive and complete aspect-level sentiment structure. In particular, we first introduce a novel context-free opinion grammar to normalize the sentiment structure. We then employ a neural chart-based opinion tree parser to fully explore the correlations among sentiment elements and parse them in the opinion tree form. Extensive experiments show the superiority of our proposed model and the capacity of the opinion tree parser with the proposed context-free opinion grammar. More importantly, our model is much faster than previous models. |
## Opinion Tree Parsing For Aspect-Based Sentiment Analysis
Xiaoyi Bao1, Xiaotong Jiang1**, Zhongqing Wang**1∗
, Yue Zhang2,3**, and Guodong Zhou**1 1Natural Language Processing Lab, Soochow University, Suzhou, China 2School of Engineering, Westlake University 3Institute of Advanced Technology, Westlake Institute for Advanced Study [email protected],[email protected] [email protected]
{wangzq,gdzhou}@suda.edu.cn
## Abstract
Extracting sentiment elements using pretrained generative models has recently led to large improvements in aspect-based sentiment analysis benchmarks. However, these models always need large-scale computing resources, and they also ignore explicit modeling of structure between sentiment elements. To address these challenges, we propose an opinion tree parsing model, aiming to parse all the sentiment elements from an opinion tree, which is much faster, and can explicitly reveal a more comprehensive and complete aspect-level sentiment structure. In particular, we first introduce a novel context-free opinion grammar to normalize the opinion tree structure. We then employ a neural chart-based opinion tree parser to fully explore the correlations among sentiment elements and parse them into an opinion tree structure. Extensive experiments show the superiority of our proposed model and the capacity of the opinion tree parser with the proposed context-free opinion grammar. More importantly, the results also prove that our model is much faster than previous models. Our code can be found in https://github.com/
HoraceXIaoyiBao/OTP4ABSA-ACL2023.
## 1 Introduction
Aspect-based sentiment analysis (ABSA) has drawn increasing attention in the community, which includes four subtasks: aspect term extraction, opinion term extraction, aspect term category classification and aspect-level sentiment classification. The first two subtasks aim to extract the aspect term and the opinion term appearing in one sentence. The goals of the remaining two subtasks are to detect the category and sentiment polarity towards the extracted aspect term.
Previously, most ABSA tasks are formulated as either sequence-level (Qiu et al., 2011; Peng et al.,
2020; Cai et al., 2021) or token-level classification
∗ Corresponding author
![0_image_0.png](0_image_0.png)
problems (Tang et al., 2016). However, these methods usually suffer severely from error propagation because the overall prediction performance hinges on the accuracy of every step (Peng et al., 2020).
Therefore, recent studies tackle the ABSA problem with a unified generative approach. For example, they treat the class index (Yan et al., 2021) or the desired sentiment element sequence (Zhang et al.,
2021b,a) as the target of generation model. More recently, Bao et al. (2022) addresses the importance of correlations among sentiment elements (e.g., aspect term, opinion term), and proposes an opinion tree generation model, which aims to jointly detect all sentiment elements in a tree structure.
The major weakness of generative approaches is the training and inference efficiency, they always need large-scale computing resources. In addition, these generative approaches lack certain desirable properties. There are no structural guarantees of structure well-formedness, i.e. the model may predict strings that can not be decoded into valid opinion trees, and post-processing is required. Furthermore, predicting linearizations ignores the implicit alignments among sentiment elements, which provide a strong inductive bias.
As shown in Figure 1, we convert all the sentiment elements into an opinion tree and design a neural chart-based *opinion tree parser* to address these shortcomings. The opinion tree parser is much simpler and faster than generative models.
It scores each span independently and performs a global search over all possible trees to find the highest-score opinion tree (Kitaev and Klein, 2018; Kitaev et al., 2019). It explicitly models tree structural constraints through span-based searching and yield alignments by construction, thus guaranteeing tree structure well-formedness.
One challenge to the above is that not all the review texts contain standard sentiment quadruplets
(i.e., aspect term, opinion term, aspect category, and polarity) which can be easily formed in an opinion tree (Bao et al., 2022). For example, there may be more than one opinion term correlated with an aspect term and vice versa. In addition, aspect or opinion terms might be implicit. According to our statistics, such irregular situations appear in more than half of review texts. In this study, we propose a novel *context-free opinion grammar* to tackle these challenges. The grammar is generalized and well-designed, it is used to normalize the sentiment elements into a comprehensive and complete opinion tree. Furthermore, it contains four kinds of conditional rules, i.e., one-to-many, mono-implicit, bi-implicit, cross-mapping, which are used to solve the irregular situations in opinion tree parsing.
The detailed evaluation shows that our model significantly advances the state-of-the-art performance on several benchmark datasets. In addition, the empirical studies also indicate that the proposed opinion tree parser with context-free opinion grammar is more effective in capturing the sentiment structure than generative models. More importantly, our model is much faster than previous models.
## 2 Related Work
As a complex and challenging task, aspect-based sentiment analysis (ABSA) consists of numerous sub-tasks. The researches on ABSA generally follow a route from handling single sub-task to complex compositions of them. The fundamental subtasks focus on the prediction of a single sentiment element, such as extracting the aspect term (Qiu et al., 2011; Tang et al., 2016; Wang et al., 2021),
detecting the mentioned aspect category (Bu et al., 2021; Hu et al., 2019), and predicting the sentiment polarity for a given aspect (Tang et al., 2016; Chen et al., 2022a; Liu et al., 2021; Seoh et al., 2021; Zhang et al., 2022).
Since the sentiment elements are natural correlated, many studies focus on exploring the joint extraction of pairwise sentiment elements, including aspect and opinion term extraction (Xu et al.,
2020; Li et al., 2022); aspect term extraction and its polarity detection (Zhang and Qian, 2020); aspect category and polarity detection (Cai et al., 2020).
Furthermore, recent studies also employed end-toend models to extract all the sentiment elements in triplet or quadruple format (Peng et al., 2020; Wan et al., 2020; Cai et al., 2021; Zhang et al., 2021a; Chen et al., 2022b; Mukherjee et al., 2021).
More recently, studies using pre-trained encoderdecoder language models show great improvements in ABSA (Zhang et al., 2021a). They either treated the class index (Yan et al., 2021) or the desired sentiment element sequence (Zhang et al., 2021b) as the target of the generation model. in addition, Bao et al. (2022) addressed the importance of correlations among sentiment elements, and proposed an opinion tree generation model, which aims to jointly detect all sentiment elements in a tree structure. However, the generative models always need large-scale computing resources, they also cannot guarantee the structure well-formedness, and ignores the implicit alignments among sentiment elements.
In this study, we propose a novel opinion tree parser, which aims to model and parse the sentiment elements from the opinion tree structure. The proposed model shows significant advantages in both decoding efficiency and performance as it is much faster and more effective in capturing the sentiment structure than generative models. Furthermore, we design a context-free opinion grammar to normalize the opinion tree structure, and improve parser's applicability decisions for complex compounding phenomena.
## 3 Overview Of Proposed Model
Aspect-based sentiment analysis aims to extract all kinds of sentiment elements and their relations from review text. Basically, there are four kinds of sentiment elements in the review text: aspect term denotes an entity and its aspect indicating the opinion target, which is normally a word or phrase in the text; *aspect category* represents a unique predefined category for the aspect in a particular domain; *opinion term* refers the subjective statement on an aspect, which is normally a subjective word or phrase in the text; *polarity* is the prede-
![2_image_0.png](2_image_0.png)
fined semantic orientation (e.g., positive, negative, or neutral) toward the aspect.
As shown in Figure 2, we convert all the sentiment elements into an opinion tree, and we design a chart-based opinion tree parser with context-free opinion grammar to parse the opinion tree from review text. In particular, we firstly propose a contextfree opinion grammar to normalize the sentiment elements into an opinion tree. We then perform a neural chart-based opinion tree parser to parse the opinion tree structure from a given review text.
Since all the sentiment elements are normalized into the opinion tree, it is easy to recover them from the tree. In the next two sections, we will discuss the context-free opinion grammar and the opinion tree parser in detail.
## 4 Context-Free Opinion Grammar
In this study, we propose a novel context-free opinion grammar to normalize the opinion tree structure.
In the below of this section, we first introduce basic definitions of context-free opinion grammar. After that, we give some conditional rules to solve irregular situations and show some examples to illustrate the effectiveness of proposed grammar.
## 4.1 Basic Definitions
A context-free opinion grammar (CFOG) is a tuple G = (N, Σ*, P, S*), where N and Σ are finite, disjoint sets of non-terminal and terminal symbols, respectively, Table 1 gives the notation of nonterminals. S ∈ N is the start symbol and P is a finite set of rules. Each rule has the form A → α, where A ∈ N, α ∈ V∗
I
and VI = N ∪ Σ.
The top of Figure 2 gives an example of opinion parsing tree. Each terminal in the tree is either an irrelevant word or a sentiment element like aspect
![2_image_1.png](2_image_1.png)
![2_image_2.png](2_image_2.png)
W Word
or opinion term. Each non-terminal combines terminals or non-terminals to create a sub-tree of sentiment elements. In order to make the description as clear as possible, we begin with the basic rules allowed by our grammar:
S → I Q I // S → irrelevant content,quad,irrelevant content Q → A I O | O I A | ϵ // quad → (aspect,opinion) or (opinion,aspect)
Q → Q I Q // multiple quads A → C // aspect → category C → AT // category → aspect term O → P // opinion → polarity P → OT // polarity → opinion term AT → W // aspect term → word OT → W // opinion term → word I → W // irrelevant content → word W→ W W | ϵ W → happy | to | great | party | but | have | ...
C ⇔ Surface | Laptop | ... // C is replaced with a certain category P ⇔ Positive | Negative | Neutral // P is replaced with a certain polarity In the above notations, the rules bring out the
![3_image_1.png](3_image_1.png)
![3_image_0.png](3_image_0.png)
Restaurant General
Positive
(a) (b) (c) (d)
W W W W W W W W W W W
grammatical relations among the elements of a standard sentiment quadruplet. For example, I is used to define the irrelevant content in the review sentence, and Q is used to describe a sentiment quadruple. In addition, the components of quadruple, i.e.,
A and O, are used to denote the aspect pair (category C and aspect term AT) and opinion pair
(polarity P and opinion term OT). Since the opinion trees built under the above grammar may be too complicated, we adopt a pruning approach to reduce the duplication in the trees, detail discussion of pruning can be found in Appendix A.
## 4.2 Conditional Rules
Although the basic rules can be used to parse an opinion tree with standard quadruplets, they cannot handle irregular situations. In this subsection, we introduce conditional rules to improve rule applicability for complex compounding phenomena.
One-to-Many means that there is more than one opinion term correlated with an aspect term, and vice versa. For example, in the review sentence
"So *happy* to have a *great bar*", both opinion terms
"*happy*" and "*great*" are mapped to the same aspect term "bar". In this study, we attach successor elements to the preceding one and charge the rule A and O below for solving this situation:
// multiple aspects map to one opinion. **A $\rightarrow$ A I A** // multiple opinions map to one aspect **O $\rightarrow$ O I O**
Then, the above cause can be correctly parsed through these two new rules. The example of parsing result is shown in Figure 3(a).
Mono-Implicit means that either aspect term or opinion term is missing in the review text. Given a review sentence "Yum", only an opinion term appears in the sentence. For solving this problem, we attach the opinion to corresponding aspect node or attach the aspect to corresponding opinion node:
// implicit aspect term
// implicit aspect term **Q $\rightarrow$ C; C $\rightarrow$ O // quad $\rightarrow$category$\rightarrow$option** // implicit opinion term **Q $\rightarrow$ P; P $\rightarrow$ A // quad $\rightarrow$polarity $\rightarrow$ aspect**
An example of this solution can be found in Figure 3(b).
Bi-Implicit denotes that both the aspect term and opinion term are missing in the review text. As shown in the review sentence "Had a party here",
although we know that the authors express a positive opinion, both aspect term and opinion term do not appear in the sentence. To solve the situation, we insert two fake tokens F A and F O at the beginning of a sentence as the fake aspect and opinion term. Then, we can use standard rules to parse such sentences with implicit aspect and opinion.
Figure 3(c) gives an example of this solution.
Cross-Mapping means that there are more than one aspect category and opinion polarity on the review text, and their correlations are many-tomany. For example, in the review sentence "*Great* but *expensive* laptop", there are two categories
"Laptop General" and "Laptop Price" towards the aspect term "laptop". Meanwhile, the opinions towards these two categories are different. The author feels "great" about the "Laptop General",
but thinks the "Laptop Price" is "expensive". The solution of such situation is shown in below:
// two categories and two opinion terms towards one aspect term A → C1; C1 → C2; C2 → AT
// two categories and two opinion terms towards one opinion term O → P1; P1 → P2; P2 → OT
![4_image_0.png](4_image_0.png)
Then, we use the shortest path to detect the correlation between aspect category and opinion term.
As shown in Figure 3(d), since the distance between "Laptop General" and "great" is shorter than
" *expensive* ", we connect "Laptop General" with " *great*", and then connect "Laptop Price" with "
expensive ".
In summary, based on the basic and conditional rules, the proposed context-free opinion grammar can solve most situations in aspect-based sentiment analysis, and would help parse a comprehensive and complete opinion tree.
## 5 Opinion Tree Parser
In this study, we employ a neural chart-based opinion tree parser to parse sentiment elements from the opinion tree structure. As shown in Figure 4, the opinion tree parser follows an encoder-decoder architecture (Kitaev and Klein, 2018; Kitaev et al.,
2019; Cui et al., 2022). It scores each span independently and performs a global search over all possible trees to find the highest-score opinion tree.
In particular, the process of opinion tree parsing can be separated into two stages: context-aware encoding and chart-based decoding, we will discuss these in the below subsections.
## 5.1 **Span Scores And Context-Aware Encoding**
Given a review text X = {x1*, ..., x*n}, its corresponding opinion parse tree T is composed by a set of labeled spans:
$$T=\{(i_{t},j_{t},l_{t})\}|_{t=1}^{|T|}$$
t=1 (1)
where it and jt represent the t-th span's fencepost positions and lt represents the span label.
We use a self-attentive encoder as the scoring function s(*i, j*), and a chart decoder to perform a global-optimal search over all possible trees to find the highest-scoring tree given the review text. In particular, given an input review text X = {x1*, ..., x*n}, a list of hidden representations Hn 1 = {h1, h2*, ..., h*n} is produced by the encoder, where hiis a hidden representation of the input token xi. The representation of a span (*i, j*) is constructed by:
$$v_{i,j}=h_{j}-h_{i}$$
$$(2)$$
$$({\mathfrak{I}}{\mathfrak{J}})$$
vi,j = hj − hi (2)
Finally, vi,j is fed into an MLP to produce real valued scores s(*i, j,*) for all labels:
$$s(i,j)=W_{2}R e L U(W_{1}v_{i,j}+b_{1})+b_{2}$$
where W1, W2, b1 and b2 are trainable parameters, W2 ∈ R|H|×|L|can be considered as the label embedding matrix, where each column in W2 corresponds to the embedding of a particular constituent label. |H| represents the hidden dimension and |L| is the size of the label set.
## 5.2 Tree Scores And Chart-Based Decoding
The model assigns a score s(T) to each tree T,
which can be decomposed as:
$$s(T)=\sum_{(i,j,l)\in T}s(i,j,l)$$
$$\quad(4)$$
At test time, the model-optimal tree can be found efficiently using a CKY-style inference algorithm.
Given the correct tree T∗, the model is trained to satisfy the margin constraints:
$$s(T*)\geq s(T)+\Delta(T,T^{*})$$
$$({\boldsymbol{5}})$$
$$(6)$$
for all trees T by minimizing the hinge loss:
$$m a x(0,\underset{T\neq T^{*}}{m a x}\left[s(T)+\Delta(T,T^{*})\right]-s(T^{*}))$$
∗)) (6)
Here ∆ is the Hamming loss on labeled spans, and the tree corresponding to the most-violated constraint can be found using a slight modification of the inference algorithm used at test time.
## 6 Experiments
$$(1)$$
In this section, we introduce the dataset used for evaluation and the baseline methods employed for comparison. We then report the experimental results conducted from different perspectives.
## 6.1 Setting
In this study, we use ACOS dataset (Cai et al., 2021)
for our experiments. There are 2,286 sentences in Restaurant domain, and 4,076 sentences in Laptop domain. Following the setting from Cai et al.
| Method | Restaurant | Laptop | | | | |
|------------------|--------------|----------|--------|--------|--------|--------|
| P. | R. | F1. | P. | R. | F1. | |
| BERT-CRF | 0.3717 | 0.3055 | 0.3353 | 0.2966 | 0.2562 | 0.2749 |
| JET | 0.5731 | 0.2754 | 0.3720 | 0.4326 | 0.1435 | 0.2155 |
| TAS-BERT | 0.2611 | 0.4509 | 0.3307 | 0.4654 | 0.1892 | 0.2690 |
| Extract-Classify | 0.3812 | 0.5144 | 0.4378 | 0.4523 | 0.2822 | 0.3475 |
| BARTABSA | 0.5793 | 0.5513 | 0.5650 | 0.4032 | 0.3853 | 0.3940 |
| GAS | 0.5871 | 0.5694 | 0.5781 | 0.3989 | 0.3917 | 0.3953 |
| Paraphrase | 0.5977 | 0.6045 | 0.6011 | 0.3842 | 0.3930 | 0.3885 |
| OTG | 0.6094 | 0.5988 | 0.6040 | 0.4102 | 0.3901 | 0.3998 |
| Ours | 0.7113 | 0.5608 | 0.6271 | 0.4512 | 0.3791 | 0.4120 |
| Domain | Train | Validation | Test |
|------------|---------|--------------|--------|
| Restaurant | 1,529 | 171 | 582 |
| Laptop | 2,929 | 326 | 816 |
(2021), we divide the original dataset into a training set, a validation set, and a testing set. In particular, we remove some sentences (1.5% among all the sentences) which cannot be parsed (e.g., one-tomany with implicit term, nested, overlapped). The distribution of the dataset can be found in Table 3.
We tune the parameters of our models by grid searching on the validation dataset. For fair comparison, we employ T5 (Raffel et al., 2020) and fine-tune its parameters not only for our opinion tree parser's encoder, but also for the backbone of all other generative methods. The model parameters are optimized by Adam (Kingma and Ba, 2015) with a learning rate of 5e-5. The batch size is 128 with a maximum 512 token length. Our experiments are carried out with a Nvidia RTX
3090 GPU. The experimental results are obtained by averaging ten runs with random initialization.
In evaluation, a quadruple is viewed as correct if and only if the four elements, as well as their combination, are exactly the same as those in the gold quadruple. On this basis, we calculate the Precision and Recall, and use F1 score as the final evaluation metric for aspect sentiment quadruple extraction (Cai et al., 2021; Zhang et al., 2021a).
## 6.2 Main Results
We compare the proposed opinion tree parser with several classification-based aspect-based sentiment analysis models, including, *BERT-CRF* (Devlin et al., 2019), JET (Xu et al., 2020), TAS-
BERT (Wan et al., 2020) and *Extract-Classify* (Cai et al., 2021). In addition, generative models are also compared, such as *BARTABSA* (Yan et al., 2021),
GAS (Zhang et al., 2021b), *Paraphrase* (Zhang et al., 2021a) and OTG (Bao et al., 2022).1 As shown in Table 2, we find that generative models give the best performance among the previous systems. It shows that the unified generation architecture helps extract sentiment elements jointly.
Meanwhile, our proposed model outperforms all the previous studies significantly (p < 0.05) in all settings. It indicates that the chart-based opinion parser is more useful for explicitly modeling tree structural constraints, while previous generative models cannot guarantee the structure wellformedness, and their generated linearized string ignores the implicit alignments among sentiment elements. Furthermore, the results also indicate the effectiveness of the context-free opinion grammar, which is used to form the sentiment structure into an opinion tree.
## 6.3 Comparison Of Decoding Efficiency
Table 4 compares different models in terms of decoding speed. For a fair comparison, we re-run all previous models on the same GPU environment.
The results are averaged over 3 runs. In addition, the settings of batch size are the same for all the models.
As we can see, for generative models (Zhang et al., 2021b,a; Bao et al., 2022), they have to generate words one by one, leading to their low speed, and the beam searching during decoding makes the speed much slower. Meanwhile, based 1The implementations of JET, TAS-BERT, ExtractClassify and OTG are based on their official codes, we reimplement the remaining by ourselves.
| Method | Encoder | Time (s) |
|------------|-----------|------------|
| BERT-CRF | 1.96 | |
| JET | 2.83 | |
| BERT | | |
| Ours | 0.81 | |
| GAS | 58.2 | |
| Paraphrase | 61.3 | |
| OTG | 64.9 | |
| Ours | 1.04 | |
| T5 | | |
Table 4: Decoding efficiency of different models.
![6_image_0.png](6_image_0.png)
| Rules | Restaurant | Laptop |
|---------------|--------------|----------|
| Basic | 0.4558 | 0.2727 |
| +OneToMany | 0.5812 | 0.3175 |
| +MonoImplicit | 0.4856 | 0.3632 |
| +BiImplicit | 0.5167 | 0.2984 |
| +CrossMapping | 0.4598 | 0.2786 |
| Ours | 0.6271 | 0.4120 |
on span-based searching, our chart-based opinion tree parser achieves a much higher speed. In addition, the speed of proposed opinion tree parser is faster than the classification-based models (e.g.,
BERT-CRF, JET). It may be due to that these classification-based models extract the sentiment elements one by one as pipeline systems. It also indicates the effectiveness of the chart-based parser and span-based searching, which could parallelly extract the sentiment elements in the sentence.
## 7 Analysis And Discussion
In this section, we give some analysis and discussion to show the effectiveness of proposed opinion tree parser for aspect-based sentiment analysis.
| Method | Restaurant | Laptop |
|----------|--------------|----------|
| BERT-CRF | 0.3353 | 0.2749 |
| Zhang19 | 0.5021 | 0.3537 |
| Nguyen21 | 0.5872 | 0.3673 |
| Yang22 | 0.5936 | 0.3712 |
| Ours | 0.6123 | 0.3748 |
## 7.1 **Effect Of Context-Free Opinion Grammar**
We firstly give the statistic of regular and irregular situations of opinion trees in Figure 5, where Basic is the regular situation which contains full four elements of a quadruple, and others are the irregular situations. From the figure, we find that the distribution of these situations are similar in the two domains: around half of reviews contains regular full quadruple situations, and mono-implicit is the most frequency irregular situations.
We then analyze the effect of different conditional rules which are used to solve irregular situations. As shown in Table 5, we can find that if we only use the basic rules, the performance of opinion tree parser is very low. It may be due to the irregular situations appear in more than half of the review texts. In addition, all the conditional rules are beneficial to parse the opinion tree. Among these rules, one-to-many performs better than others. Furthermore, our proposed model achieves the best performance, which proves the effect of conditional rules.
## 7.2 Results Of Different Tree Parsers
We then analyze the effect of different tree parsers with the proposed context-free opinion tree grammar. In particular, we select three popular parsers which have shown their effect on syntax tree parsing (Zhang et al., 2019; Nguyen et al., 2021)
and name entity recognition (Yang and Tu, 2022).
Among these parsers, Zhang et al. (2019) is transition-based parser, which constructs a complex output structure holistically, through a statetransition process with incremental output-building actions; Nguyen et al. (2021) and Yang and Tu
(2022) are sequence-to-sequence parsers, which employ pointing mechanism for bottom-up parsing and use sequence-to-sequence backbone. For fair comparison, we use RoBERTa-base (Liu et al.,
2019) as the backbone of all the parsers and our proposed chart-based opinion tree parser.
| Schema | Domain | OTG | Ours |
|----------|------------|--------|--------|
| Pair | Restaurant | 0.6906 | 0.7681 |
| Laptop | 0.7201 | 0.7602 | |
| Triple | Restaurant | 0.6582 | 0.7051 |
| Laptop | 0.6562 | 0.6843 | |
| Quad | Restaurant | 0.6040 | 0.6271 |
| Laptop | 0.3998 | 0.4120 | |
As shown in Table 6, all the parsers outperform the BERT-CRF. It shows the effect of the proposed context-free opinion grammar. No matter which parser we use, it achieves better performance than classification-based models. In addition, our chartbased opinion tree parser outperforms all the other parsers with a remarkable advantage. It may be due to that all the other parsers suffer from error propagation and exposure bias problems. Meanwhile, our proposed chart-based parser could infer parallelly, especially effective in parsing long review texts. Such observation has also been proven in neural constituency parsing (Cui et al., 2022),
the chart-based parser reported state-of-the-art performance in that task.
## 7.3 Impact Of Opinion Tree Schemas
We analyze the effect of the proposed model with the opinion tree generation model (OTG) (Bao et al., 2022) in different opinion tree schemas. OTG
employs a generative model to jointly detect all sentiment elements in a linearized tree formation with a sequence-to-sequence architecture. In particular, there are three popular schemas: *Pair* means that we only extract aspect term and opinion term from review text (Qiu et al., 2011; Xu et al., 2020; Li et al., 2022), and *Triple* means that we extract aspect term, opinion term, and polarity from review text (Zhang et al., 2021b; Chen et al., 2021). *Quad* is the quadruple schema that extracts the whole four sentiment elements to form the opinion tree (Cai et al., 2020; Zhang et al., 2021a; Bao et al., 2022).
Note that, we make minor modifications to the context-free opinion grammar, and let it suitable for Pair and Triple schemas.
From Table 7, we can find that our model outperforms OTG in all the schemas. It indicates that our opinion tree parser model is generalized and can be used to handle different schemas in aspect-based sentiment analysis. It also shows that the parsing
![7_image_0.png](7_image_0.png)
strategy is more effect than generative model on capture the structure of sentiment elements. In addition, we also find that the improvement of Pair and Triple are much higher than Quad, it may be due to that the simple schema is easier to normalize and recover.
We then analyze the completeness of the tree structure generated/parsed from OTG and the proposed model. The completeness is calculated through the valid rate of a tree structure. As shown in Figure 6, the completeness of the proposed model is higher than OTG in all the schemas.
It shows that our proposed model can explicitly model tree structural constraints, and guarantee tree structure well-formedness. In addition, the high completeness also guarantees the quality of recovery from tree structure to sentiment elements.
Furthermore, case studies in Appendix B are given to make more intuitive comparisons between OTG and proposed opinion tree parser.
## 8 Conclusion
In this study, we propose a novel opinion tree parsing model, aiming to parse all the sentiment elements into an opinion tree, which can reveal a more comprehensive and complete aspect-level sentiment structure. In particular, we first introduce a novel context-free opinion grammar to normalize the opinion structure. We then employ a neural chart-based opinion tree parser to fully explore the correlations among sentiment elements and parse them in the opinion tree form. Detailed evaluation shows that our model significantly advances the state-of-the-art performance on several benchmarks. The empirical studies also show that the proposed opinion tree parser with context-free opinion grammar is more effective in capturing the opinion tree structure than generative models with a remarkable advantage in computation cost.
## 9 Limitations
The limitations of our work can be stated from two perspectives. First, the proposed context-free opinion grammar is designed manually. It can be the future work to explore how to automatic generate the grammar. Secondly, we focus on opinion tree parsing in one major language. The performance of other languages remains unknown.
## Acknowledgments
We would like to thank Prof. Yue Zhang for his helpful advice and discussion during this work.
Also, we would like to thank the anonymous reviewers for their excellent feedback. This work is supported by the China National Key R&D
Program (No. 2020AAA0108604), and the National Natural Science Foundation of China (No.
61976180, No. 62006093).
## References
Xiaoyi Bao, Zhongqing Wang, Xiaotong Jiang, Rong Xiao, and Shoushan Li. 2022. Aspect-based sentiment analysis with opinion tree generation. In Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence, IJCAI 2022, Vienna, Austria, 23-29 July 2022, pages 4044–4050. ijcai.org.
Jiahao Bu, Lei Ren, Shuang Zheng, Yang Yang, Jingang Wang, Fuzheng Zhang, and Wei Wu. 2021. ASAP: A
Chinese review dataset towards aspect category sentiment analysis and rating prediction. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics:
Human Language Technologies, pages 2069–2079, Online. Association for Computational Linguistics.
Hongjie Cai, Yaofeng Tu, Xiangsheng Zhou, Jianfei Yu, and Rui Xia. 2020. Aspect-category based sentiment analysis with hierarchical graph convolutional network. In *Proceedings of the 28th International* Conference on Computational Linguistics, pages 833–
843, Barcelona, Spain (Online). International Committee on Computational Linguistics.
Hongjie Cai, Rui Xia, and Jianfei Yu. 2021. Aspectcategory-opinion-sentiment quadruple extraction with implicit aspects and opinions. In *Proceedings* of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing
(Volume 1: Long Papers), pages 340–350, Online.
Association for Computational Linguistics.
Chenhua Chen, Zhiyang Teng, Zhongqing Wang, and Yue Zhang. 2022a. Discrete opinion tree induction for aspect-based sentiment analysis. In *Proceedings* of the 60th Annual Meeting of the Association for
Computational Linguistics (Volume 1: Long Papers),
pages 2051–2064, Dublin, Ireland. Association for Computational Linguistics.
Hao Chen, Zepeng Zhai, Fangxiang Feng, Ruifan Li, and Xiaojie Wang. 2022b. Enhanced multi-channel graph convolutional network for aspect sentiment triplet extraction. In *Proceedings of the 60th Annual* Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2974–2985, Dublin, Ireland. Association for Computational Linguistics.
Shaowei Chen, Yu Wang, Jie Liu, and Yuelin Wang.
2021. Bidirectional machine reading comprehension for aspect sentiment triplet extraction. In *Proceedings of the AAAI Conference on Artificial Intelligence*,
volume 35, pages 12666–12674.
Leyang Cui, Sen Yang, and Yue Zhang. 2022. Investigating non-local features for neural constituency parsing. In *Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics*
(Volume 1: Long Papers), ACL 2022, Dublin, Ireland, May 22-27, 2022, pages 2065–2075. Association for Computational Linguistics.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics.
Mengting Hu, Shiwan Zhao, Li Zhang, Keke Cai, Zhong Su, Renhong Cheng, and Xiaowei Shen. 2019. CAN:
Constrained attention networks for multi-aspect sentiment analysis. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP),
pages 4601–4610, Hong Kong, China. Association for Computational Linguistics.
Diederik P. Kingma and Jimmy Ba. 2015. Adam: A
method for stochastic optimization. In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings.
Nikita Kitaev, Steven Cao, and Dan Klein. 2019. Multilingual constituency parsing with self-attention and pre-training. In *Proceedings of the 57th Conference* of the Association for Computational Linguistics, ACL 2019, Florence, Italy, July 28- August 2, 2019, Volume 1: Long Papers, pages 3499–3505. Association for Computational Linguistics.
Nikita Kitaev and Dan Klein. 2018. Constituency parsing with a self-attentive encoder. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, ACL 2018, Melbourne,
Australia, July 15-20, 2018, Volume 1: Long Papers, pages 2676–2686. Association for Computational Linguistics.
Junjie Li, Jianfei Yu, and Rui Xia. 2022. Generative cross-domain data augmentation for aspect and opinion co-extraction. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4219–4229, Seattle, United States. Association for Computational Linguistics.
Jian Liu, Zhiyang Teng, Leyang Cui, Hanmeng Liu, and Yue Zhang. 2021. Solving aspect category sentiment analysis as a text generation task. In *Proceedings of* the 2021 Conference on Empirical Methods in Natural Language Processing, pages 4406–4416, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized BERT pretraining approach. *CoRR*, abs/1907.11692.
Rajdeep Mukherjee, Tapas Nayak, Yash Butala, Sourangshu Bhattacharya, and Pawan Goyal. 2021.
PASTE: A tagging-free decoding framework using pointer networks for aspect sentiment triplet extraction. In *Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing*,
pages 9279–9291, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Thanh-Tung Nguyen, Xuan-Phi Nguyen, Shafiq Joty, and Xiaoli Li. 2021. A conditional splitting framework for efficient constituency parsing. In *Proceedings of the 59th Annual Meeting of the Association for* Computational Linguistics and the 11th International Joint Conference on Natural Language Processing
(Volume 1: Long Papers), pages 5795–5807, Online.
Association for Computational Linguistics.
Haiyun Peng, Lu Xu, Lidong Bing, Fei Huang, Wei Lu, and Luo Si. 2020. Knowing what, how and why: A near complete solution for aspect-based sentiment analysis. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pages 8600–8607.
Guang Qiu, Bing Liu, Jiajun Bu, and Chun Chen.
2011. Opinion Word Expansion and Target Extraction through Double Propagation. Computational Linguistics, 37(1):9–27.
Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. *Journal of Machine Learning Research*, 21(140):1–67.
Ronald Seoh, Ian Birle, Mrinal Tak, Haw-Shiuan Chang, Brian Pinette, and Alfred Hough. 2021. Open aspect target sentiment classification with natural language prompts. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 6311–6322, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Duyu Tang, Bing Qin, Xiaocheng Feng, and Ting Liu.
2016. Effective lstms for target-dependent sentiment classification. In *COLING 2016*, pages 3298–3307.
Hai Wan, Yufei Yang, Jianfeng Du, Yanan Liu, Kunxun Qi, and Jeff Z. Pan. 2020. Target-aspect-sentiment joint detection for aspect-based sentiment analysis.
In *AAAI 2020*, pages 9122–9129.
Qianlong Wang, Zhiyuan Wen, Qin Zhao, Min Yang, and Ruifeng Xu. 2021. Progressive self-training with discriminator for aspect term extraction. In *Proceedings of the 2021 Conference on Empirical Methods* in Natural Language Processing, pages 257–268, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Lu Xu, Hao Li, Wei Lu, and Lidong Bing. 2020.
Position-aware tagging for aspect sentiment triplet extraction. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing
(EMNLP), pages 2339–2349, Online. Association for Computational Linguistics.
Hang Yan, Junqi Dai, Tuo Ji, Xipeng Qiu, and Zheng Zhang. 2021. A unified generative framework for aspect-based sentiment analysis. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing
(Volume 1: Long Papers), pages 2416–2429, Online.
Association for Computational Linguistics.
Songlin Yang and Kewei Tu. 2022. Bottom-up constituency parsing and nested named entity recognition with pointer networks. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2403–2416, Dublin, Ireland. Association for Computational Linguistics.
Junchi Zhang, Yanxia Qin, Yue Zhang, Mengchi Liu, and Donghong Ji. 2019. Extracting entities and events as a single task using a transition-based neural model. In Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence, IJCAI-19, pages 5422–5428. International Joint Conferences on Artificial Intelligence Organization.
Mi Zhang and Tieyun Qian. 2020. Convolution over hierarchical syntactic and lexical graphs for aspect level sentiment analysis. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 3540–3549, Online. Association for Computational Linguistics.
Wenxuan Zhang, Yang Deng, Xin Li, Yifei Yuan, Lidong Bing, and Wai Lam. 2021a. Aspect sentiment quad prediction as paraphrase generation. In *Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing*, pages 9209–
9219, Online and Punta Cana, Dominican Republic.
Association for Computational Linguistics.
Wenxuan Zhang, Xin Li, Yang Deng, Lidong Bing, and Wai Lam. 2021b. Towards generative aspect-based sentiment analysis. In *Proceedings of the 59th Annual Meeting of the Association for Computational* Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 2:
Short Papers), pages 504–510, Online. Association for Computational Linguistics.
Zheng Zhang, Zili Zhou, and Yanna Wang. 2022.
SSEGCN: Syntactic and semantic enhanced graph convolutional network for aspect-based sentiment analysis. In *Proceedings of the 2022 Conference of* the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4916–4925, Seattle, United States.
Association for Computational Linguistics.
## A Tree Pruning
As the original opinion trees are too complicated for parsing, we adopt a pruning method to reduce the duplication in trees. To be more specific, we introduce our method with a pruning example of review "So *happy* to have a *great* bar", which can be described as following steps, and the original tree is demonstrated in Figure 7(a).
- The unary chain of category and polarity are integrated into the aspect node and opinion node respectively. The processed result is shown in Figure 7(b).
- We delete the chains with ϵ leaf node, the processed result is shown in Figure 7(c).
- If the children nodes contain nodes that have exactly the same node type with the parent node, we will delete the parent node and connect children with the ancestor node directly, the processed result is shown in Figure 7(d).
Therefore, Figure 7(d) gives the final formation of our opinion tree for parsing.
## B Case Study
We launch a set of case studies to make a more intuitive comparison between our model and OTG (Bao et al., 2022). We select reviews that are predicted into invalid formation by OTG to demonstrate our models' superiority in guaranteeing structure wellformedness. As demonstrated in Table 8, these cases can be divided into following categories:
## Invalid Term
The first three examples are about invalid terms which generated from OTG.
In the first example, OTG gives a very typical wrong prediction, it rewrites "*waiting*" to "*wait*",
which could change the original meanings and does not meet the requirement of extracting raw text from the review, while our method operating over raw spans, easily gives a right answer.
In the second example, OTG generates "*atmosphere*" as the aspect term based on its understanding of "*feeling*" since they have similar semantic information. However, '*atmosphere*" does not exist in the review. On the other hand, our model also shots the right target but selects it as the final prediction under the constraints of chart decoder.
In the third example, OTG generates "not that slow" from the review, which are not continuous in the original text: the words "*not that*" appear in the beginning but "*slow*" appears in the end. In this situation, our span-based method can easily extract
"*slow*" as the opinion term since it can only operate over raw spans.
## Invalid Structure
The invalid structure means that the output sequence of OTG can not be recovered into a valid tree structure, this may due to various reasons. One of the common reasons is unmatched brackets. The fourth example shows an OTG's output sequence that can not be decoded into a valid tree since the sequence that starts with "opinion" can not be recognized as a subtree. In contrast, with the CYK-style algorithm, our method build trees and subtrees over spans, ensuring the legality of trees or subtrees.
## Invalid Category
OTG also would classifies aspect term into a non-existing category. In the fifth example, the aspect term "*msi headset*" is classified into a non-existing category "HEADSET GENERAL"
by OTG, which usually happens when it comes to the generative method with LAPTOP dataset since it has more than 100 categories. This would not be a difficult problem for our model's classifier, it will set specific target classes before starting the training process.
![11_image_0.png](11_image_0.png)
| Review text | Reason | OTG | Ours |
|-----------------------------------------------------------------|-----------------------------------------------------------------------------------------|-----------------------------------------------------|----------------------------------------------------|
| SERVICE GENERAL✓ wait staff ✗ POSITIVE ✓ perfect ✓ | SERVICE GENERAL ✓ waiting staff ✓ POSITIVE ✓ perfect ✓ | | |
| The waiting staff | Invalid | | |
| has been perfect | Term | AMBIENCE GENERAL✓ atmosphere✗ POSITIVE ✓ intimate ✓ | AMBIENCE GENERAL ✓ feeling ✓ POSITIVE ✓ intimate ✓ |
| I also really enjoy the intimate feeling of a small restaurant. | Invalid Term | OS PERFORMANCE✗ boots up✓ neural ✗ not that slow ✗ | LAPTOP PERFORMANCE ✓ boots up ✓ NEGATIVE ✓ slow ✓ |
| not that this machine | Invalid | | |
| boots up slow. | Term | FOOD QUALITY✓ delicious ✓ POSITIVE ✓ pizza ' s ✓ | |
| we're can't say enough about their delicious gourmet pizza ' s! | ( root ( quad ( aspect ( food quality, pizza ) ) ), ( opinion ( positive, null ) ) ) )✗ | | |
| Invalid structure | HEADSET GENERAL✗ msi headset✓ POSITIVE ✓ nice ✓ | DEVICE GENERAL ✓ msi headset✓ POSITIVE ✓ nice ✓ | |
| writing this review so early to receive that nice msi headset. | Invalid category | Table 8: Case study | |
From the cases shown in Table 8, we can find that our method shows significant superiority in modeling tree structural constraints and guaranteeing tree structure well-formedness, along with the quality of recovery from tree structure to sentiment elements, while OTG has to employ complex postprocessing method to strengthen its shortage.
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
✓ A1. Did you describe the limitations of your work?
Section 9.
✓ A2. Did you discuss any potential risks of your work?
Section 9.
✓ A3. Do the abstract and introduction summarize the paper's main claims?
Abstract and Section 1.
✗ A4. Have you used AI writing assistants when working on this paper?
N/A.
## B ✓ **Did You Use Or Create Scientific Artifacts?** Section 3 And 4.
✓ B1. Did you cite the creators of artifacts you used?
Section 6.
✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
Section 6.
✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
Section 6.
✓ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
Section 6.
✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
Section 6.
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
Section 6.
## C ✓ **Did You Run Computational Experiments?** Section 6.
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
Section 6.
The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance.
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Section 6.
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
Section 6.
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
Section 6.
D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
No response.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? No response.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
No response.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
No response.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
No response. |
arora-etal-2023-comix | {C}o{M}ix: Guide Transformers to Code-Mix using {POS} structure and Phonetics | https://aclanthology.org/2023.findings-acl.506 | Code-mixing is ubiquitous in multilingual societies, which makes it vital to build models for code-mixed data to power human language interfaces. Existing multilingual transformer models trained on pure corpora lack the ability to intermix words of one language into the structure of another. These models are also not robust to orthographic variations. We propose CoMixCoMix is not a trademark and only used to refer to our models for code-mixed data for presentational brevity., a pretraining approach to improve representation of code-mixed data in transformer models by incorporating phonetic signals, a modified attention mechanism, and weak supervision guided generation by parts-of-speech constraints. We show that CoMix improves performance across four code-mixed tasks: machine translation, sequence classification, named entity recognition (NER), and abstractive summarization. It also achieves the new SOTA performance for English-Hinglish translation and NER on LINCE Leaderboard and provides better generalization on out-of-domain translation. Motivated by variations in human annotations, we also propose a new family of metrics based on phonetics and demonstrate that the phonetic variant of BLEU correlates better with human judgement than BLEU on code-mixed text. | # Comix: Guide Transformers To Code-Mix Using Pos Structure And Phonetics
Gaurav Arora Amazon [email protected] Srujana Merugu Amazon [email protected] Vivek Sembium Amazon [email protected]
## Abstract
Code-mixing is ubiquitous in multilingual societies, which makes it vital to build models for code-mixed data to power human language interfaces. Existing multilingual transformer models trained on pure corpora lack the ability to intermix words of one language into the structure of another. These models are also not robust to orthographic variations. We propose CoMix1, a pretraining approach to improve representation of code-mixed data in transformer models by incorporating phonetic signals, a modified attention mechanism, and weak supervision guided generation by partsof-speech constraints. We show that CoMix improves performance across four code-mixed tasks: machine translation, sequence classification, named entity recognition (NER), and abstractive summarization. It also achieves new SOTA performance for English-Hinglish translation and NER on LINCE Leaderboard and provides better generalization on out-ofdomain translation. Motivated by variations in human annotations, we also propose a new family of metrics based on phonetics and demonstrate that the phonetic variant of BLEU correlates better with human judgement than BLEU
on code-mixed text.
## 1 Introduction
Code-mixing, i.e., embedding linguistic units of one language (*embedded language* LE) into a sentence grammatically structured as per another language (*matrix language* LM), is common in multilingual communities. Growing mobile penetration coupled with the increased adoption of informal conversational interfaces is leading to further rise in such communication. Currently, over 20% of user generated content from South Asia and parts of Europe is code-mixed (Choudhury et al., 2019).
Hinglish (code-mixed Hindi-English) has nearly 1CoMix is not a trademark and only used to refer to our models for code-mixed data for presentational brevity.
350 million speakers (GTS, 2019) making it one of the most widely spoken languages. Recent literature suggests that multilingual users associate codemixing with cultural affinity and prefer chatbots that can code-mix (Bawa et al., 2020). Code-mixed modeling is, thus, a foundational prerequisite for linguistic systems targeted towards such users.
Transformer models such as BART (Lewis et al.,
2020) and BERT (Devlin et al., 2018) have been successful across various NLP tasks. These models can readily capture code-mixing semantics if a large corpus was available for training. Unfortunately, that is not true for most code-mixed languages. Existing approaches rely on learning from a parallel corpus of embedded and matrix languages (e.g., English and Hindi for Hinglish).
Recent work (Chen et al., 2022), however, shows that multilingual models such as mBERT trained on monolingual sources fail to effectively interleave words from topologically diverse languages.
Adapting transformers to code-mixed data requires addressing the following challenges: 1.
Divergent grammatical structure. For codemixed languages such as Hinglish, where LE and LM have different Parts-of-Speech (POS) patterns, models trained on monolingual corpora do not yield similar representations for equivalent words across languages, which is needed to facilitate interleaving of LE and LM words. Linguistic theories propose certain syntactic constraints for code-mixed generation (Poplack, 1980), but these are not usually incorporated into the modeling. **2. Code-mixing**
diversity. Code-mixed languages also exhibit a wide diversity in the degree of code-mixing (e.g.,
ratio of LE to LM words). Fig 1 shows multiple Hinglish constructions for a given sentence in English. Accounting for this variation in code-mixing is necessary for high fidelity modeling. **3. Orthographic variations.** The informal nature of code-mixed interactions and lack of standardized transliteration rules leads to users employing adhoc phonological rules while writing code-mixed content. Fig 1 shows Hinglish sentences with similar sounding words and their variations ("kis", "kys").
Contributions. In this paper, we adapt transformer models for code-mixed data by addressing the above challenges. To ensure applicability to multiple downstream tasks, we focus on pretraining.
1. We propose CoMix, a set of generic pretraining methods to improve code-mixed data representations that can be applied to any transformer model assuming the availability of POS-tagger and phonetic transcription tools. These include: (a) Domain Knowledge-based Guided Attention (DKGA)
mechanism that facilitates intermixing of linguistic units of LE into the structure of LM through a modified attention function, (b) Weakly Supervised Generation (WSG) that generates code-mixed data for training in a controllable fashion driven by linguistic constraints, and (c) inclusion of phonetic signals to align embeddings of similar sounding with different orthographic representation.
2. We instantiate CoMix pretraining for BART
and BERT and demonstrate efficacy on multiple downstream NLP tasks, namely Machine Translation, NER, Sequence Classification, and Abstractive Summarization with relative improvements of up to 22%. CoMixBART and CoMixBERT achieve new state-of-the-art (SOTA) results for EnglishHinglish translation and Hinglish NER tasks on LINCE Leaderboard (Aguilar et al., 2020), beating previous best mT5 (Jawahar et al., 2021) and XLMR (Winata et al., 2021) models, despite having less than 0.5x and 0.1x model size respectively.
3. We evaluate out-of-domain code-mixed translation performance on two test sets, one created in-house and other one adapted from GupShup corpus (Mehnaz et al., 2021), and show that CoMix generalizes better than other models. To the best of our knowledge, this is the first such evaluation for English-Hinglish translation. We hope our benchmark will assist the community to improve out-ofdomain generalization of code-mixed translation, a critical need for low-resource regimes.
4. To address the limitations of existing metrics in handling orthographic variations in code-mixed data, we propose a new family of natural language generation (NLG) metrics based on phonetic adaptation of existing metrics. We observe that PhoBLEU, the phonetic variant BLEU, is better aligned to human judgement (+0.10 - 0.15 on Pearson correlation) than BLEU on Hinglish.
![1_image_0.png](1_image_0.png)
## 2 Related Work
Multilingual and Code-Mixed NLP. Recent advances in large multilingual pre-trained models such as mBERT (Devlin et al., 2018) and mBART
(Liu et al., 2020) have led to significant gains on many multilingual NLP tasks. However, evaluation of these models on code-mixed content for machine translation (Chen et al., 2022), sequence classification (Patwa et al., 2020), summarization (Mehnaz et al., 2021) and other tasks (Aguilar et al., 2020)
points to their inability to intermix words from two languages since these are pretrained on monolingual text without any language alternation. Our CoMix approach encourages the model to learn representations that allows appropriate embedding of words from one language into structure of another via domain knowledge guided attention and through weakly supervised code-mixed generation. Prior work (Sanad Zaki Rizvi et al., 2021) focuses on generating synthetic code-mixed data using constraints from linguistic theories followed by learning. We perform joint generation and learning using pretrained models that has dual benefit of data generation and improving model representations, and has been shown to be effective for anomaly detection in medical images (Li et al., 2019).
Incorporating Phonetics in Language Modeling.
Combined modeling of phonemes and text has been a topic of recent interest and has contributed in improving robustness to ASR errors (Sundararaman et al., 2021). In code-mixed domain, Soto et al.
(Soto and Hirschberg, 2019) engineered spelling and pronunciation features by calculating distance between pairs of cognate words to improve perplexity of English-Spanish models. We also incorporate phonetic signals to learn robust representations. Sentence Evaluation Metrics. Automated sentence evaluation metrics such as BLEU (Papineni et al., 2002), ROUGE (Lin, 2004) for comparison of unstructured sentences led to rapid innovation in NLP by facilitating ready evaluation of NLP systems against ground truth without additional human annotations. However, these metrics are unreliable for code-mixed content as they disregard widely prevalent orthographic variations. We propose a new family of metrics to address this gap.
## 3 Comix Approach
Given a corpus of sentence pairs from LM and LE,
our goal is to adapt transformer models such as BART and BERT to overcome the key challenges in modeling code-mixed data. To ensure applicability to multiple downstream tasks, we focus on the pretraining phase. We assume access to POS
tagging and phonetic transcription tools 2 which is true for many languages (see Section 7). Below we summarize our approach for each of the challenges.
P1 - Divergence in POS structure of LE and LM:
To enable transformer models to extrapolate from LE and LM to code-mixed data, we rely on linguistic constraints. We observe that coarse groups of POS labels of concepts are preserved across translation (see Section 7) and that code-mixed sequences often retain the POS structure of LM sequence. Assuming access to POS labels the above constraints provide token-level correspondence for parallel training sentences, which can be used to augment the transformer attention mechanism and lead to representations that facilitate accurate interleaving of LE and LM words. [Section 3.1]
P2 - Variations in the level of code-mixing: To accurately model variations in code-mixed data such as the mixing propensity, we propose a weakly supervised approach that combines POS constraints with a control on the code-mixing probability to generate code-mixed sequences from parallel monolingual corpora3for training. [Section 3.2]
P3 - Orthographic variations: To align similar sounding words with orthographic variations, we incorporate phonetic signal as an additional input channel. We modify the transformer architecture 2We use Stanza for POS-tagging and Refined Soundex implementation Pyphonetics for phonetic transcription.
3By parallel monolingual corpora we mean parallel corpora wherein each one of the parallel sentences are in pure/monolingual form of their corresponding language.
![2_image_0.png](2_image_0.png)
to include two multi-head self-attention layers, one each for text and phoneme channels. [Section 3.3]
## 3.1 Domain Knowledge Guided Attention
Attention (Vaswani et al., 2017) is an essential mechanism of transformer architecture that converts an input sequence into a latent encoding using representational vectors formed from the input, i.e.,
queries, keys and values to determine the importance of each portion of input while decoding. Let X and Z denote the sequence of input tokens and the associated representations. Further, let Q, K,
V denote the sequences of query, key and value vectors derived from appropriate projections of Z.
In this case, attention is typically defined in terms of the scaled dot-product of Q and K.
To incorporate domain knowledge, we propose augmenting attention with an additional independent term f DKGA(X) defined on the input:
$$Attention(Q,K,V)=softmax(\frac{QK^T+f^\text{DKGA}(X)}{\sqrt{d_k}})V,\tag{1}$$ where d_k is the dimension of a sample, and k is the number of
where dk is the dimension of query and key vectors.
While the notion of DKGA is general, to aid with code-mixing, we focus on linguistic constraints.
We construct three groups of POS labels (see A.1)
that are preserved during translation (see 7). Let X denote the concatenation of the parallel monolingual sentences, i.e., X = XM∥XE, where XM
and XE are sentences in LM and LE respectively.
Let POSGP(x) denote the group of the POS label of a token x. The linguistic constraints require that aligned token pairs from XM and XE belong to the same POS label group. Hence, for matrix tokens, we restrict attention to compatible embedded
words by choosing f
$$\operatorname{GA}(X)=[f_{i j}^{\mathrm{DKGA}}],\,\mathrm{w}$$
ij ], where
![3_image_0.png](3_image_0.png)
$$f_{ij}^{\text{DKGA}}=\begin{cases}0&\text{if POS}^{\text{GP}}(x_{i})=\text{POS}^{\text{GP}}(x_{j})\\ &\text{and}x_{i}\in X_{M},x_{j}\in X_{E}\\ &\text{or}i=j\text{for}x_{i}\in X_{E}\\ -\infty&\text{otherwise.}\end{cases}\tag{2}$$
Note that the above asymmetric choice is motivated by the fact that code-mixed sentences retain the POS structure of LM. Fig 2 shows how tokens from XE are selected using the above strategy which coupled with self-attention ensures learning representations that facilitate better intermixing of LE tokens into LM structure. See A.4.9 for example. Instead of a hard constraint on POS-label preservation, the DKGA function can also be modified to incorporate soft transition probabilities of POS labels during an LM to LE translation, which could be learned from parallel sentence pairs with token level alignment. We can also extend DKGA
to include other sources for attention guidance, e.g.,
domain ontologies, word-alignment and also for cross-attention.
Pretraining with DKGA. We modify all selfattention blocks in the encoder with DKGA and pretrain CoMixBERT with masked language modeling (MLM) objective (Devlin et al., 2018) and CoMixBART with denoising objective (text infilling with span=1). We mask the tokens of XM for which we want DKGA to guide attention to embedded words (e.g., in Fig 2, "kapde" will be masked).
## 3.2 Weakly Supervised Generation (Wsg)
Lack of large code-mixed corpora poses a big challenge for code-mixed modeling. Hence, to facilitate direct training and allow control over desired properties such as the level of code mixing, we propose a weakly supervised mechanism for code-mixed generation using any transformer-based encoderdecoder model. The key idea is to nudge a pretrained multilingual model to code-mix by restricting the search space in the autoregressive step to a small suitable set of LE tokens exploiting the the fact that tokens with similar meaning and POS
labels in LE are likely to replace the LM token.
Fig 3 shows the generative mechanism (Equation 3 shows the corresponding equations). At each auto-regressive step, we first determine the choice to code-mix, denoted by Mi, sampled based on the mixing probability pMix of the POS label of the token xi ∈ XM and an overall code-mixing level τ . The vocabulary search space denoted by Viis chosen as the POS-compatible words (same POS group as that of xi) from XE for code-mixing, and the entire vocabulary V
all otherwise. The next token is generated via greedy search with teacher forcing. In case of code-mixing, the target yiis set to the predicted value yˆi and xi otherwise. We train the model with negative log-likelihood loss using XM as the input, Y = [yi]
N
i=1 as the target, Yˆ = [ ˆyi]
N
i=1 as the prediction. Due to the self-dependency in WSG, the efficacy depends on whether the underlying model can correctly order the tokens in Vi, which is a reasonable expectation from SOTA pretrained multilingual models.
$$\hat{y_{i}}=\operatorname{argmax}_{y\in V_{i}}P(y|y_{1},y_{2},...,y_{i-1},X_{M}),$$ $$y_{i}=\begin{cases}\hat{y}_{i}&\text{if}M_{i}=1\\ x_{i}&\text{if}M_{i}=0\end{cases},$$ $$V_{i}=\begin{cases}\{x_{j}|x_{j}\in X_{E},\\ \operatorname{POS}^{\operatorname{GP}}(x_{j})=\operatorname{POS}^{\operatorname{GP}}(x_{i})\}&\text{if}M_{i}=1\;,\\ V^{\operatorname{all}}&\text{if}M_{i}=0\end{cases}$$ $$M_{i}\sim\operatorname{Bernoulli}(\tau p_{\operatorname{POS}(x_{i})}^{\operatorname{Mix}}).\tag{3}$$
In our experiments, we set τ = 1 and p*M ix* to 1 for POS groups {NOUN, P ROP N, ADJ,
ADV, V ERB} where code-mixing is frequent and 0 for the rest but can learn it from a small codemixed corpus in future. The proposed WSG mechanism can also be applied to encoder-only models such as BERT by considering a similar restriction of the vocabulary set Vi at the last layer.
## 3.3 Mixing Phonetic Signal
Given a text sequence X, let XPh denote the corresponding phonetic sequence. To incorporate both the signals, we replace the multi-head self attention layer in the transformer encoder layer with two multi-head self attention layers, one each for
![4_image_1.png](4_image_1.png)
text and *phoneme* channel. Text sequence shares feed-forward layers with the phonetic sequence as shown in Fig 4 since we want phonetic representations to be in the same space as text representations
. To keep the number of parameters in check, we add phonetics part of the encoder to only alternate encoder layers. Our decoder uses the concatenated sequence of contextual embeddings from X and XPh as keys and values for cross attention.
Pretraining with Phonetics. We pretrain CoMixBERT for phonetics with MLM objective
(as in BERT) and CoMixBART with denoising objective (text infilling with span length 1).
## 4 **Phonetic Sentence Comparison Metrics**
Lack of standardized transliteration rules is a key challenge not only for modeling but also for evaluating code-mixed or multi-lingual NLG tasks, e.g.,
English to Hinglish translation Human annotators employ orthographic variations and are also inconsistent in the use of punctuation and upper-lower casing as shown in Fig 1. Most NLG evaluation metrics such as BLEU, do not account for these variations, which leads to pessimistic and inaccurate estimates of the performance of NLG systems.
To address this gap, we propose a new family of metrics based on the phonetic representation. Let s(·, ·) be any metric such as BLEU and ROUGE
(Banerjee and Lavie, 2005) that facilitates comparison of a word sequence against a reference one.
Given a pair of sentences (*X, Y* ), we define the phonetic metric as P ho-s(*X, Y* ) = s(XPh, Y Ph),
where XPh, Y Ph are the phonetic sequences. In this paper, we limit our focus to PhoBLEU and
![4_image_0.png](4_image_0.png)
## 5 Experiments 5.1 **Downstream Tasks, Baselines And Metrics**
Table 1 lists the four downstream tasks, baselines, SOTA models and metrics used in the evaluation4.
For translation on HooD dataset (Sec 5.2), we also include Echo baseline which just passes the input sequence as output and helps measure the contribution of input-output overlap to the final performance.
Previous studies (Dabre et al., 2021) (Kakwani et al., 2020) indicate that IndicBART (Dabre et al.,
2021) and IndicBERT (Kakwani et al., 2020) are competitive relative to mBART and mBERT respectively on Indic languages. Further, since we initialize our models CoMixBART and CoMixBERT
with weights from IndicBART and IndicBERT, we consider these as strong baselines for our evaluation of generative and classification tasks respectively.
## 5.2 Datasets For Downstream Tasks
We evaluate on LINCE Eng-Hinglish dataset for translation (Chen et al., 2022), SemEval-2020 Hinglish Sentimix dataset for sequence classification (Patwa et al., 2020), GupShup Hinglish chats to summaries (GupShup H2H) dataset (Mehnaz et al., 2021) for summarization and LINCE
Hinglish (Singh et al., 2018) dataset for the NER
task. Table 9 in Appendix A.3.2 lists data statistics.
Hinglish Out-of-Domain Translation Dataset
(HooD). We introduce two out-of-domain translation test-sets for Hinglish. Of these, the first one from shopping domain was prepared by inhouse human experts who translated English sentences generated by humans and models like GPT3, following the guidelines in Appendix A.3.1. The second test set was prepared from GupShup corpus (Mehnaz et al., 2021) from parallel EnglishHinglish summaries of conversations created by linguists (Gliwa et al., 2019). These datasets can help 4Formulations of all tasks are in A.2.
| # datapoints | Avg. # of all tokens | Avg. # of tokens in Target | Code-Mixing | | | |
|------------------|------------------------|------------------------------|---------------|-------------------------------------|-------|-------|
| Source | Target | English | Hindi | Index (CMI) (Das and Gambäck, 2014) | | |
| HooD Shopping | 1050 | 16.63 | 17.76 | 3.35 | 14.41 | 19.2 |
| HooD Open Domain | 6831 | 20.29 | 22.46 | 6.16 | 16.29 | 28.96 |
Table 2: Data statistics for Hinglish Out-of-Domain (HooD) dataset.
assess the zero-shot transfer capabilities of models from movies to shopping and open-domain for models trained on LINCE English-Hinglish dataset.
Table 2 shows statistics of the HooD dataset.
## 5.3 Experimental Setup 5.3.1 Pretraining
Initialization. Training large transformer models from scratch requires thousands of GPU hours, which can be prohibitive. To ensure broader accessibility and best utilise existing models, we initialize CoMixBART decoder and encoder's nonphonetic weights (NPW) from IndicBART and CoMixBERT's NPW from IndicBERT. These are pretrained using Samanantar English-Hindi parallel corpus (Ramesh et al., 2021).
CoMixBART. We pretrain CoMixBART with DKGA on 1M sentences from Samanantar for 36k steps (∼ 2 epochs) on three 24GB GPUs with batch size of 2816 tokens, linear learning rate warmup and decay with 16k warmup steps. We use Adam optimizer with max learning rate of 1e-3, label smoothing of 0.1, dropout of 0.1 and token masking probability of 0.4. For WSG, we pretrain the DKGA model for additional 2k steps with the same setup except label smoothening and masking probability of 0. Learning curve for pretraining with DKGA and WSG is shown in Appendix A.4.2. Since pretraining CoMixBART for phonetics from scratch is computationally prohibitive because of its size, we devise a way to obtain reasonable weights for downstream training. We initialize embeddings of phonetic tokens with the mean of embeddings of the text tokens that map to it. We also initialize phonetic self-attention layer parameters with the same weights as that of corresponding text channel's self-attention layer.
CoMixBERT. We pretrain CoMixBERT with DKGA, WSG and Phonetics on 100k sentences from Samanantar on six 32GB GPUs with batch size of 20 per GPU, starting with a learning rate of 5e-5, linear learning rate warmup and AdamW optimizer. We pretrain DKGA and WSG for 1k steps and Phonetics for 3k steps. We are able to pretrain CoMixBERT with Phonetics because it has 7x less
## Parameters Than Comixbart. 5.3.2 Downstream Fine-Tuning
CoMixBART. The pretrained model is fine-tuned for downstream tasks in two stages. First, we attach a custom task-specific head to the decoder and train its weights, CoMixBART's NPW (encoder and decoder) for 5k steps on three 24GB GPUs with batch size of 2048 tokens, linear learning rate warmup, and decay with 2k warmup steps and max.
learning rate of 5e-4 using Adam optimizer. In the second stage, phonetic weights of CoMixTransformer encoder are initialized as per section 5.3.1.
Then, in downstream training of complete model, the weights from previous step are optimised with smaller learning rate than CoMixBART encoder's phonetic weights for additional 5k steps. We use beam search (size 4) for decoding. We train baseline IndicBART model for all tasks using YANMTT (Dabre, 2022), as prescribed in IndicBART
repository (IndicBART, 2022) with the same setup as CoMix models. In all cases, we pick the model with best validation score after 5k training steps.
CoMixBERT. We attach a custom task-specific head to the model and train using standard finetuning procedure. For NER, we also have CRF
layer attached after all models including baseline.
Since its possible to combine encoder only models without sequential training, we report ensemble results obtained by averaging logits for DKGA+WSG
and DKGA+WSG+Phonetic variants as they were better than sequential training. We use grid search to find the right set of hyperparameters for all models including baseline and pick the model with best validation score. We custom-build CoMixBART
and CoMixBERT implementation using transformers (Wolf et al., 2020), YANMTT (Dabre, 2022),
and PyTorch (Paszke et al., 2019).
## 6 Results And Analysis 6.1 Machine Translation
Table 5 shows the results for the LINCE Leaderboard English-Hinglish translation task. CoMix 5We show punctuation-less metrics for Echo baseline for HooD in brackets to correct for the inconsistent punctuation.
| Movies to Shopping domain transfer | Movies to open domain transfer | | | | | | | | | |
|--------------------------------------|----------------------------------|--------------------|-------|-------|----------|---------------|-------|-------|-------|-------|
| Echo | Indic | CoMix | Echo | Indic | CoMix | | | | | |
| BART | BART | | | | | | | | | |
| DKGA | DKGA+ | DKGA+WSG Phonetics | DKGA | DKGA+ | DKGA+WSG | | | | | |
| WSG | WSG | Phonetics | | | | | | | | |
| BLEU | 9.88 (6.49) | 10.37 | 11.95 | 11.95 | 12.15 | 13.23 (10.35) | 16.36 | 17.16 | 17.53 | 18.67 |
| BLEUuncase | 11.72 (6.79) | 11.82 | 13.47 | 13.32 | 13.57 | 14.07 (10.48) | 17.11 | 18.6 | 18.91 | 19.98 |
| PhoBLEU | 7.59 (7.59) | 12.75 | 14.97 | 14.88 | 14.97 | 11.99 (11.99) | 19.04 | 20.78 | 21.16 | 22.13 |
Table 3: Metrics for models trained on LINCE English-Hinglish dataset and tested on HooD dataset.5
Indic
BART
CoMix
DKGA **DKGA**
+WSG
DKGA
+Phonetics
DKGA+WSG
+Phonetics
BLEU 11.86 13.43 13.25 **13.88** 13.85
BLEU*uncase* 13.98 15.65 15.66 **16.35** 16.34
PhoBLEU 17.38 18.63 18.43 19.10 **19.13**
Table 4: Val set results on LINCE Eng-Hinglish dataset.
Table 5: LINCE Leaderboard scores on EnglishHinglish translation test set.
| Indic | m | mT5+ | CoMix | | | |
|----------------|------------|---------------|---------|-------|-------|-------|
| BART BART CMDR | DKGA+ | DKGA DKGA+WSG | | | | |
| Phonetics +WSG | +Phonetics | | | | | |
| BLEU | 11.20 | 11 | 12.67 | 12.98 | 12.41 | 12.51 |
| #params | 244M | 610M | 580M | 273M | 244M | 273M |
achieves the new SOTA result with 12.98 BLEU
points beating previous best mT5 based model that is more than double the size. Validation set scores in Table 4 show that CoMix beats IndicBART by over 2 BLEU and 1.7 PhoBLEU points besides yielding faster convergence (see Appendix A.4.3).
To test the generalization capabilities, we also evaluate the above models on out-of-domain HooD
data. Table 3 shows the results with CoMix improving over IndicBART on both the HooD datasets in terms of both BLEU and PhoBLEU metrics, pointing towards better generalization capabilities of CoMix. HooD Open-Domain dataset has higher Code-Mixing Index (CMI) (shown in section 5.2) than HooD Shopping dataset, which is where DKGA+WSG model improves over DKGA model owing to its pretraining procedure which encourages code-mixing (see Appendix A.4.7). To reduce overfitting for phonetic weights on the downstream dataset, here we train DKGA+Phonetic model on actual training data and 40k unlabelled English sentences and DKGA model's predictions. Fig 12 in Appendix A.4.4 compares few sample generated translations from IndicBART and CoMix.
## 6.2 Named Entity Recognition
Table 6 shows weighted F1-score from LINCE
leaderboard for Hinglish NER task. All the CoMixBERT components and their combinations beat baseline IndicBERT by 0.77-2.42 points and SOTA XLM-R large model, which has 10x more parameters, by 0.72-2.37 points. Since combinations yield upto 1.65 points boost, it is likely they capture different facets of code-mixing.
## 6.3 Sequence Classification
| Baseline | CoMix | DKGA | | | | |
|----------------|-----------|--------|----------|-------|-------|-------|
| BERT | Phonetics | DKGA | WSG DKGA | | | |
| Indic | +WSG+ | | | | | |
| +WSG Phonetics | | | | | | |
| NER | 80.65 | 82.16 | 81.42 | 81.78 | 82.4 | 83.07 |
| CLS | 64.51 | 65.71 | 64.87 | 66.37 | 67.47 | 67.61 |
Table 6 shows micro-F1 score for Hinglish sentiment classification task where individual CoMix components beat IndicBERT model by 0.36-1.86 points and DKGA+WSG+Phonetics model beats it by over 3 points. Similar to the mBERT training in
(Patwa et al., 2020), we train our model in a minimalistic fashion without any data-augmentation or weighted adversarial loss or token ids that can improve performance. Hence, we do not compare our results against other solutions in Semeval-2020 task and only compare against mBERT and IndicBERT.
## 6.4 Abstractive Summarization
On Gupshup H2H summarization dataset, CoMix beats IndicBART on all metrics (BLEU, PhoBLEU,
R1, R2 and RL) by margin of 0.8 to 2 points as shown in Table 7. CoMix even beats previously published best BLEU results obtained from PEGASUS model (Zhang et al., 2019) but is worse on R1 and R2 metrics. CoMix is worse on recallbased metrics (R1, R2) and better on precision based metrics (BLEU) than PEGASUS and BART
likely because of their ability to recall Englishbased words in the Hinglish summaries as they were pretrained only on English and the Gupshup dataset has been adapted from English conversa-
| IndicBART | PEGASUS | BART | CoMix | | | | |
|-------------|--------------------|-----------------|-----------------|---------------------|---------------------|---------------------|-------------------|
| DKGA | DKGA | DKGA | DKGA+WSG | | | | |
| +WSG | +Phonetics | +Phonetics | | | | | |
| BLEU | 5.9 | 6.16 | 5.96 | 6.4 | 6.25 | 6.72 | 6.09 |
| R1,R2,RL | 30.73, 9.35, 24.74 | 35.69, 11.01, - | 36.28, 11.45, - | 32.39, 10.11, 25.87 | 32.18, 10.13, 25.73 | 32.73, 10.12, 26.02 | 31.72, 9.8, 25.37 |
| BLEU_uncase | 6.15 | - | - | 6.55 | 6.42 | 6.9 | 6.35 |
| PhoBLEU | 6.6 | - | - | 7.33 | 7.01 | 7.64 | 7.18 |
Table 7: Results on Gupshup H2H test set where "-" indicates metrics not reported in prior work.
![7_image_0.png](7_image_0.png)
tional summarization corpus due to which it contains a lot of English named entities and words. We believe CoMixBART performance can further improve if we do pretraining with phonetics in future.
## 6.5 Qualitative Analysis
We examine how well the models in Section 6.1 separate 3655 pairs of words (178 similar, 3477 dissimilar) from 20 sentences in Appendix A.4.6.
Figure 5 shows the distribution of cosine similarity of contextual embeddings (phonetic and textual for CoMix, textual for IndicBART) for similar (green)
and dissimilar (red) pairs . We note that CoMix text embeddings separate the similar and dissimilar pairs better relative to IndicBART. Note that the scores for phonetic embeddings are on the higher side most likely due to initialisation choice (mean of all text tokens mapped to a phonetic token) and the smaller (0.25x of text) vocab size for phonetics.
## 6.6 Efficacy Of Phobleu On Code-Mixed Data
On English-Hinglish translation for the LINCE dataset, we observe that annotations from human experts fluent in both Hindi and English achieve a BLEU score of 10.43 BLEU, which is lower than most MT models. Further analysis revealed that BLEU is unable to account for valid variations in
![7_image_1.png](7_image_1.png)
| Pearson | Spearman | | | |
|--------------------|------------|-------------|---------|-------|
| Correlation | P-value | Correlation | P-value | |
| BLEU | 0.178 | 0.021 | 0.176 | 0.023 |
| BLEUuncase | 0.194 | 0.012 | 0.197 | 0.01 |
| BLEUuncase,nopunct | 0.23 | 0.002 | 0.257 | 0 |
| PhoBLEU | 0.333 | 0 | 0.359 | 0 |
spellings, pronouns, and language switching (LE
vs. LM) as shown in Fig 1. To address these gaps, we consider PhoBLEU [as defined in Section 4]
and evaluate its correlation with human judgements.
We randomly selected 200 English-Hinglish sentence pairs and their system-generated translations to be rated by professional on a scale of 1 to 5, with 1 as poor and 5 as perfect. Completeness (no information lost) and fluency (grammatical correctness)
were considered as the rating criteria. Results in Table 8 and Figure 6 show that PhoBLEU is significantly more correlated with human judgement and that its distribution is better aligned with human ratings than other BLEU variants.
## 7 Extensibility Of Comix
The proposed ideas of domain-knowledge guided attention, weak supervision, and phonetic representations are not specific to Hinglish and readily generalize to any language pair where we have a parallel corpus of embedded and matrix language content and tools for POS tagging and phonetic transcription. Below we discuss these requirements along with other assumptions on the POS structure that permits extensions of our methodology to most common code-mixed language pairs.
Assumption 1: Availability of parallel corpora.
Most common code-mixed languages happen to include English among the language pair which is typically transcribed in Latin script that permits easy phonetic transcription through tools such as Pyphonetics. Currently, there also exist multiple large parallel corpora (e.g. Flores, CCMatrix, Samanantar) where sentences in English are paired with that of multiple other languages. There are also many ongoing initiatives for creating such parallel corpora even for low-resource languages.
Hence, the requirement of a large parallel corpus of matrix and embedded language content is satisfied by most common code-mixed pairs.
Assumption 2: Availability of pretrained multilingual models. With the proliferation of massively multilingual foundational models (e.g., mBART
(50 languages), mBERT (104 languages), T5 (101 languages)) including advances in synthetic data augmentation, our assumption on the availability of pretrained LLMs or datasets to pretrain those models is also a reasonable one. We choose to work with IndicBART and IndicBERT which support 11 and 12 Indic languages respectively because they provide stronger baselines for Indic Languages and are faster to experiment with because of their smaller size, but the proposed ideas can be readily applied with any pre-trained transformer model.
Assumption 3: Languages of LM and LE **share**
the same POS set and access to POS tagging utilities. (Petrov et al., 2012) proposed Universal POS
tagset comprising 12 categories that exist across languages and had developed a mapping from 25 language-specific tagsets to this universal set. They demonstrated empirically that the universal POS
categories generalize well across language boundaries and led to a open community initiative by universaldependencies.org on creating Universal POS tags (Nivre et al., 2020) for 90 languages. In our work, we use these universal POS tags to build three coarse groups (nouns-pronouns, adjectivesverbs-adverbs, rest) of POS tags (see Fig 7). Note that even though we utilize POS-tagging, the structural constraints are imposed with respect to these three coarse groups. Fig 7 in A.1 lists the POS tags from the universal POS tags website, which we use in our work. Further, Stanza provides Universal POS-tagging utilities for around 66 languages.
Assumption 4: Equivalent word pair from LM
and LE **share the same coarse POS group.** We assume that equivalent words in an LM and LE pair share the same coarse POS group (from Fig 7 ) and not necessarily the same POS tag. A small-scale empirical analysis of 50 Hindi-English-Hinglish sentences from the HooD dataset (Sec 5.2) indicates this assumption is true in 88.6% of the cases.
POS-tags provide complementary (weak) supervision for intermixing (in DKGA) and generation (in WSG) in addition to word semantics already captured in embeddings. Further, even though our current guiding function f DKGA assumes a hard constraint on the word pairs to be in the same coarse POS group, our methodology is general and can be extended to the case where the two languages have different POS tag sets. In particular, given empirical probabilities that a matrix token with POS
tag A maps to an embedded token with POS tag B
for all possible pairs of POS tags (*A, B*), we can define the guiding function's value f DKGA
ij associated with matrix token xi and embedded token xj as the log of the empirical transition probability of the POS tags of the matrix token xi and embedded token xj . The current choice is the special case where transition probability is uniform for all POS
tag pairs within a coarse group and 0 for the rest.
## 8 Conclusion
We presented CoMix, a pretraining approach for code-mixed data that combines (a) domain knowledge guided attention (DKGA), (b) weakly supervised code-mixed generation based on POSstructure constraints, and (c) transformer encoder modification to include phonetic signal. We showed that CoMix yields improvements across multiple code-mixed tasks, achieving new SOTA result for Eng-Hinglish translation and Hinglish NER on LINCE Leaderboard with superior performance on out-of-domain translation. Our approach is applicable to code-mixing with all languages where POS tagging and phonetic transcription is possible. Motivated by gaps in current NLG evaluation metrics for code-mixed data, we proposed a new family of metrics based on phonetic representation and show that PhoBLEU is better correlated with human judgement than BLEU on Hinglish.
In future, we plan to extend the applicability of DKGA and WSG to other settings that can benefit from domain knowledge, and explore new metrics for code-mixed NLG with a large scale evaluation.
## Limitations
Our CoMix approach assumes availability of parallel bilingual (embedded and matrix language) corpora and mature tools for POS tagging and phonetic transcription for both the embedded and matrix languages which does not hold true for every language.
But these assumptions are reasonable for a large number of languages as shown in Appendix 7. Second, our current choice of guiding function for attention f DKGA and mixing probability p Mix are based on limited knowledge of the linguistic structure specific to English and Indic languages, and might need to be adapted for other language families. Additionally, as discussed in Section 4, due to multiple variations in code-mixed generation, current automated metrics that compare system generated text with reference text do not provide a true reflection of a system's ability to generate code-mixed text. Lastly, as with large language models, our CoMix models are also vulnerable to biases inherent in the training corpus.
## Ethics Statement
Our research motivation is to address the inequities in language resources and AI systems for multilingual societies such as India. The primary contribution of our work is a new modeling approach CoMix, which is especially designed to leverage existing pretrained models with moderate computation so that it is accessible to a wider community and does not create an adverse environmental impact. We also created two new Hinglish datasets for out-of-domain evaluation (HooD), which we described in detail in Section 5.2. There are no privacy or intellectual property rights associated with either of these datasets. We will open-source HooD,
our models and code in future post organizational approval. Human translations and evaluations reported in the paper have been done by professional annotation teams and are reflective of typical performance. Similar to other large language models, our CoMix model also encodes biases in the original training corpus and the domain constraints used as supervision. While the performance might be acceptable for natural language understanding, it is important to have guardrails while using the models directly for natural language generation.
## References
Gustavo Aguilar, Sudipta Kar, and Thamar Solorio.
2020. LinCE: A centralized benchmark for linguistic code-switching evaluation. *CoRR*, abs/2005.04322.
Satanjeev Banerjee and Alon Lavie. 2005. METEOR:
An automatic metric for MT evaluation with improved correlation with human judgments. In *Proceedings of the ACL Workshop on Intrinsic and Extrinsic Evaluation Measures for Machine Translation and/or Summarization*, pages 65–72, Ann Arbor, Michigan. Association for Computational Linguistics.
Anshul Bawa, Pranav Khadpe, Pratik Joshi, Kalika Bali, and Monojit Choudhury. 2020. Do multilingual users prefer chat-bots that code-mix? Let's nudge and find out! *Proc. ACM Hum.-Comput. Interact.*,
4(CSCW1).
Shuguang Chen, Gustavo Aguilar, Anirudh Srinivasan, Mona T. Diab, and Thamar Solorio. 2022. Calcs 2021 shared task: Machine translation for code-switched data. *ArXiv*, abs/2202.09625.
Monojit Choudhury, Anirudh Srinivasan, and Sandipan Dandapat. 2019. Processing and understanding mixed language data. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing
(EMNLP-IJCNLP): Tutorial Abstracts, Hong Kong, China. Association for Computational Linguistics.
Raj Dabre. 2022. YANMTT library. https://
github.com/prajdabre/yanmtt. [Online; accessed 29-May-2022].
Raj Dabre, Himani Shrotriya, Anoop Kunchukuttan, Ratish Puduppully, Mitesh M. Khapra, and Pratyush Kumar. 2021. IndicBART: A pre-trained model for natural language generation of Indic languages.
ArXiv, abs/2109.02903.
Amitava Das and Björn Gambäck. 2014. Identifying languages at the word level in code-mixed Indian social media text. In *Proceedings of the 11th International Conference on Natural Language Processing*,
pages 378–387, Goa, India. NLP Association of India.
Universal Dependencies. 2014. Universal POS
tags. https://universaldependencies.org/u/ pos/all.html. [Online; accessed 29-May-2022].
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: pre-training of deep bidirectional transformers for language understanding. *CoRR*, abs/1810.04805.
Bogdan Gliwa, Iwona Mochol, Maciej Biesek, and Aleksander Wawer. 2019. SAMSum corpus: A humanannotated dialogue dataset for abstractive summarization. In *Proceedings of the 2nd Workshop on* New Frontiers in Summarization, pages 70–79, Hong
Kong, China. Association for Computational Linguistics.
GTS. 2019. Hinglish - the biggest language you've never heard of with 350 million speakers. https://blog.gts-translation.com/
2019/06/12/hinglish-the-biggest-languageyouve-never-heard-of-with-350-millionspeakers/. [Online; accessed 29-May-2022].
IndicBART. 2022. IndicBART GitHub Repo. https:
//github.com/AI4Bharat/indic-bart. [Online; accessed 29-May-2022].
Ganesh Jawahar, El Moatez Billah Nagoudi, Muhammad Abdul-Mageed, and Laks Lakshmanan, V.S.
2021. Exploring text-to-text transformers for English to Hinglish machine translation with synthetic code-mixing. In *Proceedings of the Fifth Workshop on Computational Approaches to Linguistic* Code-Switching, pages 36–46, Online. Association for Computational Linguistics.
Divyanshu Kakwani, Anoop Kunchukuttan, Satish Golla, Gokul N.C., Avik Bhattacharyya, Mitesh M.
Khapra, and Pratyush Kumar. 2020. IndicNLPSuite:
Monolingual Corpora, Evaluation Benchmarks and Pre-trained Multilingual Language Models for Indian Languages. In *Findings of EMNLP*.
Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020.
BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*,
pages 7871–7880, Online. Association for Computational Linguistics.
L Li, M Xu, X Wang, L Jiang, and H Liu. 2019. Attention based glaucoma detection: A large-scale database and CNN model. In *CVPR*, pages 571–580.
Chin-Yew Lin. 2004. ROUGE: A package for automatic evaluation of summaries. In *Text Summarization Branches Out*, pages 74–81, Barcelona, Spain.
Association for Computational Linguistics.
Yinhan Liu, Jiatao Gu, Naman Goyal, Xian Li, Sergey Edunov, Marjan Ghazvininejad, Mike Lewis, and Luke Zettlemoyer. 2020. Multilingual denoising pretraining for neural machine translation. *Transactions of the Association for Computational Linguistics*, 8:726–742.
Laiba Mehnaz, Debanjan Mahata, Rakesh Gosangi, Uma Sushmitha Gunturi, Riya Jain, Gauri Gupta, Amardeep Kumar, Isabelle G. Lee, Anish Acharya, and Rajiv Ratn Shah. 2021. GupShup: Summarizing open-domain code-switched conversations. In *Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing*, pages 6177–
6192, Online and Punta Cana, Dominican Republic.
Association for Computational Linguistics.
Joakim Nivre, Marie-Catherine de Marneffe, Filip Ginter, Jan Hajic, Christopher D. Manning, Sampo ˇ
Pyysalo, Sebastian Schuster, Francis Tyers, and Daniel Zeman. 2020. Universal Dependencies v2:
An evergrowing multilingual treebank collection. In Proceedings of the Twelfth Language Resources and Evaluation Conference, pages 4034–4043, Marseille, France. European Language Resources Association.
Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. BLEU: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 311–318, Philadelphia, Pennsylvania, USA. Association for Computational Linguistics.
Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Kopf, Edward Yang, Zachary DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. 2019. Pytorch:
An imperative style, high-performance deep learning library. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett, editors, Advances in Neural Information Processing Systems 32, pages 8024–8035. Curran Associates, Inc.
Parth Patwa, Gustavo Aguilar, Sudipta Kar, Suraj Pandey, Srinivas PYKL, Björn Gambäck, Tanmoy Chakraborty, Thamar Solorio, and Amitava Das.
2020. SemEval-2020 task 9: Overview of sentiment analysis of code-mixed tweets. In *Proceedings of the* Fourteenth Workshop on Semantic Evaluation, pages 774–790, Barcelona (online). International Committee for Computational Linguistics.
Slav Petrov, Dipanjan Das, and Ryan McDonald. 2012.
A universal part-of-speech tagset. In Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC'12), pages 2089–
2096, Istanbul, Turkey. European Language Resources Association (ELRA).
Shana Poplack. 1980. Sometimes I'll start a sentence in Spanish Y TERMINO EN ESPAÑOL: toward a typology of code-switching. 18(7-8):581–618.
Gowtham Ramesh, Sumanth Doddapaneni, Aravinth Bheemaraj, Mayank Jobanputra, Raghavan AK,
Ajitesh Sharma, Sujit Sahoo, Harshita Diddee, Mahalakshmi J, Divyanshu Kakwani, Navneet Kumar, Aswin Pradeep, Srihari Nagaraj, Kumar Deepak, Vivek Raghavan, Anoop Kunchukuttan, Pratyush Kumar, and Mitesh Shantadevi Khapra. 2021. Samanantar: The largest publicly available parallel corpora collection for 11 Indic languages.
Mohd Sanad Zaki Rizvi, Anirudh Srinivasan, Tanuja Ganu, Monojit Choudhury, and Sunayana Sitaram.
2021. Gcm: A toolkit for generating synthetic codemixed text. In *2021 Conference of the European*
Chapter of the Association for Computational Linguistics, pages 205–211. Association for Computational Linguistics.
Kushagra Singh, Indira Sen, and Ponnurangam Kumaraguru. 2018. Language identification and named entity recognition in Hinglish code mixed tweets. In Proceedings of ACL 2018, Student Research Workshop, pages 52–58, Melbourne, Australia. Association for Computational Linguistics.
Víctor Soto and Julia Hirschberg. 2019. Improving code-switched language modeling performance using cognate features. In *INTERSPEECH*.
Mukuntha Narayanan Sundararaman, Ayush Kumar, and Jithendra Vepa. 2021. Phoneme-BERT: Joint language modelling of phoneme sequence and ASR transcript. In *Interspeech*.
Ashish Vaswani, Noam M. Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. *ArXiv*, abs/1706.03762.
Genta Indra Winata, Samuel Cahyawijaya, Zihan Liu, Zhaojiang Lin, Andrea Madotto, and Pascale Fung.
2021. Are multilingual models effective in codeswitching? *CoRR*, abs/2103.13309.
Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander M. Rush. 2020. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38–45, Online. Association for Computational Linguistics.
Jingqing Zhang, Yao Zhao, Mohammad Saleh, and Peter J. Liu. 2019. PEGASUS: pre-training with extracted gap-sentences for abstractive summarization.
CoRR, abs/1912.08777.
## A Appendix A.1 Classes Of Pos Tags
Figure 7 shows the POS tag groups used by DKGA.
We built these groups using information from Universal Dependencies (Dependencies, 2014).
Figure 7: Classes of POS Tags (POS Groups) used by guiding function in DKGA.
## A.2 Formulations Of Downstream Tasks A.2.1 Machine Translation
Given a source sequence X = [x1, x2*, ...x*S] and target sequence Y = [y1, y2*, ...y*T ], autoregressive neural machine translation system learns to model the following distribution
$$P(Y|X)=\prod_{t=1}^{T}P(y_{t}|y_{0},y_{1},...,y_{t-1},x_{1},x_{2},...,x_{S})\tag{4}$$
$\mathbf{r}=\mathbf{f}\left(\pi r(i),\pi r(i)\right)$ 3.
Given a training set D = {⟨X(i), Y (i)⟩}M
i=0 with P
M data points, we aim to maximize Lθ =
M
i=0 log P(Y
(i)|X(i); θ) where θ is set of model parameters.
## A.2.2 Sequence Classification
In sequence classification task, we are given a sequence X = [x1, x2*, ...x*S] and corresponding label y ∈ {y1, y2*, ..., y*k} from fixed set of k classes. Given a training set D = {⟨X(i), y(i)⟩}M
i=0 with P
M data points, we aim to maximize Lθ =
M
i=0 log P(y
(i)|X(i); θ)
## A.2.3 Abstractive Summarization
Mathematical formulation for summarization is same as translation so we avoid repeating it here for brevity. In abstractive summarization, unlike translation, target sequence Y is a concise summary of source sequence X, usually much shorter in length than X.
## A.2.4 Token Classification
In token classification task, we are given a sequence X = [x1, x2*, ...x*S] and corresponding label ys ∈ {y1, y2, ..., yk}∀s ∈ {1, S} for every input token, where {y1, y2*, ..., y*k} is the fixed set of k classes. Given a training set D =
{⟨X(i), Y (i)⟩}M
i=0 with M data points, we aim to maximize Lθ =PM
i=0 log P(Y
(i)|X(i); θ)
## A.3 More Details About Datasets A.3.1 Guidelines For Preparing Hood Shopping Dataset
Figure 8 shows the guidelines given to human annotators for translating English sentences to Hinglish for HooD Shopping dataset.
$\phi(D)=\phi_{\text{mod}}\phi_{\text{mod}}$ .
## A.3.2 Data Statistics Of Public Datasets
Table 9 shows statistics for public datasets which we have used for downstream tasks.
Figure 8: Guidelines given to Human annotators for
![12_image_1.png](12_image_1.png)
translating English sentences to Hinglish.
| Dataset | Number of Datapoints | | |
|----------------------------------------------|------------------------|------|------|
| Name | Train | Dev | Test |
| LINCE English - Hinglish Translation Dataset | 8060 | 942 | 960 |
| SemEval 2020 Task-9 Hinglish Sentimix | 14000 | 3000 | 3000 |
| Gupshup H2H | 5831 | 500 | 500 |
| LINCE Hinglish NER | 1243 | 314 | 522 |
Table 9: Statistics of public datasets we have used for code-mixed Machine Translation, Sequence Classification, Abstractive Summarization and NER.
## A.4 Additional Experimental Details And Results A.4.1 Details About Tokenization
For Phonetics data, we train our own sub-word tokenizer using sentencepiece6. For text data we use pretrained IndicBART's tokenizer for CoMixBART and IndicBERT's tokenizer for CoMixBERT. We consider sub-words POS to be same as the POS of the word from which sub-words have been created.
## A.4.2 **More Details On Pretraining With Dkga** And Wsg
Figure 9 shows the learning curve for pretraining CoMixBART. As you can see from the curve, loss stabilizes after 25k steps and does not change much.
Figure 10 shows the learning curve for pretraining ComixBART with WSG.
Table 10 shows few sample inputs which went into the model during WSG training and corresponding targets constructed by the model.
These generated sentences can be used for data6https://github.com/google/sentencepiece Figure 9: Change in negative log-likelihood loss with
![12_image_0.png](12_image_0.png)
![12_image_2.png](12_image_2.png)
training steps for Pretraining CoMixBART with DKGA.
Figure 10: Change in negative log-likelihood loss with training steps for Pretraining CoMixBART with WSG.
augmentation which we plan to explore in the future.
## A.4.3 Convergence For Indicbart Vs Comix On Lince Leaderboard Translation Task
Figure 11 shows the convergence speed for IndicBART and CoMix models for LINCE EnglishHinglish translation task. As you can see from the curve, CoMix is better than baseline IndicBART
throughout training.
Figure 11: Change in BLEU score with training steps
![12_image_3.png](12_image_3.png)
for IndicBART and CoMix.
## A.4.4 Qualitative Analysis On Few Sample Predictions For Code-Mixed Translation
Figure 12 shows few example translations generated by IndicBART and CoMix and their qualitative analysis.
## A.4.5 Set Of Sentences For Cosine Similarity Distribution
Figure 13 shows the 20 sentences from which every pair of word was labelled similar/dissimilar manually and then used to create Figure 5 that shows cosine similarity score distribution of contextual embeddings obtained from the encoder of CoMix and IndicBART.
| Input | Target built by WSG |
|-------------------------------------------------------------------------|--------------------------------------------------------|
| hamen koi naaraazgi bhi nahin he. Illl We dont have any complaint. | hamen koi complaint bhi nahin he. |
| jald hi aapako apadet milegaa. Ill There will be an update soon. | jald hi aapako update milegaa. |
| vaatayan antarrashtriya puraskaar, landan ke vaatayan-euke sanghathan | |
| dwaara diya jaataa he IIII | vaatayan work puraskaar, UK ke artists sanghathan |
| Vatayan International Awards given by the Vatayan - | dwaara International jaataa he |
| UK organization in London, honours poets, | |
| writers and artists for their exemplary work in their respective fields | |
| LIVE / jharkhand main shuruaati rujhaanon main bhaajapa | LIVE / jharkhand main ahead rujhaanon main bhaajapa 18 |
| 18 congress+ 37 siton par aage III | Congress 37 seats par leading |
| Congress leading 37 seats, BJP ahead in 18 | iske baad MLA vahaan se left |
| iske baad vidhayak vahaan se hate. Illl The MLA then left the venue. | |
Table 10: Few sample inputs which went into the model during WSG training and corresponding targets built by the model. Words in bold are the embedding language (English) words.
| Hinglish | | | | |
|-----------------------------------------|----------------------------------------|---------------------------------------|---------------------------------------------|--------------------------------------------|
| English | Reference - Human | IndicBART generated | Comix generated | |
| generated | | | | |
| Good, thank you for | acha, thank you aapke | Good, thank you for | IndicBART struggles with gender specific | |
| Good, puchne ke live thanks! | | | | |
| asking! I'm doing well | Mai aaj acha hu. Aap kaise | liye! mein achchi kar raha | asking! Main aaj achcha | pronouns (use of achchi vs achcha, |
| tumhara vs tumhari), whereas Comix does | | | | |
| today. How are you? | not | hoon, kaise ho? | xr raha hu. Tum kaise ho? | good job. Also notice the use of different |
| Cortainly! What is your | Ji haan! Aapke wife ka | Certainly! tumhara wife ka | Certainly! tumhari wife ka | orthographic forms across 3 generations |
| wife's favorite type of | favorite type ka gift kya | favorite type ka gift kya. | [Mai, mein, Main), (acha, achcha), (hu, | |
| favourite type of gift kya hain? | | | | |
| gift? | hai? | hai? | hoon)] | |
| mai dhanyavaad, sapse | Comix gets the tense right on "had to | | | |
| I am, thank you for | I am, thanks puchne ke liye. | mein tho thank you... voh | coochne ke liye, yah thoda | call", IndicBART doesn't. Comix doesn't |
| asking. It was just a little | Mai bas thoda dar rahi thi. | thoda scary hein.. mein | scary tha. mujhe 911 call | differentiate well between "aapse vs |
| scary. I had to call 911. | Mujhe 911 ko call karna pada. | 911 ko call karunga | karna pada | mujhse, mujhe vs mera" etc. |
| I sure am! We're getting a | I sure am! Hume bahut saare | me sure am! We're getting | I sure am! We're getting a | |
| lot of orders in and | order mil rahe hain aur har koil | a lot of orders in and | lot of order aa rahe hai aur | |
| everyone is working hard | uhen samay par nikaalane ke | everyone is working hard | Even though Comix is slightly better than | |
| sabhi kaam karne me hard | | | | |
| to get them out on time. | lie bohot mehanat kar raha | to get them out on time. | IndicBART here, it certainly wasn't perfect | |
| ho rahe hai. Thanks for | | | | |
| Thanks for your patience | hai. Aapke patience ke lie | Thanks tumhe patience as | likely because of the difficulty of the | |
| your patience as we work | | | | |
| as we work to get | thanks kyonki hum sabke | we work to get everyone's | sentence and its length. | |
| everyone's orders out as | orders ko jald se jald poora | to get everyone's orders | | |
| orders out as soon as | | | | |
| possible possible | out as soon as possible. | | | |
| son as possible. | tarane ke lie kaam karte hain. | | | |
| I sure can! What type of | Mai bilkul kar sakta hua! Aap | me sure can! konsa type | Comix does great job with getting | |
| I sure can! Tum kis type ke | | | | |
| clothing are you looking | kis type ka clothing dhund | pronouns, grammar and everything else | | |
| ka clothing aapko dekh | clothing dekh rahe ho? | | | |
| rahe hai? | rahe hain? | perfect. | | |
## A.4.6 Comix Vs Indicbart Cosine Similarity Distribution Of Contextual Embeddings
Figure 11 shows the mean and variance of the cosine similarity distribution of 3655 word pairs constructed from the 20 sentences in Figure 13 along different subsets of positive pairs and close negative pairs. We observe that the cosine score for positive pairs based on CoMix text embeddings has a bimodal distribution with high scores for those with same language and spelling, but relative low scores when that is not the case. However, even these low scoring positive pairs are comparable or score higher than close negatives. In the case of IndicBART, we again observe a bimodal distribution for the negative pairs with high scores for pairs that have different semantics but share either the spelling or phonetic representation, which makes it difficult to separate it from the positive pairs. CoMix phonetic embeddings by itself does not seem to be very discriminatory but it is helpful in making up for the shortcomings of CoMix text embeddings for handling phonetic variations.
## A.4.7 Dkga Vs Wsg. Who'S Code-Mixing More?
Since in WSG we're nudging the model to codemix, that behaviour is visible in the generated translations by the two models as well. Figure 14 shows few randomly sampled translations generated by DKGA and DKGA+WSG models. It is visible from the translations that DKGA+WSG model is switching between matrix and embedded language more often, because of its pretraining.
## A.4.8 Ner And Classification Results
Table 12 shows results on validation and test sets for NER and Sequence Classification tasks.
## Example Dkga Attention Matrix
A.4.9 Fig 15 shows DKGA attention matrix for an example sentence.
Table 11: Mean and Variance of the cosine similarity distribution of 3655 word pairs constructed from the 20 sentences in Figure 13 along different subsets of positive pairs and close negative pairs.
| Comix Text | Comix Phonetics | IndicBART | Example | |
|-----------------------------------------------------|------------------------------|--------------|--------------|-------------------------------------------------------------|
| All Pairs | (0.31,0.14) | (0.89,0.04) | (0.47,0.16) | |
| All | (0.68,0.20) | (0.95,0.04) | (0.73,0.23) | |
| Same language & spelling | (0.86, 0.10) | (0.97,0.03) | (0.94, 0.04) | (mein school jaa raha hun, I am going to school) |
| Same language & phonetics, but different spelling | (0.56,0.12) | (0.97, 0.03) | (0.64,0.13) | (Maine has a beautiful coast, Maine ka coost aakarshak hai) |
| Same language but different spelling & phonetics | (0.51, 0.13) | (0.91, 0.03) | (0.64,0.11) | (Maine ka coost aakarshak hai, Maine ka kinara sundar hai) |
| spelling & phonetics | (0.48, 0.09) | (0.92, 0.02) | (0.44, 0.09) | {Maine ka coost aakarshak hai, |
| Different language, | Maine has a beautiful coast} | | | |
| Positive Pairs | All | (0.29,0.10) | (0.88,0.04) | (0.46,0.15) |
| Different language but same spelling | (0.65,0.12) | (0.99, 0.00) | (0.82 0.08) | {main karan kya hai, |
| Negative Pairs | main paatshala jaa raha hun} | | | |
| Same language & phonetics, but different spelling | (0.43, 0.07) | (0.95, 0.07 | (0.67,0.10) | {main karan kya hai, |
| Maine ka coast sundar hai} | | | | |
| Different language and spelling, but same phonetics | (0.53,0.11) | (0.97,0.02) | (0.64,0.12) | {maine rang badal diya, main vajah kya hai} |
Table 12: Validation and Test set results of NER and Classification set
| IndicBERT | CoMix | | | | | | |
|----------------|---------|-------|----------|--------------------|-------|-------|-------|
| Phonetics | DKGA | WSG | DKGA+WSG | DKGA+WSG+Phonetics | | | |
| NER | Val | 79.9 | 80.48 | 81.31 | 79.85 | 82.25 | 81.28 |
| Test | 80.65 | 82.16 | 81.42 | 81.78 | 82.4 | 83.07 | |
| Classification | Val | 59.41 | 59.57 | 58.9 | 60.14 | 60.8 | 60.9 |
| Test | 64.51 | 65.71 | 64.87 | 66.37 | 67.47 | 67.61 | |
![15_image_0.png](15_image_0.png)
![15_image_1.png](15_image_1.png)
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
A1. Did you describe the limitations of your work?
Left blank.
A2. Did you discuss any potential risks of your work?
Left blank.
A3. Do the abstract and introduction summarize the paper's main claims?
Left blank.
A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B **Did You Use Or Create Scientific Artifacts?**
Left blank.
B1. Did you cite the creators of artifacts you used?
Left blank.
B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
Left blank.
B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
Left blank.
B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
Left blank.
B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
Left blank.
B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
Left blank.
## C **Did You Run Computational Experiments?**
Left blank.
C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
Left blank.
The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance.
C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Left blank.
C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
Left blank.
C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
Left blank.
D **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
Left blank.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)?
Left blank.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
Left blank.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
Left blank.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
Left blank. |
hsieh-etal-2023-distilling | Distilling Step-by-Step! Outperforming Larger Language Models with Less Training Data and Smaller Model Sizes | https://aclanthology.org/2023.findings-acl.507 | Deploying large language models (LLMs) is challenging because they are memory inefficient and compute-intensive for practical applications. In reaction, researchers train smaller task-specific models by either finetuning with human labels or distilling using LLM-generated labels. However, finetuning and distillation require large amounts of training data to achieve comparable performance to LLMs. We introduce Distilling step-by-step, a new mechanism that (a) trains smaller models that outperform LLMs, and (b) achieves so by leveraging less training data needed by finetuning or distillation. Our method extracts LLM rationales as additional supervision for training small models within a multi-task framework. We present three findings across 4 NLP benchmarks: First, compared to both finetuning and distillation, our mechanism achieves better performance with much fewer labeled/unlabeled training examples. Second, compared to few-shot prompted LLMs, we achieve better performance using substantially smaller model sizes. Third, we reduce both the model size and the amount of data required to outperform LLMs; our finetuned 770M T5 model outperforms the few-shot prompted 540B PaLM model using only 80{\%} of available data on a benchmark, whereas standard finetuning the same T5 model struggles to match even by using 100{\%} of the dataset. | # Distilling Step-By-Step! Outperforming Larger Language Models With Less Training Data And Smaller Model Sizes
Cheng-Yu Hsieh1∗
, Chun-Liang Li2, Chih-Kuan Yeh3**, Hootan Nakhost**2, Yasuhisa Fujii3, Alexander Ratner1, Ranjay Krishna1, Chen-Yu Lee2**, Tomas Pfister**2 1University of Washington, 2Google Cloud AI Research, 3Google Research [email protected]
## Abstract
Deploying large language models (LLMs) is challenging because they are memory inefficient and compute-intensive for practical applications. In reaction, researchers train smaller task-specific models by either finetuning with human labels or distilling using LLM-generated labels. However, finetuning and distillation require large amounts of training data to achieve comparable performance to LLMs. We introduce *Distilling step-by-step*, a new mechanism that (a) trains smaller models that outperform LLMs, and (b) achieves so by leveraging less training data needed by finetuning or distillation. Our method extracts LLM
rationales as additional supervision for training small models within a multi-task framework. We present three findings across 4 NLP benchmarks: First, compared to both finetuning and distillation, our mechanism achieves better performance with much fewer labeled/unlabeled training examples. Second, compared to few-shot prompted LLMs, we achieve better performance using substantially smaller model sizes. Third, we reduce both the model size and the amount of data required to outperform LLMs; our finetuned 770M
T5 model outperforms the few-shot prompted 540B PaLM model using only 80% of available data on a benchmark, whereas standard finetuning the same T5 model struggles to match even by using 100% of the dataset.1
## 1 Introduction
Despite the impressive few-shot ability offered by large language models (LLMs) (Brown et al., 2020; Chowdhery et al., 2022; Thoppilan et al., 2022; Hoffmann et al., 2022; Smith et al., 2022b; Zhang et al., 2022), these models are challenging to deploy in real world applications due to their sheer
![0_image_0.png](0_image_0.png)
size. Serving a single 175 billion LLM requires at least 350GB GPU memory using specialized infrastructure (Zheng et al., 2022). To make matters worse, today's state-of-the-art LLMs are composed of over 500B parameters (Chowdhery et al., 2022),
requiring significantly more memory and compute.
Such computational requirements are far beyond affordable for most product teams, especially for applications that require low latency performance.
To circumvent these deployment challenges of large models, practitioners often choose to deploy smaller specialized models instead. These smaller models are trained using one of two common paradigms: finetuning or *distillation*.
Finetuning updates a pretrained smaller model
(e.g. BERT (Devlin et al., 2018) or T5 (Raffel et al., 2020)) using downstream human annotated data (Howard and Ruder, 2018). Distillation trains the same smaller models with labels generated by a larger LLM (Tang et al., 2019; Wang et al., 2021; Smith et al., 2022a; Arora et al., 2022). Unfortunately, these paradigms reduce model size at a cost:
to achieve comparable performance to LLMs, finetuning requires expensive human labels, and distillation requires large amounts of unlabeled data which can be hard to obtain (Tang et al., 2019; Liang et al., 2020).
In this work, we introduce **Distilling step-bystep**, a new simple mechanism for training smaller models with less training data. Our mechanism reduces the amount of training data required for both finetuning and distillation of LLMs into smaller model sizes. Core to our mechanism is changing our perspective from viewing LLMs as a source of noisy labels to viewing them as agents that can reason: LLMs can produce natural language rationales justifying their predicted labels (Wei et al.,
2022; Kojima et al., 2022). For example, when asked "Jesse's room is 11 feet long and 15 *feet* wide. If she already has 16 *square feet of carpet.*
How much more carpet does she need to cover the whole floor?", an LLM can be prompted by chain-of-thought (CoT) technique (Wei et al., 2022)
to provide intermediate rationales "Area = *length*
× width. Jesse's room has 11 × 15 *square feet.*"
that better connects the input to the final answer
"(11 × 15) − 16". These *rationales* can contain relevant task knowledge, such as "Area = *length* ×
width", that may originally require many data for small task-specific models to learn. We thus utilize these extracted rationales as additional, richer information to train small models through a multi-task training setup, with both label prediction and rationale prediction tasks (Raffel et al., 2020; Narang et al., 2020).
Distilling step-by-step allows us to learn taskspecific smaller models that outperform LLMs using over 500× less model parameters, and it does so with far fewer training examples compared to traditional finetuning or distillation (Figure 1). Our results show three promising empirical conclusions across 4 NLP benchmarks. First, compared to both finetuning and distillation, our resulting models achieve better performance with over 50% less training examples on average across datasets (and up to over 85% reduction). Second, our models outperform LLMs with much smaller model sizes
(up to 2000× smaller), drastically reducing the computation cost required for model deployment.
Third, we simultaneously reduce the model size as well as the amount of data required to outperform LLMs. We surpass the performance of 540B
parameter LLMs using a 770M T5 model; this smaller model only uses 80% of a labeled dataset that would otherwise be required if using an existing finetuning method. When only unlabeled data is present, our small models still perform on par or better than LLMs. We outperform 540B PaLM's performance with only a 11B T5 model. We further show that when a smaller model performs worse than an LLM, Distilling step-by-step can more efficiently leverage additional unlabeled data to match the LLM performance compared to the standard distillation approach.
## 2 Related Work
Our work distills task-specific knowledge of LLMs into smaller specialist models by leveraging the emergent reasoning capabilities of today's LLMs.
We draw on recent knowledge distillation research and other methods that learn from both humangenerated rationales and LLM-generated rationales.
Knowledge distillation from large models. Knowledge distillation has been successfully used to transfer knowledge from larger, more competent teacher models into smaller student models affordable for practical applications (Bucilua et al. ˇ , 2006; Hinton et al., 2015; Beyer et al., 2022; West et al.,
2021). It supports learning from limited labeled data, since the larger teacher model is often used to generate a training dataset with noisy pseudo labels (Chen et al., 2020; Iliopoulos et al., 2022; Wang et al., 2021; Smith et al., 2022a; Arora et al.,
2022; Agrawal et al., 2022). The one limitation that knowledge distillation often faces is its reliance on large amounts of unlabelled data required to create a useful noisy training dataset. Although prior work has explored using data augmentation techniques to reduce this hunger for data (Tang et al.,
2019; Liang et al., 2020; Srinivas and Fleuret, 2018; Milli et al., 2019), we propose an alternative approach: we reduce the need for large unlabeled data by distilling not just labels but also the teacher's rationales.
Learning with human rationales. While utilizing LLM-generated rationales is a new exciting area of investigation, using human-generated rationales has a rich history (Hase and Bansal, 2021). For instance, human rationales can be used to regularize model behavior (Ross et al., 2017); it can be used as additional inputs to guide a model's predictions (Rajani et al., 2019); it can be used to improve overall model performance (Zaidan et al.,
2007; Zhang et al., 2016; Camburu et al., 2018;
![2_image_0.png](2_image_0.png)
Hancock et al., 2019; Pruthi et al., 2022); and human rationales can be used as gold standard labels to make models more interpretable by generating similar rationales (Wiegreffe et al., 2021; Narang et al., 2020; Eisenstein et al., 2022). Unfortunately, human rationales are expensive.
Learning with LLM generated rationales. Today's LLMs are capable of explaining their predictions by generating high-quality reasoning steps (Wei et al., 2022; Kojima et al., 2022). These reasoning steps have been used to augment input prompts to LLMs, improving their few-shot or zeroshot performance (Wei et al., 2022; Kojima et al.,
2022; Wang et al., 2022b); reasoning steps have also been used as additional finetuning data "selfimprove" LLMs (Zelikman et al., 2022; Huang et al., 2022). Unfortunately, regardless of how LLMs are improved, their large size limits their utility in most test-time applications.
By contrast, we leverage generated rationales as informative supervision to train smaller taskspecific models, i.e. models that can be deployed without incurring large computation or memory costs. In the past few months, three concurrent works have also proposed a similar idea to ours
- that of using extracted rationales as supervision (Wang et al., 2022a; Ho et al., 2022; Magister et al., 2022). Amongst them, PINTO (Wang et al.,
2022a) relies on an LLM to generate rationales at test-time, and thus does not fully solve deployment challenges. Compared with Ho et al. (2022)
and Magister et al. (2022), we go beyond their experiments to provide a granular study by varying training dataset size, exploring downstream model sizes, and demonstrating the effectiveness of our method on fully unlabeled datasets.
## 3 Distilling Step-By-Step
We propose a new paradigm, *Distilling step-bystep*, that leverages the ability of LLMs to reason about their predictions to train smaller models in a data-efficient way. Our overall framework is illustrated in Figure 2. Our paradigm has two simple steps: First, given an LLM and an unlabeled dataset, we prompt the LLM to generate output labels along with *rationales* to justify the labels.
Rationales are natural language explanations that provide support for the model's predicted label
(see Figure 2). Second, we leverage these rationales in addition to the task labels to train smaller downstream models. Intuitively, rationales provide richer, more detailed information about why an input is mapped to a specific output label, and often contain relevant task knowledge that may be hard to infer solely from the original inputs.
## 3.1 Extracting Rationales From Llms
Recent studies observe one intriguing emerging property of LLMs: their ability to generate rationales that support their predictions (Wei et al.,
2022; Kojima et al., 2022). While the studies have largely focused on how to elicit such reasoning capability from LLMs (Nye et al., 2021; Wei et al.,
2022; Kojima et al., 2022), we use them in training smaller downstream models.
Specifically, we utilize Chain-of-Thought (CoT)
![3_image_0.png](3_image_0.png)
prompting (Wei et al., 2022) to elicit and extract rationales from LLMs. As illustrated in Figure 3, given an unlabeled dataset xi ∈ D, we first curate a prompt template p that articulates how the task should be solved. Each prompt is a triplet
(x p, rp, yp), where x pis an example input, y pis its corresponding label and r pis a user-provided rationale that explains why x pcan be categorized as y p. We append each input xito p and use it as an input to prompt the LLM to generate rationales and labels for each xi ∈ D. With the demonstrations seen in p, the LLM is able to mimic the triplet demonstration to generate the rationale rˆi and output yˆi for xi.
## 3.2 Training Smaller Models With Rationales
We first describe the current framework for learning task-specific models. With this framework in place, we extend it to incorporate rationales into the training process. Formally, we denote a dataset as D = {(xi, yi)}
N
i=1 where each xi represents an input and yiis the corresponding desired output label. While our framework supports inputs and outputs of any modality, our experiments limits x and y to be natural language. This text-to-text framework (Raffel et al., 2020) encompasses a variety of NLP tasks: classification, natural language inference, question answering and more.
Standard finetuning and task distillation. The most common practice to train a task-specific model is to finetune a pretrained model with supervised data (Howard and Ruder, 2018). In the absence of human-annotated labels, task-specific distillation (Hinton et al., 2015; Tang et al., 2019)
uses LLM teachers to generates pseudo noisy training labels, yˆiin place of yi (Wang et al., 2021; Smith et al., 2022a; Arora et al., 2022).
For both scenarios, the smaller model f is trained to minimize the label prediction loss:
$${\mathcal{L}}_{\mathrm{label}}={\frac{1}{N}}\sum_{i=1}^{N}\ell(f(x_{i}),{\hat{y}}_{i}),$$
$$\mathrm{(1)}$$
where ` is the cross-entropy loss between the predicted and target tokens. Note that for ease of exposition, we overload yˆiin Eq. 1 to be either human-annotated labels yi for the standard finetuning case, or LLM-predicted labels yˆi for the model distillation case.
Multi-task learning with rationales. To create a more explicit connection between xi's to yˆi's, we use extracted rationales rˆi as additional supervision. There are several ways to incorporate rationales into the downstream model's training process.
One straightforward approach is feed rˆi as an additional input—as proposed by other concurrent research (Rajani et al., 2019; Wang et al., 2022a).
In other words, the f(xi, rˆi) → yˆiis trained with both text and rationale [*x, r*] as inputs:
$${\mathcal{L}}={\frac{1}{N}}\sum_{i=1}^{N}\ell(f(x_{i},{\hat{r}}_{i}),{\hat{y}}_{i}).\qquad\qquad(2)$$
Unfortunately, this design requires an LLM to first generate a rationale before the f can make a prediction. The LLM is still necessary during deployment, limited its deployability.
In this work, instead of using rationales as additional model inputs, we frame learning with rationales as a multi-task problem. Specifically, we train the model f(xi) → (ˆyi, rˆi) to not only predict the task labels but also generate the corresponding rationales given the text inputs:
$${\mathcal{L}}={\mathcal{L}}_{\mathrm{label}}+\lambda{\mathcal{L}}_{\mathrm{rationale}},$$
where Llabel is the label prediction loss in Eq. 1 and Lrationale is the *rationale generation loss*:
$${\mathcal{L}}_{\mathrm{rational}}={\frac{1}{N}}\sum_{i=1}^{N}\ell(f(x_{i}),{\hat{r}}_{i}).\qquad(4)$$
$$({\mathfrak{I}}{\mathfrak{I}})$$
The rationale generation loss enables the model to learn to generate the intermediate reasoning steps for the prediction, and could therefore guide the model in better predicting the resultant label. This is our proposed Distilling step-by-step. Compared with Eq. 2, the rationale rˆiis not required in the test time, which removes the need for an LLM at test-time.
We prepend "task prefixes" ([label],
[rationale]) to the input examples and train the smaller model to output yˆi when
[label] is provided and to produce rˆi with
[rationale] (Raffel et al., 2020).
## 4 Experiments
We empirically validate the effectiveness of Distilling step-by-step. First, we show that when compared to standard finetuning and task distillation approaches, Distilling step-by-step achieves better performance with much fewer number of training examples, substantially improving the data efficiency to learn small task-specific models (Sec. 4.1). Second, we show that Distilling step-by-step surpasses the performance of LLMs with much smaller model size, drastically lowering the deployment cost compared to LLMs (Sec. 4.2).
Third, we investigate the minimum resources required, w.r.t. both number of training examples and model size, for Distilling step-by-step to outperform LLMs. We show that Distilling step-by-step outperforms LLMs by using less data and smaller model, simultaneously improving both data- and deployability-efficiency (Sec. 4.3). Finally, we perform ablation studies to understand the influence of different components and design choices in the Distilling step-by-step framework (Sec. 4.4).
Setup. In the experiments, we consider the 540B
PaLM model (Chowdhery et al., 2022) as the LLM.
For task-specific downstream models, we use T5 models (Raffel et al., 2020) where we initialize the models with pretrained weights obtained from publicly available sources2. For CoT prompting, we follow Wei et al. (2022) when available, and curate our own examples for new datasets. We include more implementation details in Appendix A.1.
Datasets. We conduct the experiments on 4 popular benchmark datasets across 3 different NLP tasks: *e-SNLI* (Camburu et al., 2018) and ANLI (Nie et al., 2020) for natural language inference; CQA (Talmor et al., 2019; Rajani et al., 2019)
for commonsense question answering; *SVAMP* (Patel et al., 2021) for arithmetic math word problems.
We include more dataset details in Appendix A.2.
## 4.1 Reducing Training Data
We compare Distilling step-by-step to two most common methods in learning task-specific models:
2https://huggingface.co/
(1) STANDARD FINETUNING when human-labeled examples are available, and (2) STANDARD TASK
DISTILLATION when only unlabeled examples are available. Specifically, standard finetuning refers to the prevailing pretrain-then-finetune paradigm that finetunes a model with ground-truth labels via standard label supervision (Howard and Ruder, 2018).
On the other hand, when only unlabeled examples are available, standard task distillation learns the task-specific model by treating a teacher LLM's predicted labels as ground-truths (Hinton et al.,
2015; Chen et al., 2020; Wang et al., 2021; Smith et al., 2022a; Arora et al., 2022).
In the following set of experiments, we fix the task-specific models to be 220M T5-Base models, and compare the task performances achieved by different methods under varying number of available training examples.
## Distilling Step-By-Step Outperforms Standard Finetuning With Much Less Labeled Examples.
When finetuned with human-labeled examples, Figure 4 shows that Distilling step-by-step consistently achieves better performance than standard finetuning across varying numbers of labeled examples used. Furthermore, we see that Distilling step-bystep can achieve the same performance as standard finetuning with much less labeled examples.
In particular, by using only 12.5% of the full eSNLI dataset, Distilling step-by-step can outperform standard finetuning trained with 100% of the full dataset. Similarly, we achieve 75%, 25%, and 20% reduction in training examples required to outperform standard finetuning on ANLI, CQA, and SVAMP respectively.
## Distilling Step-By-Step Outperforms Standard
distillation with much less unlabeled examples.
When only unlabeled data is available, we compare Distilling step-by-step to standard task distillation.
In Figure 5, we observe an overall similar trend to the finetuning setup. Specifically, we see that Distilling step-by-step outperforms standard task distillation on all 4 datasets under different numbers of unlabeled data used. We as well see that Distilling step-by-step requires much less unlabeled data to outperform standard task distillation. For instance, we need only 12.5% of the full unlabeled dataset to outperform the performance achieved by standard task distillation using 100% of the training examples on e-SNLI dataset.
![5_image_0.png](5_image_0.png)
![5_image_1.png](5_image_1.png)
## 4.2 Reducing Model Size
In the following set of experiments, we hold the training set size fixed (using 100% of the datasets),
and compare varying sizes of small T5 models trained with Distilling step-by-step and standard approaches to LLMs. Specifically, we consider 3 different sizes of T5 models, i.e., 220M T5-Base, 770M T5-Large, and 11B T5-XXL. For LLMs, we include two baseline methods: (1) FEW-SHOT
COT (Wei et al., 2022), and (2) PINTO TUN-ING (Wang et al., 2022a). Few-shot CoT directly utilizes CoT demonstrations to prompt the 540B
PaLM to generate intermediate steps before predicting the final labels without any further finetuning of the LLM. PINTO tuning refers to our extension of Wang et al. (2022a) to handle tasks beyond question-answering, which are not studied by Wang et al. (2022a). Here, we finetune a 220M T5-Base model on top of the outputs generated from the PaLM model, which can be viewed as a finetuning method for LLMs with additional parameters (Zhang et al., 2020; Lester et al., 2021).
We present the experimental results under the two broad scenarios of having access to labeled datasets or unlabeled datasets in Figure 6 and Figure 7, respectively. We plot each method by their deployed model sizes for prediction (x-axis), and their corresponding task performances (y-axis).
## Distilling Step-By-Step Improves Over Standard
baselines across varying model sizes used. In Figure 6 and Figure 7 respectively, we see that Distilling step-by-step consistently improves over standard finetuning and standard distillation across all sizes of T5 models. The improvements are most pronounced on ANLI, where Distilling step-bystep outperforms standard finetuning and distillation by an average of 8% and 13% on task accuracy respectively.
## Distilling Step-By-Step Outperforms Llms By Using Much Smaller Task-Specific Models. In
Figure 6 when human-labeled datasets are available, Distilling step-by-step can always outperform Few-shot CoT and PINTO tuning on all 4 datasets considered, by using much smaller T5 models. For instance, we can achieve better performances than 540B PaLM model's Few-shot CoT
![6_image_0.png](6_image_0.png)
![6_image_1.png](6_image_1.png)
with 220M (over 2000× smaller) T5 model on eSNLI, 770M (over 700× smaller) T5 models on ANLI and SVAMP, and 11B (over 45× smaller)
T5 model on CQA. These results hold true even by further finetuning the 540B PaLM model on available labeled data with PINTO tuning3.
In Figure 7, by only utilizing unlabeled examples, Distilling step-by-step also outperforms the teacher LLM on 3 out of 4 datasets. Specifically, Distilling step-by-step surpasses the 540B PaLM
model's Few-shot CoT performance by using 11B
T5 with less than 3% of PaLM's size. On SVAMP
where the distilled model underperforms, we hypothesize that the performance gap is due to the relatively small number of data points in the dataset
(i.e., 800). In reaction, we propose to augment the dataset with additional unlabeled examples to close the performance gap as shown in next.
3We note that PETuning methods may outperform PINTO
tuning. However, they require massive resource in both training and deployment, which is not the focus of this work.
Unlabeled data augmentation further improves Distilling step-by-step. We augment the SVAMP training set with unlabeled examples from the *ASDiv* dataset (Miao et al., 2020). ASDiv dataset contains a total of 2, 305 examples, where each example is a math word problem similar to the ones in SVAMP. In Figure 7 on SVAMP, we show the performances of Distilling step-by-step and standard task distillation using 11B T5 model after augmenting the training set with ASDiv. We see the data augmentation much improves the performance for both Distilling step-by-step and standard task distillation. However, even with the added unlabeled examples, standard task distillation still underperforms Few-shot CoT. On the other hand, Distilling step-by-step is able to much more efficiently exploit the value of the added examples to achieve the same performance level of Few-shot CoT, again, using a T5 model of size less than 3% of the 540B PaLM.
![7_image_0.png](7_image_0.png)
![7_image_1.png](7_image_1.png)
## 4.3 Outperforming Llms Using Minimum Model Size And Least Training Data
Here, using the LLM's performance as an anchor point, we explore the most efficient resource requirements in terms of both number of training examples and deployed model size, that Distilling step-by-step and standard finetuning/distillation need to outperform the LLM. We present the results, again under human-labeled setting and unlabeled setting, in Figure 8 and Figure 9 respectively.
We visualize the results by plotting different resultant models by (1) the number of training examples used (x-axis), (2) the final task performance achieved (y-axis), and (3) the size of the model
(visualized by the size of the shaded area).
## Distilling Step-By-Step Outperforms Llms With
much smaller models by using less data. On all datasets in Figure 8, we see that Distilling stepby-step outperforms PaLM's Few-shot CoT with much smaller T5 models using only a subset of the available training examples. Specifically, on e-SNLI, Distilling step-by-step can achieve better performance than Few-shot CoT with a model over 2000× smaller (220M T5) and only 0.1% of the full dataset. In Figure 9 where only unlabeled datasets are available, we observe the same trend that Distilling step-by-step can, at most time, outperform Few-shot CoT with smaller model as well as less data. For instance, on ANLI, Distilling stepby-step outperforms the LLM with a 45× smaller model and 50% of the full unlabeled set.
## Standard Finetuning And Distillation Require
more data and larger model. Finally, in Figure 8 and Figure 9, we see that standard finetuning and distillation often need either more data or larger models to match LLM's performance. For instance, on e-SNLI in Figure 8, we observe that Distilling step-by-step outperform the LLM using only 0.1% of the dataset while standard finetuning requires more data to match the performance. Furthermore, on ANLI in Figure 8, we observe that Distilling step-by-step can outperform PaLM using 770M
model with only 80% of the training set while standard finetuning struggles to match the LLM even
| Dataset | | | | | |
|-------------------------|------|--------|-------|-------|-------|
| Method | LLM | e-SNLI | ANLI | CQA | SVAMP |
| STANDARD FINETUNING | N/A | 88.38 | 43.58 | 62.19 | 62.63 |
| DISTILLING STEP-BY-STEP | 20B | 89.12 | 48.15 | 63.25 | 63.00 |
| DISTILLING STEP-BY-STEP | 540B | 89.51 | 49.58 | 63.29 | 65.50 |
using the full dataset and thus requires larger model to close the performance gap.
## 4.4 Further Ablation Studies
So far, we have focused on showing the effectiveness of Distilling step-by-step on reducing the training data required for finetuning or distilling smaller task-specific models. In this section, we perform further studies to understand the influence of different components in the Distilling step-by-step framework. Specifically, we study (1) how different LLMs, from which the rationales are extracted, affect the effectiveness of Distilling step-by-step, and (2) how the multi-task training approach compares to other potential design choices in training small task-specific models with LLM rationales. Here, we fix the small task-specific models to be 220M T5 models, and utilize 100% of the data on all datasets.
## Distilling Step-By-Step Works With Different
sizes of decently trained LLMs. In addition to using 540B PaLM as the LLM, here we consider a relatively smaller LLM, 20B GPT-NeoX
model (Black et al., 2022), from which we extract rationales for Distilling step-by-step. In Table 1, we see that when coupled with LLMs of different sizes, Distilling step-by-step can still provide performance improvements compared to standard finetuning. However, the performance lift is smaller when rationales are extracted from the 20B GPTNeoX model instead of from the 540B PaLM. This can be due to the fact that the larger PaLM model provides higher-quality rationales that are more beneficial for learning the task.
## Multi-Task Training Is Much More Effective Than Single-Task Rationale And Label Joint Prediction.
There are different possible ways to train taskspecific models with LLM-rationales as output supervisions. One straightforward approach is to concatenate the rationale rˆi and label yˆiinto a single
| Dataset | | | | |
|----------------------|--------|-------|-------|-------|
| Method | e-SNLI | ANLI | CQA | SVAMP |
| STANDARD FINETUNING | 88.38 | 43.58 | 62.19 | 62.63 |
| SINGLE-TASK TRAINING | 88.88 | 43.50 | 61.37 | 63.00 |
| MULTI-TASK TRAINING | 89.51 | 49.58 | 63.29 | 65.50 |
sequence [ˆri, yˆi] and treat the entire sequence as the target output in training small models, as considered in (Magister et al., 2022; Ho et al., 2022):
$${\mathcal{L}}_{\mathrm{single}}={\frac{1}{N}}\sum_{i=1}^{N}\ell(f(x_{i}),[{\hat{r}}_{i},{\hat{y}}_{i}]).\qquad(5)$$
In Table 2, we compare this single-task training approach to our proposed multi-task training approach for utilizing LLM-rationales. We see that not only multi-task training consistently leads to better performance, single-task training with LLMrationales can at times leads to worse performance than standard finetuning, e.g., on ANLI and CQA.
In fact, similar results have also been observed in (Wiegreffe et al., 2021; Magister et al., 2022; Ho et al., 2022) that simply treating rationale and label predictions as a single joint task may harm the model's performance on label prediction. This validates our use of the multi-task training approach, and highlights the need to treat the rationales carefully so as to unleash their actual benefits.
## 5 Discussion
We propose Distilling step-by-step to extract rationales from LLMs as informative supervision in training small task-specific models. We show that Distilling step-by-step reduces the training dataset required to curate task-specific smaller models; it also reduces the model size required to achieve, and even surpass, the original LLM's performance. Distilling step-by-step proposes a resource-efficient training-to-deployment paradigm compared to existing methods. Further studies demonstrate the generalizability and the design choices made in Distilling step-by-step. Finally, we discuss the limitations, future directions and ethics statement of our work below.
## Limitations
There are a number of limitations with our approach. First, we require users to produce a few example demonstrations (∼ 10-shot for all tasks)
in order to use the few-shot CoT (Wei et al., 2022)
prompting mechanism. This limitation can be improved by using recent advances that suggest that rationales can be elicited without any userannotated demonstrations (Kojima et al., 2022).
Second, training task-specific models with rationales incur slight training-time computation overhead. However, at test time, our multi-task design naturally avoids the computation overhead since it allows one to only predict labels without generating the rationales. Finally, while we observe success using LLM rationales, there is evidence that LLMs exhibit limited reasoning capability on more complex reasoning and planning tasks (Valmeekam et al., 2022). Future work should characterize how rationale quality affects Distilling step-by-step.
## Ethics Statement
It is worth noting that the behavior of the our downstream smaller models is subject to biases inherited from the larger teacher LLM. We envision that the same research progress in reducing anti-social behaviors in LLMs can also be applied to improve smaller language models.
## References
Priyanka Agrawal, Chris Alberti, Fantine Huot, Joshua Maynez, Ji Ma, Sebastian Ruder, Kuzman Ganchev, Dipanjan Das, and Mirella Lapata. 2022. Qameleon: Multilingual qa with only 5 examples. *arXiv* preprint arXiv:2211.08264.
Simran Arora, Avanika Narayan, Mayee F Chen, Laurel J Orr, Neel Guha, Kush Bhatia, Ines Chami, Frederic Sala, and Christopher Ré. 2022. Ask me anything: A simple strategy for prompting language models. *arXiv preprint arXiv:2210.02441*.
Lucas Beyer, Xiaohua Zhai, Amélie Royer, Larisa Markeeva, Rohan Anil, and Alexander Kolesnikov. 2022.
Knowledge distillation: A good teacher is patient and consistent. In Proceedings of the IEEE/CVF
Conference on Computer Vision and Pattern Recognition, pages 10925–10934.
Sid Black, Stella Biderman, Eric Hallahan, Quentin Anthony, Leo Gao, Laurence Golding, Horace He, Connor Leahy, Kyle McDonell, Jason Phang, Michael Pieler, USVSN Sai Prashanth, Shivanshu Purohit, Laria Reynolds, Jonathan Tow, Ben Wang, and
Samuel Weinbach. 2022. GPT-NeoX-20B: An opensource autoregressive language model. In *Proceedings of the ACL Workshop on Challenges & Perspectives in Creating Large Language Models*.
Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. Advances in neural information processing systems, 33:1877–1901.
Cristian Bucilua, Rich Caruana, and Alexandru ˇ
Niculescu-Mizil. 2006. Model compression. In Proceedings of the 12th ACM SIGKDD international conference on Knowledge discovery and data mining, pages 535–541.
Oana-Maria Camburu, Tim Rocktäschel, Thomas Lukasiewicz, and Phil Blunsom. 2018. e-snli: Natural language inference with natural language explanations. *Advances in Neural Information Processing Systems*, 31.
Ting Chen, Simon Kornblith, Kevin Swersky, Mohammad Norouzi, and Geoffrey E Hinton. 2020. Big self-supervised models are strong semi-supervised learners. *Advances in neural information processing systems*, 33:22243–22255.
Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, et al. 2022. Palm: Scaling language modeling with pathways. *arXiv preprint* arXiv:2204.02311.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. *arXiv preprint arXiv:1810.04805*.
Jacob Eisenstein, Daniel Andor, Bernd Bohnet, Michael Collins, and David Mimno. 2022. Honest students from untrusted teachers: Learning an interpretable question-answering pipeline from a pretrained language model. arXiv preprint arXiv:2210.02498.
Braden Hancock, Antoine Bordes, Pierre-Emmanuel Mazare, and Jason Weston. 2019. Learning from dialogue after deployment: Feed yourself, chatbot!
arXiv preprint arXiv:1901.05415.
Peter Hase and Mohit Bansal. 2021. When can models learn from explanations? a formal framework for understanding the roles of explanation data. *arXiv* preprint arXiv:2102.02201.
Geoffrey Hinton, Oriol Vinyals, Jeff Dean, et al. 2015.
Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531, 2(7).
Namgyu Ho, Laura Schmid, and Se-Young Yun.
2022. Large language models are reasoning teachers. *arXiv preprint arXiv:2212.10071*.
Jordan Hoffmann, Sebastian Borgeaud, Arthur Mensch, Elena Buchatskaya, Trevor Cai, Eliza Rutherford, Diego de Las Casas, Lisa Anne Hendricks, Johannes Welbl, Aidan Clark, et al. 2022. Training compute-optimal large language models. arXiv preprint arXiv:2203.15556.
Jeremy Howard and Sebastian Ruder. 2018. Universal language model fine-tuning for text classification. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1:
Long Papers), pages 328–339, Melbourne, Australia.
Association for Computational Linguistics.
Jiaxin Huang, Shixiang Shane Gu, Le Hou, Yuexin Wu, Xuezhi Wang, Hongkun Yu, and Jiawei Han. 2022.
Large language models can self-improve. arXiv preprint arXiv:2210.11610.
Fotis Iliopoulos, Vasilis Kontonis, Cenk Baykal, Gaurav Menghani, Khoa Trinh, and Erik Vee. 2022.
Weighted distillation with unlabeled examples. In Advances in Neural Information Processing Systems.
Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yutaka Matsuo, and Yusuke Iwasawa. 2022. Large language models are zero-shot reasoners. *arXiv* preprint arXiv:2205.11916.
Brian Lester, Rami Al-Rfou, and Noah Constant. 2021.
The power of scale for parameter-efficient prompt tuning. *arXiv preprint arXiv:2104.08691*.
Kevin J Liang, Weituo Hao, Dinghan Shen, Yufan Zhou, Weizhu Chen, Changyou Chen, and Lawrence Carin. 2020. Mixkd: Towards efficient distillation of large-scale language models. arXiv preprint arXiv:2011.00593.
Lucie Charlotte Magister, Jonathan Mallinson, Jakub Adamek, Eric Malmi, and Aliaksei Severyn. 2022.
Teaching small language models to reason. *arXiv* preprint arXiv:2212.08410.
Shen-yun Miao, Chao-Chun Liang, and Keh-Yih Su.
2020. A diverse corpus for evaluating and developing english math word problem solvers. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 975–984.
Smitha Milli, Ludwig Schmidt, Anca D Dragan, and Moritz Hardt. 2019. Model reconstruction from model explanations. In *Proceedings of the Conference on Fairness, Accountability, and Transparency*,
pages 1–9.
Sharan Narang, Colin Raffel, Katherine Lee, Adam Roberts, Noah Fiedel, and Karishma Malkan. 2020.
Wt5?! training text-to-text models to explain their predictions. *arXiv preprint arXiv:2004.14546*.
Yixin Nie, Adina Williams, Emily Dinan, Mohit Bansal, Jason Weston, and Douwe Kiela. 2020. Adversarial NLI: A new benchmark for natural language understanding. In *Proceedings of the 58th Annual Meeting of the Association for Computational*
Linguistics. Association for Computational Linguistics.
Maxwell Nye, Anders Johan Andreassen, Guy Gur-Ari, Henryk Michalewski, Jacob Austin, David Bieber, David Dohan, Aitor Lewkowycz, Maarten Bosma, David Luan, et al. 2021. Show your work: Scratchpads for intermediate computation with language models. *arXiv preprint arXiv:2112.00114*.
Arkil Patel, Satwik Bhattamishra, and Navin Goyal.
2021. Are NLP models really able to solve simple math word problems? In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 2080–2094, Online.
Association for Computational Linguistics.
Danish Pruthi, Rachit Bansal, Bhuwan Dhingra, Livio Baldini Soares, Michael Collins, Zachary C
Lipton, Graham Neubig, and William W Cohen.
2022. Evaluating explanations: How much do explanations from the teacher aid students? *Transactions of the Association for Computational Linguistics*, 10:359–375.
Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the limits of transfer learning with a unified text-totext transformer. *Journal of Machine Learning Research*, 21(140):1–67.
Nazneen Fatema Rajani, Bryan McCann, Caiming Xiong, and Richard Socher. 2019. Explain yourself!
leveraging language models for commonsense reasoning. In *Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics*, pages 4932–4942, Florence, Italy. Association for Computational Linguistics.
Andrew Slavin Ross, Michael C Hughes, and Finale Doshi-Velez. 2017. Right for the right reasons:
Training differentiable models by constraining their explanations. *arXiv preprint arXiv:1703.03717*.
Ryan Smith, Jason A Fries, Braden Hancock, and Stephen H Bach. 2022a. Language models in the loop: Incorporating prompting into weak supervision. *arXiv preprint arXiv:2205.02318*.
Shaden Smith, Mostofa Patwary, Brandon Norick, Patrick LeGresley, Samyam Rajbhandari, Jared Casper, Zhun Liu, Shrimai Prabhumoye, George Zerveas, Vijay Korthikanti, et al. 2022b. Using deepspeed and megatron to train megatron-turing nlg 530b, a large-scale generative language model.
arXiv preprint arXiv:2201.11990.
Suraj Srinivas and François Fleuret. 2018. Knowledge transfer with jacobian matching. In International Conference on Machine Learning, pages 4723–4731.
PMLR.
Alon Talmor, Jonathan Herzig, Nicholas Lourie, and Jonathan Berant. 2019. CommonsenseQA: A question answering challenge targeting commonsense knowledge. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers),
pages 4149–4158, Minneapolis, Minnesota. Association for Computational Linguistics.
Raphael Tang, Yao Lu, Linqing Liu, Lili Mou, Olga Vechtomova, and Jimmy Lin. 2019. Distilling taskspecific knowledge from bert into simple neural networks. *arXiv preprint arXiv:1903.12136*.
Romal Thoppilan, Daniel De Freitas, Jamie Hall, Noam Shazeer, Apoorv Kulshreshtha, Heng-Tze Cheng, Alicia Jin, Taylor Bos, Leslie Baker, Yu Du, et al. 2022. Lamda: Language models for dialog applications. *arXiv preprint arXiv:2201.08239*.
Karthik Valmeekam, Alberto Olmo, Sarath Sreedharan, and Subbarao Kambhampati. 2022. Large language models still can't plan (a benchmark for llms on planning and reasoning about change). arXiv preprint arXiv:2206.10498.
Peifeng Wang, Aaron Chan, Filip Ilievski, Muhao Chen, and Xiang Ren. 2022a. Pinto: Faithful language reasoning using prompt-generated rationales.
arXiv preprint arXiv:2211.01562.
Shuohang Wang, Yang Liu, Yichong Xu, Chenguang Zhu, and Michael Zeng. 2021. Want to reduce labeling cost? gpt-3 can help. *arXiv preprint* arXiv:2108.13487.
Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc Le, Ed Chi, and Denny Zhou. 2022b. Self-consistency improves chain of thought reasoning in language models. *arXiv preprint arXiv:2203.11171*.
Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Ed Chi, Quoc Le, and Denny Zhou. 2022. Chain of thought prompting elicits reasoning in large language models. *arXiv preprint arXiv:2201.11903*.
Peter West, Chandra Bhagavatula, Jack Hessel, Jena D
Hwang, Liwei Jiang, Ronan Le Bras, Ximing Lu, Sean Welleck, and Yejin Choi. 2021. Symbolic knowledge distillation: from general language models to commonsense models. arXiv preprint arXiv:2110.07178.
Sarah Wiegreffe, Ana Marasovic, and Noah A. Smith. ´
2021. Measuring association between labels and free-text rationales. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 10266–10284, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Omar Zaidan, Jason Eisner, and Christine Piatko. 2007.
Using "annotator rationales" to improve machine learning for text categorization. In Human Language Technologies 2007: The Conference of the
North American Chapter of the Association for Computational Linguistics; Proceedings of the Main Conference, pages 260–267, Rochester, New York.
Association for Computational Linguistics.
Eric Zelikman, Yuhuai Wu, and Noah D Goodman.
2022. Star: Bootstrapping reasoning with reasoning.
arXiv preprint arXiv:2203.14465.
Jeffrey O Zhang, Alexander Sax, Amir Zamir, Leonidas Guibas, and Jitendra Malik. 2020. Sidetuning: a baseline for network adaptation via additive side networks. In European Conference on Computer Vision, pages 698–714. Springer.
Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen, Christopher Dewan, Mona Diab, Xian Li, Xi Victoria Lin, et al.
2022. Opt: Open pre-trained transformer language models. *arXiv preprint arXiv:2205.01068*.
Ye Zhang, Iain Marshall, and Byron C. Wallace. 2016.
Rationale-augmented convolutional neural networks for text classification. In *Proceedings of the 2016* Conference on Empirical Methods in Natural Language Processing, pages 795–804, Austin, Texas.
Association for Computational Linguistics.
Lianmin Zheng, Zhuohan Li, Hao Zhang, Yonghao Zhuang, Zhifeng Chen, Yanping Huang, Yida Wang, Yuanzhong Xu, Danyang Zhuo, Joseph E Gonzalez, et al. 2022. Alpa: Automating inter-and intraoperator parallelism for distributed deep learning.
arXiv preprint arXiv:2201.12023.
## A Experiment Detail A.1 Implementation
We perform our experiments on cloud A100×16 GPU instances. We train the T5 models with the following hyperparameters, using publicly available packages from https://github.com/ huggingface/transformers:
- T5-Base (220M) and T5-Large (770M): We train the models with learning rate = 5 ×
10−5, batch size = 64, max input length =
1024, for a maximum of 10000 steps.
- T5-XXL (11B): We train the models with learning rate = 5 × 10−5, batch size = 32, max input length = 1024, for a maximum of 4000 steps.
We report all the results over 4 random runs, and include the standard error in the presented plots.
## A.2 Datasets
We provide more detailed descriptions on the datasets used in our experiments. We include the sources from which we obtain the datasets as well as their original sources released from the authors.
We refer readers to these sources for their license or terms for use and/or distribution. To the best of our knowledge, the datasets used do not contain information that names or uniquely identifies individual people or offensive content.
- e-SNLI: The dataset was originally released in (Camburu et al., 2018), and made publicly available at https://github.com/ OanaMariaCamburu/e-SNLI. We obtain the dataset from https://huggingface.co/
datasets/esnli.
- ANLI: The dataset was originally released in (Nie et al., 2020), and made publicly available at https://github.com/
facebookresearch/anli. We obtain the dataset from https://huggingface.co/ datasets/anli. We use the R1 split in our experiments.
- CQA: The dataset was originally released in (Talmor et al., 2019), and made publicly available at https://www.tau-nlp.sites. tau.ac.il/commonsenseqa. It was then augmented with human-labeled explanations
| Dataset | Train | Validation | Test |
|-----------|---------|--------------|--------|
| e-SNLI | 549,367 | 9,842 | 9,824 |
| ANLI | 16,946 | 1,000 | 1,000 |
| CQA | 8,766 | 975 | 1,221 |
| SVAMP | 720 | 80 | 200 |
by (Rajani et al., 2019), which is available at https://github.com/salesforce/
cos-e. We obtain the dataset used in our experiments from https://huggingface.co/
datasets/cos_e.
- SVAMP: The dataset was originally released in (Patel et al., 2021). We obtain the dataset from https://github.com/
arkilpatel/SVAMP.
- ASDiv: The dataset was originally released in (Miao et al., 2020). We obtain the dataset from https://github.com/
chaochun/nlu-asdiv-dataset.
For each dataset, we randomly subsample 10%
of the original training set to serve as validation set when validation set is not originally provided. For CQA, we use the original validation set to serve as our test set since the ground-truth labels are not available for the original test set. We provide the dataset statistics in Table 3.
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
✓ A1. Did you describe the limitations of your work?
Section 6
✓ A2. Did you discuss any potential risks of your work?
Section 6
✓ A3. Do the abstract and introduction summarize the paper's main claims?
Abstract, Section 1
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✓ **Did You Use Or Create Scientific Artifacts?** Section 4
✓ B1. Did you cite the creators of artifacts you used?
Section 4
✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
Appendix
✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
Appendix
✓ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
Appendix
✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
Appendix
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
Appendix
## C ✓ **Did You Run Computational Experiments?** Section 4
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
Section 4, Appendix The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance.
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Section 4, Appendix
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
Section 4, Appendix
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
Section 4, Appendix D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
No response.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? No response.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
No response.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
No response.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
No response. |
huang-etal-2023-prosody | Prosody-{TTS}: Improving Prosody with Masked Autoencoder and Conditional Diffusion Model For Expressive Text-to-Speech | https://aclanthology.org/2023.findings-acl.508 | Expressive text-to-speech aims to generate high-quality samples with rich and diverse prosody, which is hampered by \textbf{dual challenges}: 1) prosodic attributes in highly dynamic voices are difficult to capture and model without intonation; and 2) highly multimodal prosodic representations cannot be well learned by simple regression (e.g., MSE) objectives, which causes blurry and over-smoothing predictions. This paper proposes Prosody-TTS, a two-stage pipeline that enhances \textbf{prosody modeling and sampling} by introducing several components: 1) a self-supervised masked autoencoder to model the prosodic representation without relying on text transcriptions or local prosody attributes, which ensures to cover diverse speaking voices with superior generalization; and 2) a diffusion model to sample diverse prosodic patterns within the latent space, which prevents TTS models from generating samples with dull prosodic performance. Experimental results show that Prosody-TTS achieves new state-of-the-art in text-to-speech with natural and expressive synthesis. Both subjective and objective evaluation demonstrate that it exhibits superior audio quality and prosody naturalness with rich and diverse prosodic attributes. Audio samples are available at \url{https://improved_prosody.github.io} |
## Prosody-Tts: Improving Prosody With Masked Autoencoder And Conditional Diffusion Model For Expressive Text-To-Speech
Rongjie Huang1∗
, Chunlei Zhang2∗
, Yi Ren1**, Zhou Zhao**1†
, Dong Yu2 Zhejiang University1, Tencent AI Lab2
{rongjiehuang, rayeren, zhaozhou}@zju.edu.cn
{cleizhang, dyu}@global.tencent.com
## Abstract
Expressive text-to-speech aims to generate high-quality samples with rich and diverse prosody, which is hampered by **dual challenges**: 1) prosodic attributes in highly dynamic voices are difficult to capture and model without intonation; and 2) highly multimodal prosodic representations cannot be well learned by simple regression (e.g., MSE) objectives, which causes blurry and over-smoothing predictions. This paper proposes Prosody-TTS,
a two-stage pipeline that enhances **prosody**
modeling and sampling by introducing several components: 1) a self-supervised masked autoencoder to model the prosodic representation without relying on text transcriptions or local prosody attributes, which ensures to cover diverse speaking voices with superior generalization; and 2) a diffusion model to sample diverse prosodic patterns within the latent space, which prevents TTS models from generating samples with dull prosodic performance. Experimental results show that Prosody-TTS achieves new state-of-the-art in text-to-speech with natural and expressive synthesis. Both subjective and objective evaluation demonstrate that it exhibits superior audio quality and prosody naturalness with rich and diverse prosodic attributes. 1
## 1 Introduction
Text-to-speech (TTS) (Wang et al., 2017; Ren et al.,
2019; Kim et al., 2020; Huang et al., 2023) aims to generate human-like audios using text and auxiliary conditions, which attracts broad interest in the machine learning community. TTS models have been extended to more complex scenarios, requiring more natural and expressive voice generation with improved prosody modeling (Min et al., 2021; Chen et al., 2021; Li et al., 2021). A growing number of applications, such as personalized voice assistants and game commentary, have been actively developed and deployed to real-world applications.
Expressive text-to-speech aims to generate samples with natural, rich, and diverse prosodic attributes (e.g., duration, pitch, and energy), which is challenged by two major obstacles: 1) Prosody patterns (Qian et al., 2021; Wang et al., 2018) in human speech are often very sparse, which are difficult to capture and model without supervision signals (i.e., detailed transcriptions); 2) machine learning models (Li et al., 2018; Wang et al., 2022)
usually learn a mean distribution over input data, resulting a dull prediction with prosody learners which fails to produce natural and diverse prosodic styles in the generated speech. Although recent studies (Choi et al., 2021; Kim et al., 2021; Ren et al., 2022) have proposed several ways to enhance prosody for high-fidelity TTS, there still exist **dual**
challenges: - **Prosody capturing and modeling.** Researchers leverage several designs to capture and model prosodic attributes: 1) Local prosody features.
Ren et al. (2020) and Choi et al. (2021) introduce the idea of predicting pitch and energy explicitly.
However, those signal processing-based prosodic attributes may have inevitable errors, which make the optimization of TTS models difficult and degrade performance. 2) Variational latent representations. A series of works (Sun et al., 2020; Kenter et al., 2019; Liu et al., 2022) utilize conditional variational auto-encoder to model prosody in a latent space, where global, local, or hierarchical features are sampled from a prior distribution.
Nevertheless, they generally request speech-text parallel data for modeling prosody, which constrain the learned representation to the paired TTS data.
- **Prosody producing and sampling.** Most works (Wang et al., 2017; Min et al., 2021; Yang et al., 2021a) utilize regression losses (e.g.,
8018 MSE) for prediction and assume that the latent space follows a unimodal distribution. However, the highly multimodal (a phoneme may be pronounced in various speaking styles) prosodic representations cannot be well modeled by these simple objectives, which causes blurry and oversmoothing predictions.
To address the above **dual challenges** for prosody-enhanced expressive text-to-speech, we propose Prosody-TTS, a two-stage TTS pipeline that improves both prosody modeling and sampling by introducing several novel designs:
- **Self-supervised prosody pre-training.** To handle different acoustic conditions for expressive speech, we propose prosody masked autoencoders (Prosody-MAE), a transformer-based model that captures prosody patterns (e.g., local rises and falls of the pitch and stress) in a selfsupervised manner. It is trained on audio-only data, which avoids inevitable errors and ensures to cover diverse speech corpora with superior generalization.
## - **Generative Diffusion Modeling In Latent Space.**
A diffusion model is explored to bridge TTS inputs (i.e., text and target speaker) and speaking prosody in latent space. We formulate the generative process with multiple conditional diffusion steps, and thus we expect our model to exhibit better diversity and prevent generating samples with dull prosodic performance.
Experimental results on LJSpeech and LibriTTS benchmarks demonstrate that our proposed Prosody-TTS achieves new state-of-the-art results for text-to-speech with natural and expressive synthesis. Both subjective and objective evaluations demonstrate that Prosody-TTS exhibits superior audio quality and prosody naturalness with rich and diverse prosodic attributes.
## 2 Related Works 2.1 Prosody Modeling In Text-To-Speech
Prosody modeling has been studied for decades in the TTS community. The idea of pitch and energy prediction (Łancucki ´ , 2021; Ren et al., 2020) represents a popular way to address the one-to-many mapping challenges. Wang et al. (2019) utilize the VQ-VAE framework to learn a latent representation for the F0 contour of each linguistic unit and adopt a second-stage model which maps from linguistic features to the latent features. Choi et al. (2021)
further use a new set of analysis features, i.e., the wav2vec and Yingram feature for self-supervised training. However, these signal processing-based prosodic attributes have inevitable errors, which make the optimization of TTS models difficult and result in degraded TTS performance. Instead of relying on local prosody attributes, a series of works (Sun et al., 2020; Kenter et al., 2019; Liu et al., 2022) utilize conditional variational autoencoder to model prosody in a latent space, where global, local, or hierarchical features are sampled from a prior distribution. Nevertheless, they generally request speech-text parallel data for modeling prosody, which constrained the learned representation to the paired TTS data and explicit poor generalization (Wang et al., 2022). Ren et al. (2022)
introduces a prosody encoder to disentangle the prosody to latent vectors, while the requirement of a pre-trained TTS model hurts model generalization. In this work, we propose to learn the prosodic distribution given speech-only corpora without relying on pre-trained TTS models or text transcriptions.
## 2.2 Self-Supervised Learning In Speech
Recently, self-supervised learning (SSL) has emerged as a popular solution to many speech processing problems with a massive amount of unlabeled speech data. HuBERT (Hsu et al., 2021) is trained with a masked prediction with masked continuous audio signals. SS-AST (Gong et al., 2022)
is a self-supervised learning method that operates over spectrogram patches. Baade et al. (2022)
propose a simple yet powerful improvement over the recent audio spectrogram transformer (SSAST)
model. Audio-MAE (Xu et al., 2022) is a simple extension of image-based Masked Autoencoders
(MAE) (He et al., 2022) for SSL from audio spectrograms. Unlike most of the speech SSL models which capture linguistic content for style-agnostic representation, we focus on learning prosodic representation in expressive speech, which is relatively overlooked.
## 2.3 Diffusion Probabilistic Model
Denoising diffusion probabilistic models
(DDPMs) (Ho et al., 2020; Song et al., 2020a)
are likelihood-based generative models that have recently advanced the SOTA results in several important domains, including image (Dhariwal and Nichol, 2021; Song et al., 2020a), audio (Huang et al., 2022c; Liu et al., 2021; Huang et al.,
2022d), and 3D point cloud generation (Luo and Hu, 2021). In this work, we investigate generative modeling for latent representations with a conditional diffusion model. Unlike regression-based prediction, it generates realistic results that match the ground-truth distribution and avoid over-smoothing predictions.
## 3 Prosody-Tts
In this section, we first overview the Prosody-TTS
framework, introducing several critical designs with prosody masked autoencoder (Prosody-MAE),
latent diffusion model, and the vector quantization layer. Finally, we present the pre-training, training, and inference pipeline, which supports high-fidelity speech synthesis with natural, rich, and diverse prosodic attributes.
## 3.1 Problem Formulation
Expressive text-to-speech aims to generate highfidelity speech samples with natural and diverse prosody (e.g., duration, pitch, and energy). Since the duration attribute has been inherently wellstudied in non-autoregressive literature (Ren et al.,
2020; Min et al., 2021; Huang et al., 2021, 2022b),
we mainly explore prosody on **rises and falls of**
the pitch and stress in this work.
## 3.2 Overview
As illustrated in Figure 1, to address the aforementioned **dual challenges** for prosody-enhanced expressive text-to-speech, we introduce a multistage pipeline with the following key designs: 1)
a prosody masked autoencoder (Prosody-MAE)
to **capture and model** prosody feature in a selfsupervised manner. 2) a generative diffusion model to **produce and sample** prosody in latent space.
Specifically:
1) In the pre-training stage, the Prosody-MAE
captures prosodic information from large-scale unpaired speech data without relying on transcriptions or local prosody attributes. The self-supervised training manner ensures Prosody-MAE learns discriminative prosody representations covering diverse speech corpora; 2) In training TTS models, the converged prosody encoder derives style representations z for optimizing the latent diffusion model (LDM), which bridges the TTS conditions (i.e., textual features and target speaker)
and prosody representations via diffusion process q(zt|zt−1); 3) In inference time, the LDM samples diverse latent representations within the prosodic space through reverse denoising pθ(zt−1|zt). It breaks the generation process into several conditional diffusion steps, which exhibits better diversity and prevents generating dull samples with a constrained prosodic distribution. We describe these designs in detail in the following subsections.
## 3.3 Self-Supervised Prosody Pre-Training
In this part, we propose Prosody-MAE, a selfsupervised autoencoder (AE) consisting of an encoder and decoder that can effectively capture and model prosodic style given speech samples without relying on text annotations. Moreover, we design several techniques to learn prosodic representation in a self-supervised manner:
- Information flow. Through analysis of speech attributes, Prosody-MAE enjoys a carefully-crafted bottleneck design to disentangle linguistic and speaker information, ensuring the prosody stream to learn discriminative style-aware representations.
- Multi-task learning. Auxiliary style (i.e., pitch and energy) classifications have been included in training SSL models, and it guarantees to discover style representation aware of the pitch/stress rises and falls.
## 3.3.1 Information Flow
Most voice reconstruction tasks (Choi et al., 2021; Polyak et al., 2021) can be defined by synthesizing and controlling three aspects of voice, **i.e., linguistic, speaker, and prosody encoder**. It motivates us to develop an autoencoder that can analyze voice into these properties and then synthesize them back into a speech **(transformer decoder)**.
Linguistic Encoder. Learning the linguistic content C from the speech signal is crucial to construct an intelligible speech signal, and we obtain linguistic representation using a pre-trained XLSR-53.
Since SSL representation (Choi et al., 2021; Qian et al., 2022; Gat et al., 2022) contain both linguistic and acoustic information, we perturb the speaker and prosody patterns in audios by randomly shifting pitch and shaping energy values, ensuring it only provides the linguistic-related (i.e., prosodicagnostic) information. More details have been included in Appendix E.
Speaker Encoder. Speaker S is perceived as the timbre characteristic of a voice. It has been
![3_image_0.png](3_image_0.png)
reported that (Choi et al., 2021) the features from the first layer of XLSR-53 perform as clusters representation for each speaker.
Prosody Encoder. Prosody is a vital part of the domain style, where different emotions or styles have distinctive prosody patterns. In the multilayer transformer prosody encoder, 1) speech is first transformed and embedded into spectrogram patches, and 2) the encoders f : *X 7→ P* take patches X as input and effectively capture prosodic latent representations p1*, . . . ,* pT for T time-steps.
3) Some tokens are masked by randomly replacing them with a learned masking token (miillustrated in Figure 1(a)). In practice, we mask by shuffling the input patches and keeping the first 1 − p proportion of tokens.
Transformer Decoder. As illustrated in Figure 1(a), we conduct the element-wise addition operation between the linguistic content C, speaker S and the prosody P representations before passing through the transformer decoder with a series of transformer blocks. To this end, the carefully crafted bottleneck design in Prosody-MAE disentangles linguistic, speaker, and prosody attributes and then synthesizes them back into a speech with a transformer decoder, ensuring the prosody stream to learn discriminative prosody-aware representations agnostic to linguistic content and speaker.
## 3.3.2 Multi-Task Learning
For training autoencoders, reconstruction loss Lg is calculated as a mean squared error between the output of the linear reconstruction head and the input patches. Contrastive head (Gong et al., 2022)
creates an output vector vi similar to the masked input patch xi but dissimilar to other masked inputs, where we consider different masked inputs as negative samples and implement the InfoNCE (Oord et al., 2018) as a criterion.
Moreover, to enhance the model in deriving style attributes, we explore the frame-level style
(i.e., pitch Lp, energy Le) classification with crossentropy criterion (Oord et al., 2018) as the complemental tasks. To formulate the classification target, we respectively 1) quantize the fundamental frequency (f0) of each frame to 256 possible values piin log-scale; and 2) compute the L2-norm of the amplitude of each short-time Fourier transform
(STFT) and then quantize to 256 possible values ei uniformly. On that account, Prosody-MAE better discovers prosodic representations which are aware of the pitch/stress rises and falls.
## 3.4 Generative Modeling Of Prosodic Representations
To produce and sample prosodic representation z within the latent space learned in Prosody-MAE,
we implement our prosody generator over Latent Diffusion Models (LDMs) (Rombach et al.,
2022; Gal et al., 2022), a recently introduced class of Denoising Diffusion Probabilistic Models
(DDPMs) (Ho et al., 2020) that operate in the latent space. As illustrated in Figure 1(c), the denoising WaveNet θ conditions on phonetic representation, breaking the generation process into several conditional diffusion steps. The training loss is defined as the mean squared error in the noise ϵ ∼ N (0, I)
space, and efficient training is optimizing a random term of t with stochastic gradient descent:
$${\cal L}_{\theta}=\left\|\epsilon_{\theta}\left(\alpha_{t}{\bf z}_{0}+\sqrt{1-\alpha_{t}^{2}}\epsilon\right)-\epsilon\right\|_{2}^{2}\tag{1}$$
To this end, our prosody generator produces and samples prosody faithfully, which strongly matches the ground-truth distribution and exhibits better diversity. It avoids incorrect unimodal distribution assumptions by regression objectives (e.g., MSE)
and prevents generating samples with dull prosodic performance. We refer the reader to Section 4.2 for summary of our findings.
## 3.5 Vector Quantization
It has been reported (Rombach et al., 2022) that due to the expressiveness of diffusion models, the produced latent spaces z could be highly variant and diverse. To avoid instability, we impose a vector quantization (VQ) layer after the latent diffusion for regularization.
Denote the latent space e ∈ RK×D where K is the size of the discrete latent space (i.e., a K-way categorical), and D is the dimensionality of each latent embedding vector ei. Note that there are K embedding vectors ei ∈ R
D, i ∈ 1, 2*, . . . , K*.
To make sure the representation sequence commits to an embedding and its output does not grow, we add a commitment loss following previous work (van den Oord et al., 2017):
$${\mathcal{L}}_{c}=\left\|\mathbf{z}-\mathrm{{sg}}[e]\right\|_{2}^{2},$$
, (2)
where $\mathrm{sg}$ stands for stop gradient.
## 3.6 Pre-Training, Training And Inference Procedures 3.6.1 Pre-Training And Training
We pre-train the Prosody-MAE to derive prosodic representation in a self-supervised manner with the following objectives: 1) reconstruction loss Lg:
the MSE between the estimated and ground-truth sample; 2) contrastive loss Ld: the discriminative gradient to pick the correct patch for each masked position from all patches being masked, and 3)
frame-level style (i.e., pitch, energy) classification losses Lp,Le: the cross entropy error between the estimated and ground-truth style attributes.
In training Prosody-TTS, the final loss terms consist of the following parts: 1) duration loss Ldur:
MSE between the predicted and the GT phonemelevel duration in log scale; 2) diffusion losses in prosody generator Lldm and mel decoder Ldec: calculating between the estimated and gaussian noise according to Equation 1; 3) commitment loss Lc:
regularizing vector quantization layer according to Equation 2.
## 3.6.2 Inference
As illustrated in Figure 1, Prosody-TTS generates expressive speech with natural, rich, and diverse prosody in the following pipeline: 1) The text encoder takes the phoneme sequence as input, which is expanded according to the predicted durations; 2) conditioning on linguistic and speaker information, the prosody generator randomly samples a noise latent and iteratively denoises to produce a new prosodic representation in latent space, and 3)
the mel decoder converts randomly sampled noise latent and iteratively decodes to expressive melspectrograms.
## 4 Experiments 4.1 Experimental Setup 4.1.1 Pre-Training Prosody-Mae
In the pre-training stage, we utilize the commonlyused LibriSpeech (Panayotov et al., 2015) dataset with labels discarded, which provides 960 hours of audiobook data in English, read by over 1,000 speakers. We convert the 16kHz waveforms into 128-dimensional log-Melfilterbank features with a frame length of 25 ms and frame shift of 10 ms.
The spectrogram is then split into 16×16 patches.
By default, we use an encoder with 6 layers and a decoder of 2 layers, both using 12 heads and a width of 768. We train Prosody-MAE for up to 400k iterations on 8 NVIDIA V100 GPUs using the publicly-available *fairseq* framework (Ott et al., 2019), and the pre-training takes about 5 days.
For downstream evaluation, we use the standard SUPERB (Yang et al., 2021b) training and testing framework. More detailed information has been attached in Appendix C.
## 4.1.2 Training Prosody-Tts
Dataset. For a fair and reproducible comparison against other competing methods, we use the benchmark LJSpeech dataset (Ito, 2017), which consists of 13,100 audio clips from a female speaker for about 24 hours in total. For the multi-speaker scenario, we utilize the 300-hour LibriTTS (Zen et al.,
2019) dataset derived from LibriSpeech. We convert the text sequence into the phoneme sequence with an open-source grapheme-to-phoneme conver-
Method LJSpeech **LibriTTS**
MOS-P MOS-Q MCD NDB JSD MOS-P MOS-Q MCD NDB JSD
GT 4.36±0.05 4.39±0.06 / / / 4.38±0.05 4.42±0.06 / /
GT(voc.) 4.31±0.06 4.25±0.06 1.67 19 0.02 4.35±0.04 4.22±0.05 1.52 41 0.01 FastSpeech 2 3.92±0.07 3.84±0.06 3.88 45 0.05 3.89±0.06 3.81±0.07 4.35 74 0.04
StyleSpeech 3.94±0.06 3.88±0.05 5.54 41 0.07 3.95±0.07 3.91±0.08 3.78 58 **0.01**
Glow-TTS 3.88±0.06 3.91±0.06 3.54 34 **0.03** 3.91±0.08 3.86±0.08 5.38 61 0.03 Grad-TTS 3.91±0.07 3.92±0.06 5.01 49 0.13 3.96±0.06 3.97±0.05 3.93 71 0.05
YourTTS 3.97±0.06 3.96±0.06 5.09 47 0.08 3.99±0.07 3.99±0.06 4.61 73 0.06
Prosody-TTS 4.10±0.06 4.03±**0.05 3.52 30** 0.04 4.12±0.07 4.09±**0.06 3.39 52 0.01**
Table 2: Prosody modeling and sampling approaches comparison with other models.
## Sion Tool (Sun Et Al., 2019) 2.
Following the common practice (Chen et al.,
2021; Min et al., 2021), we conduct preprocessing on the speech and text data: 1) convert the sampling rate of all speech data to 16kHz; 2) extract the spectrogram with the FFT size of 1024, hop size of 256, and window size of 1024 samples; 3)
convert it to a mel-spectrogram with 80 frequency bins.
Model Configurations. Prosody-TTS consists of 4 feed-forward transformer blocks for the phoneme encoder. We add a linear layer to transform the 768-dimensional prosody latent representation from Prosody-MAE to 256 dimensions. The default size of the codebook in the vector quantization layer is set to 1000. The diffusion model comprises a 1x1 convolution layer and N convolution blocks with residual connections to project the input hidden sequence with 256 channels. For any step t, we use the cosine schedule βt = cos(0.5πt).
More detailed information has been attached in Appendix A.
Training and Evaluation. We train ProsodyTTS for 200,000 steps using 4 NVIDIA V100 GPUs with a batch size of 64 sentences. Adam optimizer is used with β1 = 0.9, β2 = 0.98, ϵ = 10−9.
We utilize HiFi-GAN (Kong et al., 2020) as the vocoder to synthesize waveform from the melspectrogram in our experiments.
We conduct crowd-sourced human evaluations on the testing set via Amazon Mechanical Turk, which is reported with 95% confidence intervals
(CI). We analyze the MOS in two aspects: prosody
(naturalness of pitch, energy, and duration) and audio quality (clarity, high-frequency and original timbre reconstruction), respectively scoring MOSP and MOS-Q. We further include objective evaluation metrics: MCD (Kubichek, 1993) measures the audio and prosody quality, NDB and JSD (Richardson and Weiss, 2018) explore the diversity of generated mel-spectrograms. More details have been attached in Appendix F.
Baseline Models. We compare the quality of generated audio samples with other systems, including 1) GT, the ground-truth audio; 2) GT (voc.),
we first convert the ground-truth audio into melspectrograms and then convert them back to audio using HiFi-GAN (V1) (Kong et al., 2020); 3) FastSpeech 2 (Ren et al., 2020): a model that predicts local prosody attributes; 4) Meta-StyleSpeech (Kim et al., 2020): the finetuned multi-speaker model with meta-learning; 5) Glow-TTS (Kim et al.,
2020): a flow-based TTS model trained with monotonic alignment search; 6) Grad-TTS (Popov et al.,
2021): a denoising diffusion probabilistic models for speech synthesis. 7) YourTTS (Casanova et al.,
2022): an expressive model for zero-shot multispeaker synthesis which is built upon VITS (Kim et al., 2021). We list the prosody modeling and sampling approaches in baseline models in Table 2.
| Model | Prosody Capturing | Prosody Sampling |
|--------------|---------------------|--------------------|
| FastSpeech 2 | Local Prosody | Regression |
| StyleSpeech | Local Prosody | Regression |
| Glow-TTS | Local Prosody | Generative |
| Grad-TTS | Local Prosody | Generative |
| YourTTS | Variational | Generative |
| Prosody-TTS | Self-Supervised | Generative |
## 4.2 Quantitative Results
Both objective and subjective evaluation results are presented in Table 1, and we have the following observations: 1) In terms of **audio quality**, Prosody-TTS achieves the highest perceptual quality with MOS-Q of 4.03 (LJSpeech) and 4.09 2https://github.com/Kyubyong/g2p
![6_image_0.png](6_image_0.png)
(LibriTTS). For objective evaluation, Prosody-TTS
also demonstrates the outperformed performance in MCD, superior to all baseline models. 2) For prosody diversity and naturalness, Prosody-TTS
scores the highest overall MOS-P with a gap of 0.21 (LJSpeech) and 0.23 (LibriTTS) compared to the ground truth audio. Prosody-TTS scores the superior NDB with scores of 30 (LJSpeech)
and 52 (LibriTTS), producing samples covering diverse prosodic patterns (e.g., local rises and falls of the pitch and stress). Informally, by breaking the generation process into several conditional diffusion steps, generative latent modeling prevents TTS from synthesizing samples with dull prosodic performance.
The evaluation of the TTS models is very challenging due to its subjective nature in perceptual quality, and thus we include a site-by-site AXY test in Table 3. For each reference (A), the listeners are asked to choose a preferred one among the samples synthesized by baseline models (X) and proposed Prosody-TTS (Y), from which AXY preference rates are calculated. It indicates that raters prefer our model synthesis against baselines in terms of prosody naturalness and expressiveness. Without relying on text transcriptions or local prosody attributes, Prosody-TTS is trained on an audio-only corpus in a self-supervised manner, covering diverse speaking styles and avoiding dull synthesis with similar patterns.
## 4.3 Qualitative Findings
As illustrated in Figure 2, we plot the melspectrograms and corresponding pitch tracks generated by the TTS systems and have the follow-
| Baseline | 7-point score | X | Neutral | Y |
|--------------|-----------------|------|-----------|-----|
| FastSpeech 2 | 1.13 ±0.19 | 21% | 10% | 69% |
| StyleSpeech | 1.50±0.11 | 33% | 12% | 55% |
| Glow-TTS | 1.11±0.11 | 13% | 22% | 65% |
| Grad-TTS | 1.20±0.08 | 19 % | 21% | 60% |
| YourTTS | 1.42±0.10 | 28% | 13% | 59% |
ing observations: 1) Prosody-TTS can generate mel-spectrograms with rich details in frequency bins between two adjacent harmonics, unvoiced frames, and high-frequency parts, which results in more natural sounds. 2) Prosody-TTS demonstrates its ability to generate samples with diverse prosodic styles. In contrast, some baseline models have difficulties addressing the dual challenges of prosody modeling and sampling: some of them learn a mean pitch contour (YourTTS, Grad-TTS)
or incomplete sampling (FastSpeech 2), others suffer from a perturbed distribution with acute contour
(SC-GlowTTS, Meta-StyleSpeech).
## 4.4 Ablation Studies And Model Properties
In this section, we conduct ablation studies to demonstrate the effectiveness of several designs to alleviate the dual challenges in prosody-enhanced text-to-speech:
- For **prosody capturing and modeling**, we explore Prosody-MAE with different model properties in the style-aware downstream challenges, including the frame-level pitch and energy recognition on the commonly-used
(c) **Prosody Sampling**
dataset IEMOCAP (Busso et al., 2008).
- For **prosody producing and sampling**, we investigate the generative modeling in ProsodyTTS with diffusion prosody generator and vector quantization modulewith through CMOS
evaluation.
| Model | CMOS-P CMOS-Q | |
|-----------------------|-----------------|-------|
| Prosody-MAE | 0.00 | 0.00 |
| w/o LDM | -0.11 | -0.04 |
| w/o VQ | -0.04 | -0.08 |
| Local Prosody | -0.12 | -0.02 |
| Variational Inference | -0.10 | -0.03 |
## 4.4.1 Prosody Capturing And Modeling
Pretext task. We investigate the impact of different pretext tasks for pre-training the Prosody-MAE,
and find that 1) the additional contrastive objective Ld leads to better performance for all tasks, and 2) the joint multi-task learning with frame-level style classification Lp,Le has witnessed a distinct promotion of downstream accuracy, demonstrating its efficiency in learning style-aware prosody representations.
Information flow. We conduct ablation studies to demonstrate the effectiveness of the carefullycrafted information flow in learning prosodic style attributes: 1) Dropping the linguistic and speaker encoder has witnessed a distinct degradation of downstream performance, proving that they disentangle the linguistic and speaker information, ensuring the prosody stream to learn style-aware representations; and 2) Removing the information perturbation also decreases accuracy, demonstrating that the perturbation assists to selectively provide only the linguistic (i.e., prosodic-agnostic) and eliminate undesired information.
More ablations on masking strategies, **network**
architecture, and further **comparision with other**
state-of-the-art have been attached in Appendix B
## 4.4.2 Prosody Producing And Sampling
To verify the effectiveness of prosody producing and sampling in Prosody-TTS, we respectively replace the latent diffusion model and remove the vector quantization module. The CMOS evaluation results have been presented in Table 4(c), and we have the following observations: 1) Replacing the diffusion prosody generator with regression-based predictor results in decrease prosody naturalness, suggesting that generative latent diffusion avoids producing blurry and over-smoothing results. 2)
Removing the vector quantization layer has witnessed a distinct drop in audio quality, verifying that the VQ compression layer is efficient in regularizing latent spaces and preventing arbitrarily high-variance predictions. 3) Since baseline models with local attributes have inevitable errors, and variational inference requires parallel speech-text data which constrains learned representation, they both lead to the degradation in prosody naturalness.
## 5 Conclusion
In this work, we propose Prosody-TTS, improving prosody with masked autoencoder and conditional diffusion model for expressive text-to-speech. To tackle **dual challenges of prosody modeling and**
sampling, we design a two-stage pipeline to enhance high-quality synthesis with prosperous and diverse prosody: 1) Prosody-MAE was introduced to pre-train on large-scale unpaired speech datasets to capture prosodic representations without relying on text transcriptions. It ensured that the model covered diverse speaking voices and avoided inevitable error. 2) The latent diffusion model was adopted to produce diverse patterns within the learned prosody space. It broke the generation process into several conditional diffusion steps, avoiding generating samples with dull prosodic performance. Experimental results demonstrated that Prosody-TTS
promoted prosody modeling and synthesized highfidelity speech samples, achieving new state-ofthe-art results with outperformed audio quality and
| IF IP | PA | PM EM | |
|-----------------------------------------------|-------------------------------------------------------------------------------|---------|----|
| Objective | PA | PM | EM |
| Lg | 70.0 8.35 4.63 | | |
| +Ld | 73.1 7.50 3.13 | | |
| +Ld + Lp + Le 75.2 7.22 1.76 (a) Pretext Task | % % 73.0 7.50 3.13 " % 74.9 7.76 6.26 " " 75.2 7.22 1.76 (b) Information Flow | | |
prosody expressiveness. For future work, we will further extend Prosody-TTS to more challenging scenarios such as multilingual prosody learning.
We envisage that our work serve as a basis for future prosody-aware TTS studies.
## 6 Limitation
Prosody-TTS adopts generative diffusion models for high-quality synthesis, and thus it inherently requires multiple iterative refinements for better results. Besides, latent diffusion models require typically require more computational resources, and degradation could be witnessed with decreased training data. One of our future directions is to develop lightweight and fast diffusion models for accelerating sampling.
## 7 Ethics Statement
Prosody-TTS lowers the requirements for highquality and expressive text-to-speech synthesis, which may cause unemployment for people with related occupations, such as broadcasters and radio hosts. In addition, there is the potential for harm from non-consensual voice cloning or the generation of fake media, and the voices of the speakers in the recordings might be overused than they expect.
## Acknowledgements
This work was supported in part by the National Key R&D Program of China under Grant No.2022ZD0162000, National Natural Science Foundation of China under Grant No.62222211, Grant No.61836002 and Grant No.62072397.
## References
Alan Baade, Puyuan Peng, and David Harwath. 2022.
Mae-ast: Masked autoencoding audio spectrogram transformer. *arXiv preprint arXiv:2203.16691*.
Alexei Baevski, Yuhao Zhou, Abdelrahman Mohamed, and Michael Auli. 2020. wav2vec 2.0: A framework for self-supervised learning of speech representations.
Advances in Neural Information Processing Systems, 33:12449–12460.
Carlos Busso, Murtaza Bulut, Chi-Chun Lee, Abe Kazemzadeh, Emily Mower, Samuel Kim, Jeannette N Chang, Sungbok Lee, and Shrikanth S
Narayanan. 2008. Iemocap: Interactive emotional dyadic motion capture database. *Language resources* and evaluation, 42(4):335–359.
Edresson Casanova, Julian Weber, Christopher D
Shulby, Arnaldo Candido Junior, Eren Gölge, and
Moacir A Ponti. 2022. Yourtts: Towards zero-shot multi-speaker tts and zero-shot voice conversion for everyone. In International Conference on Machine Learning, pages 2709–2720. PMLR.
Mingjian Chen, Xu Tan, Bohan Li, Yanqing Liu, Tao Qin, Sheng Zhao, and Tie-Yan Liu. 2021. Adaspeech:
Adaptive text to speech for custom voice. *arXiv* preprint arXiv:2103.00993.
Hyeong-Seok Choi, Juheon Lee, Wansoo Kim, Jie Lee, Hoon Heo, and Kyogu Lee. 2021. Neural analysis and synthesis: Reconstructing speech from selfsupervised representations. *Advances in Neural Information Processing Systems*, 34:16251–16265.
Prafulla Dhariwal and Alex Nichol. 2021. Diffusion models beat gans on image synthesis. arXiv preprint arXiv:2105.05233.
Rinon Gal, Yuval Alaluf, Yuval Atzmon, Or Patashnik, Amit H Bermano, Gal Chechik, and Daniel CohenOr. 2022. An image is worth one word: Personalizing text-to-image generation using textual inversion.
arXiv preprint arXiv:2208.01618.
Itai Gat, Felix Kreuk, Ann Lee, Jade Copet, Gabriel Synnaeve, Emmanuel Dupoux, and Yossi Adi. 2022.
On the robustness of self-supervised representations for spoken language modeling. *arXiv preprint* arXiv:2209.15483.
Yuan Gong, Cheng-I Lai, Yu-An Chung, and James Glass. 2022. Ssast: Self-supervised audio spectrogram transformer. In *Proceedings of the AAAI Conference on Artificial Intelligence*, volume 36, pages 10699–10709.
Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Dollár, and Ross Girshick. 2022. Masked autoencoders are scalable vision learners. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 16000–16009.
Jonathan Ho, Ajay Jain, and Pieter Abbeel. 2020. Denoising diffusion probabilistic models.
Wei-Ning Hsu, Benjamin Bolte, Yao-Hung Hubert Tsai, Kushal Lakhotia, Ruslan Salakhutdinov, and Abdelrahman Mohamed. 2021. Hubert: Self-supervised speech representation learning by masked prediction of hidden units. *IEEE/ACM Transactions on Audio,*
Speech, and Language Processing, 29:3451–3460.
Kuan Po Huang, Yu-Kuan Fu, Yu Zhang, and Hungyi Lee. 2022a. Improving distortion robustness of self-supervised speech processing tasks with domain adaptation. *arXiv preprint arXiv:2203.16104*.
Rongjie Huang, Feiyang Chen, Yi Ren, Jinglin Liu, Chenye Cui, and Zhou Zhao. 2021. Multi-singer:
Fast multi-singer singing voice vocoder with a largescale corpus. In *Proceedings of the 29th ACM International Conference on Multimedia*, pages 3945–
3954.
Rongjie Huang, Chenye Cui, Feiyang Chen, Yi Ren, Jinglin Liu, Zhou Zhao, Baoxing Huai, and Zhefeng Wang. 2022b. Singgan: Generative adversarial network for high-fidelity singing voice generation. In Proceedings of the 30th ACM International Conference on Multimedia, pages 2525–2535.
Rongjie Huang, Max WY Lam, Jun Wang, Dan Su, Dong Yu, Yi Ren, and Zhou Zhao. 2022c. Fastdiff:
A fast conditional diffusion model for high-quality speech synthesis. *arXiv preprint arXiv:2204.09934*.
Rongjie Huang, Mingze Li, Dongchao Yang, Jiatong Shi, Xuankai Chang, Zhenhui Ye, Yuning Wu, Zhiqing Hong, Jiawei Huang, Jinglin Liu, et al. 2023.
Audiogpt: Understanding and generating speech, music, sound, and talking head. arXiv preprint arXiv:2304.12995.
Rongjie Huang, Yi Ren, Jinglin Liu, Chenye Cui, and Zhou Zhao. Generspeech: Towards style transfer for generalizable out-of-domain text-to-speech. In Advances in Neural Information Processing Systems.
Rongjie Huang, Zhou Zhao, Huadai Liu, Jinglin Liu, Chenye Cui, and Yi Ren. 2022d. Prodiff: Progressive fast diffusion model for high-quality text-to-speech.
In *Proceedings of the 30th ACM International Conference on Multimedia*, pages 2595–2605.
Keith Ito. 2017. The lj speech dataset. https://
keithito.com/LJ-Speech-Dataset/.
Tom Kenter, Vincent Wan, Chun-An Chan, Rob Clark, and Jakub Vit. 2019. Chive: Varying prosody in speech synthesis with a linguistically driven dynamic hierarchical conditional variational network. In *International Conference on Machine Learning*, pages 3331–3340. PMLR.
Jaehyeon Kim, Sungwon Kim, Jungil Kong, and Sungroh Yoon. 2020. Glow-tts: A generative flow for text-to-speech via monotonic alignment search. *Advances in Neural Information Processing Systems*,
33:8067–8077.
Jaehyeon Kim, Jungil Kong, and Juhee Son. 2021.
Conditional variational autoencoder with adversarial learning for end-to-end text-to-speech. In *International Conference on Machine Learning*, pages 5530–5540. PMLR.
Jungil Kong, Jaehyeon Kim, and Jaekyoung Bae. 2020.
Hifi-gan: Generative adversarial networks for efficient and high fidelity speech synthesis. Proc. of NeurIPS.
Robert Kubichek. 1993. Mel-cepstral distance measure for objective speech quality assessment. 1:125–128.
Max W. Y. Lam, Jun Wang, Rongjie Huang, Dan Su, and Dong Yu. 2021. Bilateral denoising diffusion models.
Adrian Łancucki. 2021. Fastpitch: Parallel text-to- ´
speech with pitch prediction. In ICASSP 2021-2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 6588–6592.
IEEE.
Haoliang Li, Sinno Jialin Pan, Shiqi Wang, and Alex C
Kot. 2018. Domain generalization with adversarial feature learning. In *Proceedings of the IEEE conference on computer vision and pattern recognition*,
pages 5400–5409.
Xiang Li, Changhe Song, Jingbei Li, Zhiyong Wu, Jia Jia, and Helen Meng. 2021. Towards multi-scale style control for expressive speech synthesis. *arXiv* preprint arXiv:2104.03521.
Jinglin Liu, Chengxi Li, Yi Ren, Feiyang Chen, Peng Liu, and Zhou Zhao. 2021. Diffsinger: Singing voice synthesis via shallow diffusion mechanism. *arXiv* preprint arXiv:2105.02446, 2.
Zhengxi Liu, Qiao Tian, Chenxu Hu, Xudong Liu, Menglin Wu, Yuping Wang, Hang Zhao, and Yuxuan Wang. 2022. Controllable and lossless nonautoregressive end-to-end text-to-speech. arXiv preprint arXiv:2207.06088.
Shitong Luo and Wei Hu. 2021. Diffusion probabilistic models for 3d point cloud generation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 2837–2845.
Dongchan Min, Dong Bok Lee, Eunho Yang, and Sung Ju Hwang. 2021. Meta-stylespeech: Multispeaker adaptive text-to-speech generation. pages 7748–7759.
Aaron van den Oord, Yazhe Li, and Oriol Vinyals. 2018.
Representation learning with contrastive predictive coding. *arXiv preprint arXiv:1807.03748*.
Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, and Michael Auli. 2019. fairseq: A fast, extensible toolkit for sequence modeling. *arXiv preprint arXiv:1904.01038*.
Vassil Panayotov, Guoguo Chen, Daniel Povey, and Sanjeev Khudanpur. 2015. Librispeech: an asr corpus based on public domain audio books. In *2015* IEEE international conference on acoustics, speech and signal processing (ICASSP), pages 5206–5210.
IEEE.
Adam Polyak, Yossi Adi, Jade Copet, Eugene Kharitonov, Kushal Lakhotia, Wei-Ning Hsu, Abdelrahman Mohamed, and Emmanuel Dupoux. 2021. Speech resynthesis from discrete disentangled self-supervised representations. arXiv preprint arXiv:2104.00355.
Vadim Popov, Ivan Vovk, Vladimir Gogoryan, Tasnima Sadekova, and Mikhail Kudinov. 2021. Grad-tts:
A diffusion probabilistic model for text-to-speech.
In *International Conference on Machine Learning*,
pages 8599–8608. PMLR.
Kaizhi Qian, Yang Zhang, Shiyu Chang, Mark Hasegawa-Johnson, and David Cox. 2020. Unsupervised speech decomposition via triple information bottleneck. In *International Conference on Machine* Learning, pages 7836–7846. PMLR.
Aaron van den Oord, Oriol Vinyals, and Koray Kavukcuoglu. 2017. Neural discrete representation learning. In *Advances in Neural Information Processing Systems*, pages 6309–6318.
Kaizhi Qian, Yang Zhang, Shiyu Chang, Jinjun Xiong, Chuang Gan, David Cox, and Mark HasegawaJohnson. 2021. Global rhythm style transfer without text transcriptions. *arXiv preprint arXiv:2106.08519*.
Kaizhi Qian, Yang Zhang, Heting Gao, Junrui Ni, Cheng-I Lai, David Cox, Mark Hasegawa-Johnson, and Shiyu Chang. 2022. Contentvec: An improved self-supervised speech representation by disentangling speakers. In *International Conference on Machine Learning*, pages 18003–18017. PMLR.
Xin Wang, Shinji Takaki, Junichi Yamagishi, Simon King, and Keiichi Tokuda. 2019. A vector quantized variational autoencoder (vq-vae) autoregressive neural f_0 model for statistical parametric speech synthesis. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 28:157–170.
Yi Ren, Chenxu Hu, Xu Tan, Tao Qin, Sheng Zhao, Zhou Zhao, and Tie-Yan Liu. 2020. Fastspeech 2: Fast and high-quality end-to-end text to speech.
arXiv preprint arXiv:2006.04558.
Yuxuan Wang, RJ Skerry-Ryan, Daisy Stanton, Yonghui Wu, Ron J Weiss, Navdeep Jaitly, Zongheng Yang, Ying Xiao, Zhifeng Chen, Samy Bengio, et al.
2017. Tacotron: Towards end-to-end speech synthesis. *arXiv preprint arXiv:1703.10135*.
Yi Ren, Ming Lei, Zhiying Huang, Shiliang Zhang, Qian Chen, Zhijie Yan, and Zhou Zhao. 2022.
Prosospeech: Enhancing prosody with quantized vector pre-training in text-to-speech. In *ICASSP 2022-*
2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 7577– 7581. IEEE.
Yi Ren, Yangjun Ruan, Xu Tan, Tao Qin, Sheng Zhao, Zhou Zhao, and Tie-Yan Liu. 2019. Fastspeech: Fast, robust and controllable text to speech. *Advances in* Neural Information Processing Systems, 32.
Hu Xu, Juncheng Li, Alexei Baevski, Michael Auli, Wojciech Galuba, Florian Metze, Christoph Feichtenhofer, et al. 2022. Masked autoencoders that listen.
arXiv preprint arXiv:2207.06405.
Jinhyeok Yang, Jae-Sung Bae, Taejun Bak, Youngik Kim, and Hoon-Young Cho. 2021a. Ganspeech:
Adversarial training for high-fidelity multi-speaker speech synthesis. *arXiv preprint arXiv:2106.15153*.
Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. 2022. Highresolution image synthesis with latent diffusion models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 10684–10695.
Shu-wen Yang, Po-Han Chi, Yung-Sung Chuang, Cheng-I Jeff Lai, Kushal Lakhotia, Yist Y Lin, Andy T Liu, Jiatong Shi, Xuankai Chang, GuanTing Lin, et al. 2021b. Superb: Speech processing universal performance benchmark. arXiv preprint arXiv:2105.01051.
Jiaming Song, Chenlin Meng, and Stefano Ermon.
2020a. Denoising diffusion implicit models. In Proc.
of ICLR.
Yang Song, Jascha Sohl-Dickstein, Diederik P Kingma, Abhishek Kumar, Stefano Ermon, and Ben Poole.
2020b. Score-based generative modeling through stochastic differential equations. In *Proc. of ICLR*.
Heiga Zen, Viet Dang, Rob Clark, Yu Zhang, Ron J
Weiss, Ye Jia, Zhifeng Chen, and Yonghui Wu. 2019.
Libritts: A corpus derived from librispeech for textto-speech. *arXiv preprint arXiv:1904.02882*.
Guangzhi Sun, Yu Zhang, Ron J Weiss, Yuan Cao, Heiga Zen, and Yonghui Wu. 2020. Fullyhierarchical fine-grained prosody modeling for interpretable speech synthesis. In ICASSP 2020-2020 IEEE international conference on acoustics, speech and signal processing (ICASSP), pages 6264–6268.
IEEE.
Hao Sun, Xu Tan, Jun-Wei Gan, Hongzhi Liu, Sheng Zhao, Tao Qin, and Tie-Yan Liu. 2019. Token-level ensemble distillation for grapheme-to-phoneme conversion. *arXiv preprint arXiv:1904.03446*.
Jindong Wang, Cuiling Lan, Chang Liu, Yidong Ouyang, Tao Qin, Wang Lu, Yiqiang Chen, Wenjun Zeng, and Philip Yu. 2022. Generalizing to unseen domains: A survey on domain generalization. *IEEE*
Transactions on Knowledge and Data Engineering.
Yuxuan Wang, Daisy Stanton, Yu Zhang, RJ-Skerry Ryan, Eric Battenberg, Joel Shor, Ying Xiao, Ye Jia, Fei Ren, and Rif A Saurous. 2018. Style tokens:
Unsupervised style modeling, control and transfer in end-to-end speech synthesis. In *International* Conference on Machine Learning, pages 5180–5189.
PMLR.
Eitan Richardson and Yair Weiss. 2018. On gans and gmms. In *Proc. of ICONIP*.
## A Details Of Models
In this section, we describe hyper-parameters and details of several modules.
## A.1 Model Configurations
We list the model hyper-parameters of ProsodyTTS in Table 5.
## A.2 Diffusion Mechanism
![11_image_0.png](11_image_0.png)
For the training prosody latent diffusion model, the clean prosodic representation derived by Prosody-MAE passes through the vector quantification layer, which is also adopted to optimize the latent diffusion model (LDM) via the forward diffusion process. In inference time, the LDM samples diverse latent representations within the prosodic space through reverse backward denoising. According to the spectrogram denoiser, sampling from the Gaussian prior distribution is regarded as a common assumption. The diffusion decoder receives the textual hidden representation as a conditional signal and iteratively denoises Gaussian noise to reconstruct the target distribution by reverse sampling.
## B Downstream Evaluation On Model Properties
In the fine-tuning phase, we remove the decoder and only fine-tune the encoder on the commonlyused dataset IEMOCAP (Busso et al., 2008) that contains about 12 hours of emotional speech. We use a fixed learning rate of 1e-4 and max iteration of 10k and fine-tune on 4 V100 GPUs for 60 epochs using the SUPERB (Yang et al., 2021b) framework.
We further evaluate the architecture and masking strategies designs in Prosody-MAE:
Network architecture. Similar to the MAE paper demonstrated for the visual domain, increasing the decoder depth only provides minor improvements if any, indicating that the decoder depth can be small relative to the encoder.
Masking strategies. We compare different masking ratios for pre-training Prosody-MAE, and observe that a high masking ratio (70% in our case)
is optimal for audio spectrograms. Due to the fact that audio spectrograms and images are continuous signals with significant redundancy, and thus SSL
models still could reconstruct results given most tokens dropped, which is consistent with the masked autoencoders (He et al., 2022) in the visual domain.
Comparision with other state-of-the-art. We compare our proposed Prosody-MAE with prior state-of-the-art SSL models, including: 1) wav2vec 2.0 (Baevski et al., 2020), 2) hubert (Hsu et al.,
2021), 3) robust hubert (Huang et al., 2022a), and 4) mae-ast (Baade et al., 2022) and find that our proposed Prosody-MAE achieves the best performance across all tasks compared to other systems.
Specifically, the majority of the speech SSL models focus on learning the linguistic content information, which try to disentangle unwanted variations (e.g. acoustic variations) from the content. In contrast, we hope to capture prosodic information from speech, and thus Prosody-MAE exhibits outperformed capability in capturing style attributes.
## C **Details Of Pre-Training And Fine-Tuning**
We list the pre-training and fine-tuning settings in Table 7.
| Settings | Values | |
|--------------------|--------------------|--------|
| Optimizer | Adam | |
| Base Learning Rate | 0.0001 | |
| Batch Size | 900 | |
| Optimizer Momentum | 0.9,0.98 | |
| Weight Decay | 0.01 | |
| Warmup Updates | 32000 | |
| Pre-training | Optimizer | Adam |
| Fine-tuning | Base Learning Rate | 0.0001 |
| Batch Size | 4 | |
## D Diffusion Probabilistic Models
Given i.i.d. samples {x0 ∈ R
D} from an unknown data distribution p*data*(x0). In this section, we introduce the theory of diffusion probabilistic model (Ho et al., 2020; Lam et al., 2021; Song et al., 2020a,b), and present diffusion and reverse process given by denoising diffusion probabilistic models
(DDPMs), which could be used to learn a model distribution pθ(x0) that approximates p*data*(x0).
| Hyperparameter | Prosody-TTS | |
|----------------------------------------|---------------------------------------|-----|
| Phoneme Embedding | 192 | |
| Encoder Layers | 4 | |
| Encoder Hidden | 256 | |
| Encoder Conv1D Kernel | 9 | |
| Encoder Conv1D Filter Size | 1024 | |
| Encoder Attention Heads | 2 | |
| Encoder Dropout | 0.1 | |
| Text Encoder | Duration Predictor Conv1D Kernel | 3 |
| Duration Predictor | Duration Predictor Conv1D Filter Size | 256 |
| Duration Predictor Dropout | 0.5 | |
| VQ Codebook Size | 1000 | |
| Latent Diffusion Residual Layers | 30 | |
| Latent Diffusion Residual Channels | 256 | |
| Latent Diffusion WaveNet Conv1d Kernel | 3 | |
| Latent Diffusion WaveNet Conv1d Filter | 512 | |
| Prosody Generator | Diffusion Embedding | 256 |
| Residual Layers | 20 | |
| Residual Channels | 256 | |
| WaveNet Conv1d Kernel | 3 | |
| WaveNet Conv1d Filter | 512 | |
| Total Number of Parameters | 53M | |
| Diffusion Decoder | | |
Table 5: Hyperparameters of Prosody-TTS models.
| Model | PA | PM EM | | | | | |
|--------------------------------------------------------------------------------|----------------|----------------|------------|----|-------|-------------|----------------|
| Layers | PA | PM EM | Mask Ratio | PA | PM EM | wav2vec 2.0 | 70.7 7.34 3.21 |
| HuBERT | 69.9 8.00 5.63 | | | | | | |
| Robust HuBERT 69.5 7.95 5.37 MAE-AST 73.1 8.17 5.43 Prosody-MAE 75.2 7.22 1.76 | | | | | | | |
| (c) Comparision with other state-of-the-art | | | | | | | |
| 2 | 75.2 7.22 1.76 | | | | | | |
| 4 | 75.3 7.41 2.01 | | | | | | |
| 6 | 75.5 7.73 2.25 | | | | | | |
| 8 | 74.6 7.85 2.52 | | | | | | |
| (a) Network Architecture | 80% | 75.2 7.22 1.76 | | | | | |
| 70% | 75.2 7.11 1.65 | | | | | | |
| 60% | 74.9 7.05 2.11 | | | | | | |
| 50% | 74.6 7.34 2.82 | | | | | | |
| (b) Masking Strategies | | | | | | | |
Table 6: **Ablations and model properties**. We report the evaluation metrics including accuracy (PA↑), mean absolute error (PM↓) in pitch recognition, and mean absolute error (EM↓) in energy recognition to evaluate model properties.
Diffusion process Similar as previous work (Ho et al., 2020; Song et al., 2020a), we define the data distribution as q(x0). The diffusion process is defined by a fixed Markov chain from data x0 to the latent variable xT :
$$q(\mathbf{x}_{1},\cdots,\mathbf{x}_{T}|x_{0})=\prod_{t=1}^{T}q(\mathbf{x}_{t}|\mathbf{x}_{t-1}),\qquad(3)$$
For a small positive constant βt, a small Gaussian noise is added from xtto the distribution of xt−1 under the function of q(xt|xt−1).
The whole process gradually converts data x0 to whitened latent xT according to the fixed noise schedule β1, · · · , βT .
q(xt|xt−1) := N (xt; p1 − βtxt−1, βtI) (4)
Efficient training is optimizing a random term of t with stochastic gradient descent:
$${\mathcal{L}}_{\theta}=\left\|\mathbf{\epsilon}_{\theta}\left(\alpha_{t}\mathbf{x}_{0}+{\sqrt{1-\alpha_{t}^{2}}}\mathbf{\epsilon}\right)-\mathbf{\epsilon}\right\|_{2}^{2}\quad(5)$$
Reverse process Unlike the diffusion process, reverse process is to recover samples from Gaussian noises. The reverse process is a Markov chain from xT to x0 parameterized by shared θ:
$$p_{\theta}(\mathbf{x}_{0},\cdots,\mathbf{x}_{T-1}|x_{T})=\prod_{t=1}^{T}p_{\theta}(\mathbf{x}_{t-1}|\mathbf{x}_{t}),\quad(\mathbf{6})$$
$\pmb{E}t-1,\beta_{t}I$) ...
where each iteration eliminates the Gaussian noise added in the diffusion process:
$$p(\mathbf{x}_{t-1}|\mathbf{x}_{t}):={\mathcal{N}}(\mathbf{x}_{t-1};\mu_{\theta}(\mathbf{x}_{t},t),\sigma_{\theta}(\mathbf{x}_{t},t)^{2}I)\tag{7}$$
## E Information Perturbation
XLSR-53 is pre-trained on 56k hours of speech in 53 languages, to provide linguistic information.
We apply the following functions (Qian et al.,
2020; Choi et al., 2021) on acoustic features (i.e.,
pitch, and energy) to create acoustic-perturbed speech samples Sˆ, while the linguistic content remains unchanged, including 1) formant shifting fs, 2) pitch randomization pr, and 3) random frequency shaping using a parametric equalizer peq.
- For fs, a formant shifting ratio is sampled uniformly from Unif(1, 1.4). After sampling the ratio, we again randomly decided whether to take the reciprocal of the sampled ratio or not.
- In pr, a pitch shift ratio and pitch range ratio are sampled uniformly from Unif(1, 2) and Unif(1, 1.5), respectively. Again, we randomly decide whether to take the reciprocal of the sampled ratios or not. For more details for formant shifting and pitch randomization, please refer to Parselmouth https://github.
com/YannickJadoul/Parselmouth.
- peq represents a serial composition of lowshelving, peaking, and high-shelving filters. We use one low-shelving HLS, one high-shelving HHS, and eight peaking filters HPeak.
## F Evaluation F.1 Subjective Evaluation
For MOS tests, the testers present and rate the samples, and each tester is asked to evaluate the subjective naturalness on a 1-5 Likert scale. For CMOS,
listeners are asked to compare pairs of audio generated by systems A and B and indicate which of the two audio they prefer, and choose one of the following scores: 0 indicating no difference, 1 indicating a small difference, 2 indicating a large difference and 3 indicating a very large difference.
For quality evaluation, we explicitly instruct the raters to "*(focus on examining the audio quality* and naturalness, and ignore the differences of style
(timbre, emotion and prosody).)". For prosody evaluation, we explicitly instruct the raters to "*(focus* on the naturalness of the prosody and style, and ignore the differences of content, grammar, or audio quality.)".
Our subjective evaluation tests are crowdsourced and conducted by 25 native speakers via Amazon Mechanical Turk. The screenshots of instructions for testers have been shown in Figure 4.
We paid $8 to participants hourly and totally spent about $800 on participant compensation. A small subset of speech samples used in the test is available at https://Prosody-TTS.github.io/.
## F.2 Objective Evaluation
Mel-cepstral distortion (MCD) (Kubichek, 1993)
measures the spectral distance between the synthesized and reference mel-spectrum features.
F0 Frame Error (FFE) combines voicing decision error and F0 error metrics to capture F0 information.
Number of Statistically-Different Bins (NDB)
and Jensen-Shannon divergence (JSD) (Richardson and Weiss, 2018). They measure diversity by 1)
clustering the training data into several clusters, and 2) measuring how well the generated samples fit into those clusters.
![14_image_0.png](14_image_0.png)
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
✓ A1. Did you describe the limitations of your work?
See section 6
✓ A2. Did you discuss any potential risks of your work?
See section 7
✓ A3. Do the abstract and introduction summarize the paper's main claims?
See section 1
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✗ **Did You Use Or Create Scientific Artifacts?**
Left blank.
B1. Did you cite the creators of artifacts you used?
No response.
B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
No response.
B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
No response.
B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
No response.
B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
No response.
B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
No response.
## C ✓ **Did You Run Computational Experiments?** See Section 4
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
See section 4.1 The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance.
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
See section 4.1
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
See section 4.1
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
Left blank.
D ✓ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
See section 4.1 and Appendix F
✓ D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
See section 4.1 and Appendix F
✓ D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)?
See section 4.1 and Appendix F
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
Not applicable. Left blank.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
Not applicable. Left blank.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
Not applicable. Left blank. |
wu-2023-duplex | Duplex Diffusion Models Improve Speech-to-Speech Translation | https://aclanthology.org/2023.findings-acl.509 | Speech-to-speech translation is a typical sequence-to-sequence learning task that naturally has two directions. How to effectively leverage bidirectional supervision signals to produce high-fidelity audio for both directions? Existing approaches either train two separate models or a multitask-learned model with low efficiency and inferior performance. In this paper, we propose a duplex diffusion model that applies diffusion probabilistic models to both sides of a reversible duplex Conformer, so that either end can simultaneously input and output a distinct language{'}s speech. Our model enables reversible speech translation by simply flipping the input and output ends. Experiments show that our model achieves the first success of reversible speech translation with significant improvements of ASR-BLEU scores compared with a list of state-of-the-art baselines. | # Duplex Diffusion Models Improve Speech-To-Speech Translation
## Xianchao Wu
NVIDIA
[email protected], [email protected]
## Abstract
Speech-to-speech translation is a typical sequence-to-sequence learning task that naturally has two directions. How to effectively leverage bidirectional supervision signals to produce high-fidelity audio for both directions?
Existing approaches either train two separate models or a multitask-learned model with low efficiency and inferior performance. In this paper, we propose a duplex diffusion model that applies diffusion probabilistic models to both sides of a reversible duplex Conformer, so that either end can simultaneously input and output a distinct language's speech. Our model enables reversible speech translation by simply flipping the input and output ends. Experiments show that our model achieves the first success of reversible speech translation with significant improvements of ASR-BLEU scores compared with a list of state-of-the-art baselines.
## 1 Introduction
Direct speech-to-speech translation (S2ST) (Lee et al., 2021; Inaguma et al., 2022), transforming a source language's speech to the target language's speech, is essential for online international communications and is friendly to numerous languages that do not have their own writing systems or textual vocabularies. S2ST circumvents a cascaded architecture (Lavie et al., 1997; Nakamura et al., 2006; Wahlster, 2000) of combining automatic speech recognition (ASR) of the source speech, textual source-to-target machine translation (MT),
and target text-to-speech (TTS) synthesis where multiple types of datasets are required, error propagates, latency is high, and unavailable for thousands of (spoken) languages who do not have a writing system.
For S2ST, speech-to-speech parallel data is required, and it is costly to collect a comparable size dataset with textual counterparts. To alleviate the data scarcity problem, self-supervised pre-training and data augmentation techniques were used by Popuri et al. (2022), and unsupervised and weaklysupervised speech and text data under Translatotron 2 (Jia et al., 2021) were leveraged by Jia et al.
(2022a). Techniques such as multi-task learning (Weiss et al., 2017), pseudo labeling (Pino et al.,
2020), and knowledge distillation (Inaguma et al.,
2021) have also been adapted and achieved promising results.
From S2ST architecture's point of view, Inaguma et al. (2022) describes four categories, (1)
Translatotron (Jia et al., 2019) style which includes a speech encoder and a spectrogram decoder, (2)
Translatotron2+ (Jia et al., 2021) style which inserts a first-pass text decoder followed by a TTS
encoder between the two modules of Translatotron,
(3) speech-to-unit translation (S2UT) (Lee et al.,
2021) that uses discrete clustered units of the target language speech instead of spectrogram, and (4)
UnitY (Inaguma et al., 2022) that inserts a firstpass text decoder followed by a text-to-unit (T2U)
encoder between the two modules in S2UT.
In this paper, following the motivations of textual duplex machine translation (Zheng et al., 2021), we leverage S2ST's two directions: effectively utilizing supervision signals from both directions is estimated to both relieve the pain of data scarcity and bring novel architectures of training and inferencing. Existing architectures (e.g., Translatotron1/2, S2UT, and UnitY) either train two separate models or a multitask-learned model with low efficiency and inferior performance. In contrast, we propose a duplex diffusion model that applies diffusion probabilistic models to both sides of a reversible duplex Conformer, so that either end can simultaneously input and output a distinct language's speech. Our model enables reversible speech translation by simply flipping the input and output ends. Experiments show that our model achieves the first success of reversible speech translation with significant improvements of ASR-BLEU scores compared with a list of strong baselines.
8035 Our contributions are concluded as follows:
- a novel *reversible duplex Conformer* that extends the widely used Conformer (Gulati et al.,
2020) architecture from ASR to S2ST, with reversible and symmetrical forward/reverse building blocks;
- a novel *duplex diffusion model* that jointly train one reversible duplex Conformer in diffusion ways to fit two translation directions;
- significantly better or comparable ASRBLEU scores are achieved by comparing with a list of state-of-the-art baselines including Translatotron, Translatotron2, S2UT, and UnitY.
## 2 Backgrounds 2.1 Reder
REDER, REversible Duplex TransformER, was proposed by Zheng et al. (2021) for reversible textual machine translation through duplex sequenceto-sequence (seq2seq) learning. A neural network with a parameter set θ is *duplex* when it satisfies the following conditions. First, the network has two ends, each end can take one language as its input or output. Second, the network defines a forward mapping function f→
θ: *X 7→ Y*, and a backward
(reverse) mapping function f←
θ: *Y 7→ X* , that satisfies two reversibilities: f←
θ = (f→
θ
)−1and f→
θ = (f←
θ
)−1. Third, the network satisfies the cycle consistencies: ∀x ∈ X : f←
θ
(f→
θ
(x)) = x and ∀y ∈ Y : f→
θ
(f←
θ
(y)) = y.
However, building duplex seq2seq networks is non-trivial and must satisfy the following constraints, **reversibility** and **homogeneity**. First, vanilla encoder-decoder network, such as frequently used Transformer (Vaswani et al., 2017)
and its variants, is irreversible. It is not feasible for the output end of the decoder side to take in input signals to exhibit the encoding functionality and vice versa. Second, the natural network architectures of the non-autoregressive encoder and the autoregressive decoder are heterogeneous.
Therefore, REDER, leverages reversible Transformer layers (Gomez et al., 2017a) and fully nonautoregressive modeling without explicit encoder and decoder division, is designed to solve these two challenges. As reported in (Zheng et al., 2021),
REDER worked in a duplex way that better exploited the bidirectional supervisions for achieving
![1_image_0.png](1_image_0.png)
better downstream reversible machine translation tasks' performance.
The architecture of REDER is a stack of L reversible duplex transformer layers where the 1-st to L/2-th layers are mirror of the (L/2+1)-th to L-th layers to ensure the whole model being symmetric.
In particular, each layer contains a multi-head selfattention (MHSA) module and a feed-forward network (FFN) module with a novel reversible design to ensure duplex behavior, where the input and output tensors of such a layer are split into two halves, Hl−1 = [H
(1)
l−1
; H
(2)
l−1
] and Hl = [H
(1)
l; H
(2)
l], respectively. Formally, the *regular form* of the l-th layer Fl performs as follow:
$$\begin{array}{c}{{[{\bf H}_{l}^{(1)};{\bf H}_{l}^{(2)}]={\mathcal F}_{l}([{\bf H}_{l-1}^{(1)};{\bf H}_{l-1}^{(2)}]),}}\\ {{{\bf H}_{l}^{(1)}={\bf H}_{l-1}^{(1)}+\mathrm{MHSA}({\bf H}_{l-1}^{(2)}),}}\\ {{{\bf H}_{l}^{(2)}={\bf H}_{l-1}^{(2)}+\mathrm{FFN}({\bf H}_{l}^{(1)}).}}\end{array}$$
]), (1)
), (2)
(4) (5) (6) (1) $\frac{1}{2}$ (2) $\frac{1}{2}$ (3) $\frac{1}{2}$ (4) $\frac{1}{2}$ (4) $\frac{1}{2}$ (5) $\frac{1}{2}$ (6) $\frac{1}{2}$ (7) $\frac{1}{2}$ (8) $\frac{1}{2}$ (9) $\frac{1}{2}$ (10) $\frac{1}{2}$ (11) $\frac{1}{2}$ (12) $\frac{1}{2}$ (13) $\frac{1}{2}$ (14) $\frac{1}{2}$ (15) $\frac{1}{2}$ (16) $\frac{1}{2}$ (17) $\frac{1}{2}$ (18) $\frac{1}{2}$
l). (3)
The *reverse form* F
−1 lcan be computed by subtracting the residuals:
$$\begin{array}{c}{{[{\bf H}_{l-1}^{(1)};{\bf H}_{l-1}^{(2)}]={\mathcal F}_{l}^{-1}([{\bf H}_{l}^{(1)};{\bf H}_{l}^{(2)}]),}}\\ {{{\bf H}_{l-1}^{(2)}={\bf H}_{l}^{(2)}-\mathrm{FFN}({\bf H}_{l}^{(1)}),}}\\ {{{\bf H}_{l-1}^{(1)}={\bf H}_{l}^{(1)}-\mathrm{MHSA}({\bf H}_{l-1}^{(2)}).}}\end{array}$$
l]), (4)
l), (5)
). (6)
## 2.2 Ddpm
We briefly introduce the diffusion and reconstruction processes in Denoising Diffusion Probabilistic Models (DDPM). Given a data point x0 sampled from a real data distribution q(x) (x0 ∼ q(x)), Ho et al. (2020) define a *forward diffusion process* in which small amount of Gaussian noise is added to sample x0 in T steps to obtain a sequence of noisy samples x0*, ...,* xT . A predefined (hyper-parameter)
variance schedule {βt ∈ (0, 1)}
T
t=1 controls the step sizes:
q(xt|xt−1) = N (xt; p1 − βtxt−1, βtI); (7) q(x1:T |x0) := Y T t=1 q(xt|xt−1). (8)
When T → ∞, xT is equivalent to following an isotropic Gaussian distribution. Note that, there are no trainable parameters used in this forward diffusion process.
Let αt = 1 − βt and α¯t =Qt i=1 αi, we can express an arbitrary step t's diffused sample xt by the initial data sample x0:
data sample $\mathbf{x}_0$. $\mathbf{x}_t=\sqrt{\bar{\alpha}_t}\mathbf{x}_0+\sqrt{1-\bar{\alpha}_t}\boldsymbol{\epsilon}_t.$ $\mathbf{a}\boldsymbol{\epsilon}\sim\boldsymbol{\epsilon}\cdot\mathcal{N}(0,\mathbf{I})$ shares the same sh.
Here, noise ϵt ∼ N (0,I) shares the same shape with x0 and xt.
In order to reconstruct from a Gaussian noise input xT ∼ N (0,I), we need to learn a model pθ to approximate the conditional probabilities to run the *reverse diffusion (reconstruction) process*:
$$p_{\theta}(\mathbf{x}_{t-1}|\mathbf{x}_{t})=\mathcal{N}(\mathbf{x}_{t-1};\mu_{\theta}(\mathbf{x}_{t},t),\mathbf{\Sigma}_{\theta}(\mathbf{x}_{t},t));$$ $$p_{\theta}(\mathbf{x}_{0:T}):=p(\mathbf{x}_{T})\prod_{t=1}^{T}p_{\theta}(\mathbf{x}_{t-1}|\mathbf{x}_{t}).\tag{10}$$
Note that the reverse conditional probability is tractable by first applying Bayes' rule to three Gaussian distributions and then completing the
"quadratic component" in the exp(·) function:
$$q(\mathbf{x}_{t-1}|\mathbf{x}_{t},\mathbf{x}_{0})=\mathcal{N}(\mathbf{x}_{t-1};\tilde{\boldsymbol{\mu}}_{t}(\mathbf{x}_{t},\mathbf{x}_{0}),\tilde{\beta}_{t}\mathbf{I})\tag{11}$$ $$=q(\mathbf{x}_{t}|\mathbf{x}_{t-1},\mathbf{x}_{0})\frac{q(\mathbf{x}_{t-1}|\mathbf{x}_{0})}{q(\mathbf{x}_{t}|\mathbf{x}_{0})}$$ (12) $$\propto\exp(-\frac{1}{2\tilde{\beta}_{t}}(\mathbf{x}_{t-1}-\tilde{\boldsymbol{\mu}}_{t})^{2}).\tag{13}$$
$\begin{array}{c}\vdash\\ \vdash\end{array}$ (12) $\begin{array}{c}\vdash\\ \vdash\end{array}$ (13) ...
Here, variance β˜tis a scalar and mean µ˜t depends on xt and noise ϵt:
$$\begin{array}{c}{{\tilde{\beta}_{t}=\frac{1-\bar{\alpha}_{t-1}}{1-\bar{\alpha}_{t}}\beta_{t};}}\\ {{\tilde{\mu}_{t}=\frac{1}{\sqrt{\alpha_{t}}}({\bf x}_{t}-\frac{1-\alpha_{t}}{\sqrt{1-\bar{\alpha}_{t}}}\epsilon_{t}).}}\end{array}$$
ϵt). (15)
Intuitively, q(xt−1|xt, x0) acts as a *reference* to learn pθ(xt−1|xt). We can use the variational lower bound (VLB) to optimize the negative loglikelihood:
$$-\log p_{\theta}({\bf x}_{0})\leq-\log p_{\theta}({\bf x}_{0})+$$ $$D_{{\bf KL}}(q({\bf x}_{1:T}|{\bf x}_{0})\parallel p_{\theta}({\bf x}_{1:T}|{\bf x}_{0})).\tag{16}$$
![2_image_0.png](2_image_0.png)
$$(\mathbf{9})$$
Using the definitions of q(x1:T |x0) in Equation 8 and pθ(x0:T ) in Equation 10, a loss item Lt (1 ≤
t ≤ T − 1) is expressed by:
$$\mathcal{L}_{t}=D_{\mathbf{KL}}(q(\mathbf{x}_{t}|\mathbf{x}_{t+1},\mathbf{x}_{0})\parallel p_{\theta}(\mathbf{x}_{t}|\mathbf{x}_{t+1}))\tag{17}$$ $$=\mathbb{E}_{\mathbf{x}_{0},\epsilon_{t}}\left[\frac{\parallel\hat{\boldsymbol{\mu}}_{t}-\boldsymbol{\mu}_{\theta}(\mathbf{x}_{t},t)\parallel^{2}}{2\parallel\boldsymbol{\Sigma}_{\theta}(\mathbf{x}_{t},t)\parallel_{2}^{2}}\right].$$
We further reparameterize the Gaussian noise term instead to predict ϵt from time step t's input xt and use a simplified objective that ignores the weighting term:
L
$$\sigma_{t}^{\rm simple}=\mathbb{E}_{t\sim[1,T],{\bf x}_{0},\epsilon_{t}}\left[\parallel\epsilon_{t}-\epsilon_{\theta}({\bf x}_{t},t)\parallel^{2}\right]\tag{18}$$
## 3 Reversible Duplex Conformer
In this paper, we extend the widely used Conformer
(Gulati et al., 2020) architecture for encoding the speech signals into dense and compact representations of both ends. Conformer has achieved impressive results in supervised ASR by leveraging transformer's capturing of content-based *global* interactions and convolutional neural network's exploiting of *local* features. In Conformer, two macaron-like FFN layers with half-step residual connections sandwich the MHSA and convolution
(CNN) modules followed by a post layer normalization. Besides supervised ASR, Conformer has also been successfully used in self-supervised Wav2Vec
(Schneider et al., 2019; Baevski et al., 2020) pretraining for downstream application tasks' finetuning.
(14) $\binom{15}{2}$ .
## 3.1 Forward And Reverse Building Blocks
Following (Gomez et al., 2017b; Zheng et al.,
2021), we split the l-th layer's (left-end) input tensor into two parts, Hl−1 = [x
(1); x
(2)]. The
(right-end) output tensor is split in the same way, Hl = [z
(1); z
(2)]. Thus, the forward target of this REDER-style Conformer layer is [x
(1); x
(2)] 7→
[z
(1); z
(2)].
We introduce two intermediate tensors, y
(1) and y
(2), for intuitive understanding and mathematical convenient. Both Conformer's four sub-modules
(two FFNs, one MHSA and one CNN) and four residual connections are kept in our reversible duplex Conformer.
Figure 2 depicts the forward (a) and reverse (b)
building blocks for one layer in our proposed reversible duplex Conformer. In Figure 2, the reverse block is a mirror of the forward block with symmetrical network connections and subtract residual connections. The forward block can be formally expressed as follows:
$$\begin{array}{l}{{{\bf{y}}^{(1)}={\bf{x}}^{(1)}+0.5\times\mathrm{FFN}({\bf{x}}^{(2)});}}\\ {{{\bf{y}}^{(2)}={\bf{x}}^{(2)}+\mathrm{MHSA}({\bf{y}}^{(1)});}}\\ {{{\bf{z}}^{(1)}={\bf{y}}^{(1)}+\mathrm{CNN}({\bf{y}}^{(2)});}}\\ {{{\bf{z}}^{(2)}={\bf{y}}^{(2)}+0.5\times\mathrm{FFN}({\bf{z}}^{(1)}).}}\end{array}$$
(2)); (19)
(1)); (20)
(2)); (21)
(1)). (22)
Symmetrically, the reverse block is expressed by:
$$\begin{array}{l}{{{\bf y}^{(2)}={\bf z}^{(2)}-0.5\times\mathrm{FFN}({\bf z}^{(1)});}}\\ {{{\bf y}^{(1)}={\bf z}^{(1)}-\mathrm{CNN}({\bf y}^{(2)});}}\\ {{{\bf x}^{(2)}={\bf y}^{(2)}-\mathrm{MHSA}({\bf y}^{(1)});}}\\ {{{\bf x}^{(1)}={\bf y}^{(1)}-0.5\times\mathrm{FFN}({\bf x}^{(2)}).}}\end{array}$$
(1)); (23)
(2)); (24)
(1)); (25)
(2)). (26)
We employ Layer Normalization (LN) (Ba et al.,
2016) at the beginning of each module, i.e., PreLN
(Xiong et al., 2020). The FFN module processes the input tensor x by six components:
$$\operatorname{FFN}(\mathbf{x})=p_{2}\circ W_{2}\circ p_{1}\circ\operatorname{SiLU}\circ W_{1}\circ\operatorname{LN}(\mathbf{x}).$$
Here, ◦ means a layer takes ◦'s right-hand-side network's output (e.g., LN(x)) as the input of
◦'s left-hand-side network (e.g., W1 to perform W1(LN(x))). W1 and W2 are two linear layers that preforms h 7→ 4h and 4h 7→ h linear projections, respectively. Two dropout layers p1 and p2 are used. The Sigmoid Linear Unit (SiLU)
(Elfwing et al., 2017) activation function is inserted between the two linear layers. The MHSA module contains three components:
MHSA = p ◦ Attention ◦ LN(x).
We use multi-head attention with relative positional embedding (Shaw et al., 2018) for the "Attention"
![3_image_0.png](3_image_0.png)
component. Note that, the attention module is extendable to cross-attention cases where a source sequence's encoded representation acts as memory
(i.e., key and value) to the target sequence. Finally, the CNN module utilizes two types of convolutions, pointwise (PW) and 1D depthwise (DW), to capture local-range dependencies of the input speech.
The idea of employing attention for global context modeling and convolution for local context modeling is also inspired by the long-short range attention mechanism used in the lite transformer
(Wu et al., 2020). Formally,
$$\begin{array}{c}{{\mathrm{CNN}(\mathbf{x})=p\circ\mathrm{PW}_{2}\circ\mathrm{Swish}\circ\mathrm{BN}}}\\ {{\circ\mathrm{DW}\circ\mathrm{Glu}\circ\mathrm{PW}_{1}\circ\mathrm{LN}(\mathbf{x}).}}\end{array}$$
$$(23)$$
Here, BN stands for batch normalization. Two types of activation functions, Glu (Dauphin et al.,
2016) and Swish (Ramachandran et al., 2017), are inserted between convolution networks.
## 3.2 Symmetric Network Architecture
As depicted in Figure 3, the forward and reverse building blocks are arranged symmetrically in the whole architecture to achieve homogeneous computations. Specifically, in the L building blocks, the 1-st to L/2-th layers are set to be reverse blocks whereas the (L/2 + 1)-th to L-th layers be the regular forward form:
$$\begin{array}{c}{{f_{\theta}^{\rightarrow}({\bf x})={\mathcal F}_{L}\circ\cdots\circ{\mathcal F}_{L/2+1}}}\\ {{\qquad\qquad\circ{\mathcal F}_{L/2}^{-1}\circ\cdots\circ{\mathcal F}_{1}^{-1}({\bf x});}}\\ {{f_{\theta}^{\leftarrow}({\bf z})={\mathcal F}_{1}\circ\cdots\circ{\mathcal F}_{L/2}}}\\ {{\qquad\qquad\qquad\circ{\mathcal F}_{L/2+1}^{-1}\circ\cdots\circ{\mathcal F}_{L}^{-1}({\bf z}).}}\end{array}$$
This design makes our reversible duplex Conformer to be homogeneous: the forward computational operation chain reads as a palindrome string < fcmf · · · f cmf|fmcf · · · *fmcf >* and so does the reverse chain, where *f, m, c* denotes FFN, MHSA and CNN, respectively.
Algorithm 1: Duplex Diffusion Model
(DDM) Training Algorithm - One Step 1 Given: x, y, Ex, Ey 2 x0 = Ex(x) ▷ encode by pretrained wav2vec models 3 y0 = Ey(y) ▷ encode by pretrained wav2vec models 4 t ∼ Uniform(1*, ..., T*)
5 ϵx ∼ Nx(0, I), ϵy ∼ Ny(0, I)
6 xt =
√α¯t,xx0 +p1 − α¯t,xϵx 7 yt =
√α¯t,yy0 +p1 − α¯t,yϵy 8 ϵ θx =
←−−Mθ(xt*, t,* y0) ▷ reverse, given y0 9 ϵ θy =
−−→Mθ(yt*, t,* x0) ▷ forward, given x0 10 LDDM = λ1 ∥ ϵx − ϵ θx ∥
2 +λ2 ∥ ϵy − ϵ θy ∥
2
There are several selections of input types of the source and target ends in Figure 3. Popuri et al. (2022) explores self-supervised pretrained models such as (1) wav2vec2 (Baevski et al., 2020)
to encode the source speech and (2) Unit mBART
(Liu et al., 2020a) to encode the target discrete units (Lee et al., 2021), and then translate source speech into target clustered units through fine-tuning. The generated discrete unit sequence is then sent to an independently trained "text"-to-speech (TTS)
model to obtain the final waves.
In this paper, we follow the usage of discrete units that are generated by first using pretrained HuBERT (Hsu et al., 2021) to encode the target speech and then perform k-means clustering. Then, we use the DiffWave (Kong et al., 2020b) vocoder to generate the final waves.
## 4 Duplex Diffusion Model
Cycle consistency has been utilized in textual neural machine translation (Zheng et al., 2021) and image-to-image translation (Su et al., 2022). In this paper, we propose a duplex diffusion model that alternatively optimizes both directions by two diffusion processes.
The training algorithm is described in Algorithm 1. Generally, we borrow DDPM (Ho et al., 2020)'s architecture and extend it to a duplex scenario where sequences of two ends are diffused alternatively during training. At the beginning, the source sequence x and target sequence y are encoded into dense representations by pretrained wav2vec models Ex, Ey through self-supervised learning on monolingual datasets, respectively. Then, time t and two normal Gaussian noise signals ϵx, ϵy are sampled. Note that the lengths of the source and target sequences are diverse. We pre-define two variance schedules {βt,x ∈ (0, 1)}
T
t=1 and {βt,y ∈
(0, 1)}
T
t=1, for the source and target languages, respectively. Thus, we have αt,x = 1 − βt,x, α¯t,x = Qt i=1 αi,x, αt,y = 1 − βt,y and α¯t,y =Qt i=1 αi,x, as used in Algorithm 1.
The variance schedules, initial sequence representations and normal Gaussian noises work together to give us diffused representations, xt and yt, respectively. They are then sent to the reversible duplex Conformer architecture Mθ (Figure 3) to predict the noises.
Originally in Figure 3, we are intended to produce x0 from y0 in the reverse process of
←−−Mθ.
Now, we have two additional inputs, t and xt. The output also changes from predicting x0 to estimating ϵ θx which shares the same shape with x0.
We thus have two ways to organize the network *←−−M*θ: (1) reuse Figure 3's architecture and predicting ϵ θx from y0 by taking xt as the "memory" which acts as key and value in the cross-attention network in Conformer, or (2) follow traditional stable diffusion models (Rombach et al., 2021)
and predict ϵ θx from xt by taking y0 as the conditional "memory". That is, in the MHSA function, we set query xtto be and key/value to be y0, MHSA(q = xt, k = y0, v = y0), so that the identical lengths of q = xt and ϵ θx are ensured.
Note that, in the second choice used in our experiments, we are not limited to use a reversible duplex Conformer, i.e., any transformer architecture with cross-attention are applicable. These two options still hold during inferencing from given y0, T, and xT to iteratively reconstruct x0.
Since the lengths of the source and target sequences are diverse, we follow textual duplex translation (Zheng et al., 2021) and double the source end's length by a upsampling convolutional network.
We only describe the reverse process
←−−Mθ and the forward process
−−→Mθ shares the similar strategies.
To achieve a full cycle consistency, predicting the target Gaussian noise from source sequence by taking target noisy sequence as the "conditional memory" is more appropriate in current scenario setting so that both translation directions are achieved in one duplex diffusion model.
After the reverse and forward processes, we can compute the MSE losses of between the two pairs of reference and predicted noises. They are interpolated together by hyper-parameter weights λ1
(=0.5) and λ2 (=0.5) to the final loss LDDM to be optimized.
## 5 Training
Our reversible duplex Conformer is largely inspired by REDER (Zheng et al., 2021). The novel parts are that (1) we select and reconstruct convolutionenhanced Conformer (Gulati et al., 2020) to synthetically capture global information by attentions and local context by convolutions, and (2) we extend from textual duplex machine translation to
(dense) duplex speech-to-speech translation. When training our reversible duplex Conformer, we borrow and adapt the losses that are used in REDER
to fit our scenario.
In REDER, three types of losses were used. The first loss is to model the variable-length of source and target sequences by a latent alignment approach, i.e., the Connectionist Temporal Classification (CTC) (Graves et al., 2006). Starting from the conditional independence assumption, CTC is capable of efficiently (by dynamic programming) finding all valid (yet monotonic) alignments a which derives from the target y by allowing consecutive repetitions and inserting blank tokens. The CTC
loss is defined by:
$${\mathcal{L}}_{\mathrm{CTC}}=-\mathrm{log}p_{\mathrm{CTC}}(\mathbf{y}|\mathbf{x};\theta)=-\mathrm{log}\sum_{\mathbf{a}}p_{\theta}(\mathbf{a}|\mathbf{x}).$$
We adapt this loss for speech translation when the target are clustered unit sequences. We use the MSE loss instead when the target is a sequence of mel-spectrogram. Also, to ensure the source sequence is always longer than the target sequence, we upsample the source sequences by convolutional layers before sending them to the reversible duplex Conformer.
The second loss measures the layer-wise forward-backward agreement (fba, measured by cosine similarity) of between the forward l-th layer's representation
−→Hl = Fl(
−→Hl−1) and the reverse representation
←−
Hl = Fl(
←−
Hl+1). Thus,
$${\mathcal{L}}_{\mathrm{{fba}}}(\mathbf{y}|\mathbf{x};\theta)={\frac{1}{L}}\sum_{l=1}^{L}\left\{1-\cos({\vec{\mathbf{H}}}_{l},\mathrm{{sg}}({\overleftarrow{\mathbf{H}}}_{l}))\right\},$$
where sg denotes the stop-gradient operation.
The third loss explicitly describe the cycle consistency of a pair of seq2seq tasks, i.e., we minimize the distance between the original x and its reconstruction f←
θ
(f→
θ
(x)) by,
$${\mathcal{L}}_{\mathrm{cc}}(\mathbf{x};\theta)=\operatorname{distance}(\mathbf{x},f_{\theta}^{+}(f_{\theta}^{\rightarrow}(\mathbf{x}))).$$
For speech translation, the source sequence can be expressed by mel-spectrogram or clustered units so that MSE loss or CTC loss can be applied to them, respectively. Finally, these three types of losses are doubled to two directions and interpolated together for the final loss. That is, when predicting discrete units, the final loss function is:
$$\begin{array}{c}{{{\mathcal{L}}_{\mathrm{unit}}=w_{1}*{\mathcal{L}}_{\mathrm{CTC}}(\mathbf{y}|\mathbf{x})+w_{2}*{\mathcal{L}}_{\mathrm{CTC}}(\mathbf{x}|\mathbf{y})}}\\ {{\qquad+w_{3}*{\mathcal{L}}_{\mathrm{fba}}(\mathbf{y}|\mathbf{x})+w_{4}*{\mathcal{L}}_{\mathrm{fba}}(\mathbf{x}|\mathbf{y})}}\\ {{\qquad+w_{5}*{\mathcal{L}}_{\mathrm{cc}}(\mathbf{y})+w_{6}*{\mathcal{L}}_{\mathrm{cc}}(\mathbf{x}).}}\end{array}$$
When predicting mel-spectrograms, the final loss
function is:
$$\begin{array}{c}{{{\mathcal{L}}_{\mathrm{mel}}=w_{1}*{\mathcal{L}}_{\mathrm{MSE}}(\mathbf{y}|\mathbf{x})+w_{2}*{\mathcal{L}}_{\mathrm{MSE}}(\mathbf{x}|\mathbf{y})}}\\ {{\qquad+w_{3}*{\mathcal{L}}_{\mathrm{fba}}(\mathbf{y}|\mathbf{x})+w_{4}*{\mathcal{L}}_{\mathrm{fba}}(\mathbf{x}|\mathbf{y})}}\\ {{\qquad+w_{5}*{\mathcal{L}}_{\mathrm{cc}}(\mathbf{y})+w_{6}*{\mathcal{L}}_{\mathrm{cc}}(\mathbf{x}).}}\end{array}$$
We reuse the default hyper-parameter values described in REDER (Zheng et al., 2021) for setting weights w1 to w6.
In our experiments, we first train the reversible duplex Conformer architecture by a predefined K1
(=200,000) iterations and then apply the duplex diffusion training algorithm as shown in Algorithm 1.
After another predefined K2 (=200,000) iterations, we fix the diffusion processes and focus on updating the reversible duplex Conformer only so that traditional search algorithms such as beam search can be used for seeking target hypotheses.
## 6 Experimental Setups 6.1 Data
To compare with state-of-the-art baselines' reported results, we align with UnitY (Inaguma et al.,
2022) and use three S2ST datasets: (1) Fisher Es→En (Post et al., 2013) with 170-hour Spanish
(Es) conversational telephone speech with textual transcriptions in Es and En. The English speech is synthesized by a single-female-speaker TTS model.
(2) CVSS-C (Jia et al., 2022b), a public multilingual S2ST corpus from CoVoST2 (Wang et al.,
2020). Again, a single-female-speaker TTS model is employed to synthesize the target speech. (3)
Multi-domain En↔Es corpora (Popuri et al., 2022).
We follow (Inaguma et al., 2022) to collect 1983hour source speech for En→Es and 1404-hour source speech for Es→En.
8040
## 6.2 Pre-Training And Pre-Processing
We use the pretrained wav2vec2.0 (Baevski et al.,
2020) with a 24-layer Conformer (Gulati et al.,
2020) self-trained on the Libri-Light dataset (Kahn et al., 2019), HuBERT (Hsu et al., 2021), mHuBERT (Popuri et al., 2022), and mBART (Liu et al.,
2020b) given in Table 9 of (Inaguma et al., 2022).
For acoustic feature extraction, discrete unit extraction (100 clusters) and text normalization (e.g.,
for evaluation score computing), we follow (Popuri et al., 2022; Inaguma et al., 2022).
## 6.3 Vocoder
Instead of using the HiFi-GAN vocoder (Kong et al., 2020a; Polyak et al., 2021) which converts mel-spectrograms or discrete units to waveform sequences for TTS and direct speech-tospectrogram/unit models, we borrow a comparable diffusion based vocoder, DiffWave (Kong et al.,
2020b), for reconstructing waveforms from spectrogram or unit sequences.
## 6.4 Training And Decoding Configurations
We implement our models based on the Fairseq toolkit1(Ott et al., 2019). All our models are optimized with a mixed precision training for footprint saving. Our reversible duplex Conformer uses the settings of Conformer-Large with 135.1M parameters (Gulati et al., 2020). The two diffusion variance schedules used in our duplex diffusion model follow stable diffusion (Rombach et al., 2021). We use a NVIDIA DGX-A100*8 workstation to perform the training with a total of 2,500 GPU hours.
During inferencing, we set the beam size to be 10 which aligns with most of the baselines for fair comparison. Other configurations not mentioned here follow their default settings in their open-source repositories.
## 6.5 Evaluation
We use a pre-trained ASR model to transcribe the target speech into texts and then calculate 4gram BLEU scores (Papineni et al., 2002), denoted as ASR-BLEU. The target languages' ASR
models are fine-tuned from pretrained wav2vec2.0
(Baevski et al., 2020) models with the CTC objective (Graves et al., 2006) when we taking discrete unit sequences as the prediction target. The same criterion has been used in (Inaguma et al., 2022).
1https://github.com/facebookresearch/fairseq
| Model | dev | dev2 | test |
|-------------------|-------|--------|--------|
| ASR-MT-TTS | 42.1 | 43.5 | 43.9 |
| S2TT-TTS, C | 47.8 | 48.9 | 48.3 |
| S2TT-TTS, C-w2v2 | 51.0 | 52.2 | 52.1 |
| S2Sp-Tn, C | 43.9 | 44.4 | 43.8 |
| S2Sp-Tn, C-w2v2 | 45.5 | 47.6 | 46.3 |
| S2Sp-Tn2+, C | 50.4 | 51.1 | 50.8 |
| S2Sp-Tn2+, C-w2v2 | 58.4 | 59.5 | 58.6 |
| S2Sp-RDC (Ours) | 46.1 | 47.3 | 47.0 |
| S2Sp-RDC, w2v2 | 50.7 | 51.5 | 51.0 |
| S2Sp-DDM (Ours) | 52.4 | 55.1 | 54.8 |
| S2Sp-DDM, w2v2 | 58.9 | 59.8 | 59.1 |
| S2U, C | 46.2 | 47.6 | 47.4 |
| S2U, C-w2v2 | 53.4 | 53.9 | 53.7 |
| UnitY, C | 50.5 | 51.6 | 51.4 |
| UnitY, C-w2v2 | 55.1 | 56.5 | 55.9 |
| S2U-RDC (Ours) | 48.1 | 49.0 | 48.5 |
| S2U-RDC, w2v2 | 50.8 | 52.1 | 51.8 |
| S2U-DDM (Ours) | 52.2 | 53.6 | 53.1 |
| S2U-DDM, w2v2 | 56.3 | 58.0 | 57.4 |
## 7 Experimental Results
7.1 Fisher Es→ En In Table 1, we compare the ASR-BLEU scores of our systems (RDC and DDM) with three cascaded systems, four speech-to-spectrogram baselines which are variants of Translatotron (Jia et al.,
2019, 2021), and four speech-to-unit baselines which are variants of (Lee et al., 2021) and UnitY
(Inaguma et al., 2022). Baseline results are originally listed in (Inaguma et al., 2022).
We use RDC to denote our reversible duplex Conformer architecture that are trained in a similar way with textual REDER (Zheng et al., 2021).
Our DDM further "boost" the quality of pretrained RDC models by bidirectional diffusion processes and can be recognized as an integration of the diffusion framework with RDC. Of the three categories, S2Sp and S2U achieved significantly better
(p < 0.01) ASR-BLEU scores than the three traditional cascaded systems. In the S2Sp paradigm, our
"S2Sp-DDM, w2v2" model achieves comparable results with the best baseline "S2Sp-Tn2+, C-w2v2".
In the S2U paradigm, our model "S2U-DDM,
w2v2" achieves significantly better (p < 0.05) re-
Model Avg. High Mid Low
S2TT-TTS, ASR 12.7 30.7 18.3 4.4
S2TT-TTS, w2v-b 13.2 21.3 16.1 9.2
S2Sp-Tn2, w2v-b 17.9 32.5 22.9 10.9
S2Sp-Tn2+, w2v-b 20.8 31.6 25.4 **15.4**
S2Sp-RDC (Ours) 18.2 32.4 22.1 10.2
S2Sp-DDM (Ours) **22.1 33.5 27.4** 15.2
S2U, w2v-b 20.8 31.6 25.4 15.4 UnitY, w2v-b 24.5 34.6 28.9 19.3 S2U-RDC (Ours) 22.1 32.5 27.1 17.8
S2U-DDM (Ours) **24.9 35.2 30.2 20.4**
sults than the best baseline "UnitY, C-w2v2", with 1.2%, 1.5% and 1.5% absolute ASR-BLEU points.
These reflects that our proposed duplex seq2seq learning can be boosted by the bidirectional diffusion processes to better capture the translation distributions of among the source and target sides.
In addition, wav2vec2.0 acts as an essential component for all the model variants.
Table 1 also lists four variants of our models for ablation study. When we only use S2U-RDC, it performs better than the S2U+Conformer baseline.
However, this advantage disappears when w2v2 is further employed to these two variants. S2URDC also performs relatively worse than UnitY
which employs two pass decoding of basing on texts and units whereas our S2U-RDC uses units only. These reflect that, (1) additional textual information brings better results than duplex training,
(2) diffusion processes can partly "hedge" the benefits from two-pass decoding used in UnitY and enhance the performance of duplex translations.
## 7.2 Cvss-C
The CVSS-C corpus's ASR-BLEU scores of six baselines from three categories and our models are listed in Table 2. We observe almost the same tendencies with the result comparisons in the Fisher task (Table 1). The best baseline is still the two-pass UnitY model enhanced by a pretrained wav2vec-BERT model. Our S2U-DDM model improves UnitY by 0.4% ASR-BLEU points on average, comparable yet not significant.
7.3 Multi-domain En↔Es The bidirectional multi-domain En↔Es results are listed in Table 3. We again compare with six stateof-the-art baselines in three categories. On both directions, our model variants meet the best performances on the two test sets. We notice that the baselines perform less stable under the Europarl-ST corpus with ASR-BLEU ranges from 23.4% to 34.2%.
In the S2Sp scenario, both our RDC and DDM variants perform significantly better (p < 0.01) than the two baselines. Our S2U-DDM variant performs significantly better (p < 0.05) than UnitY and is comparable to the best cascaded system. Note that we only require one run training for bidirectional translations.
## 7.4 Inference Speed
| Model En→Es | E-ST | MuST-C |
|-----------------|----------|----------|
| ASR-MT-TTS | 36.8 | 30.8 |
| S2TT-TTS | 36.4 | 33.4 |
| S2Sp-Tn2+ | 35.6 | 33.5 |
| S2Sp-Tn2+, mB | 36.9 | 34.3 |
| S2Sp-RDC (Ours) | 35.1 | 32.7 |
| S2Sp-DDM (Ours) | 37.2 | 34.3 |
| UnitY | 35.1 | 33.7 |
| UnitY, mB | 35.3 | 34.1 |
| S2U-RDC (Ours) | 34.7 | 32.6 |
| S2U-DDM (Ours) | 35.8 | 34.5 |
| Model Es→En | CoVoST-2 | E-ST |
| ASR-MT-TTS | 32.9 | 34.2 |
| S2TT-TTS | 37.2 | 34.0 |
| S2Sp-Tn2+ | 37.0 | 23.4 |
| S2Sp-Tn2+, mB | 37.2 | 23.7 |
| S2Sp-RDC (Ours) | 34.5 | 30.6 |
| S2Sp-DDM (Ours) | 37.1 | 32.8 |
| UnitY | 35.4 | 30.8 |
| UnitY, mB | 36.4 | 33.1 |
| S2U-RDC (Ours) | 35.1 | 31.2 |
| S2U-DDM (Ours) | 36.7 | 34.0 |
We use a NVIDIA DGX-A100*8 workstation to perform the inferencing comparison without additional engineering optimization. We randomly select 500 utterances from the multi-domain Es→En dev set. For end-to-end S2ST inferencing, our final RDC with one-pass decoding achieved 1.72× decoding speed-ups over the best-performance UnitY
(Inaguma et al., 2022) baseline which requires a two-pass text+unit decoding.
## 7.5 Human Evaluation
Finally, we performed an audio-only human evaluation to evaluate the translation quality and acceptances of the best baseline UnitY and our DDM.
For direct comparison, we use the mTEDx test with 989 samples. We obtained a mean translation quality score of 4.202(/5.0) which is comparable to UnitY's 4.197 and an acceptable ratio of 92.89%
which is also comparable to UnitY's 92.94%.
## 8 Conclusion
Aiming at effectively leveraging bidirectional supervision signals of speech-to-speech translation
(S2ST), we have proposed two models for duplex S2ST, a reversible duplex Conformer and a duplex diffusion model. We compare with cascaded S2ST models, single/multi-pass speech-tospectrogram/unit models and report significantly better or comparable ASR-BLEU and humanevaluated scores, with fewer training time and faster inference speed.
## 9 Limitations
Our duplex diffusion model and reversible duplex Conformer architecture do not explicitly take reordering as an essential challenge. However, depicts the language pairs described in the experiments, there are languages such as English and Japanese which shares subject-verb-object (SVO)
and subject-object-verb (SOV) word orders. These limit the scalability of our proposed methods and external pre-ordering (Zhao et al., 2018; Wu et al.,
2011) or post-ordering (Goto et al., 2013) techniques on clustered units of speech should be taken into consideration in the future work.
Large-scale unlabeled speech data is required to train self-supervised wav2vec2.0 or HuBERT models. However, this is frequently not easy to collect.
Moreover, it is even more difficult to collect paired speech-to-speech data and existing TTS models for generating speech from text are still under developing. These are pre-conditions of applying our proposed approaches.
Finally, we still need to train pair-by-pair for S2ST which is quadratic to the number of languages. Our approach is less effective than textual multilingual machine translation architecture in which linear number of translation models are required and achieved comparable results than pairwise baselines. Multilingual S2ST requires novel training architectures and inferencing algorithms.
## 10 Ethics Statement
Our target is to build direct speech-to-speech translation systems with a duplex idea of training both directions in one run. We try our best to reuse existing pretrained wav2vec2.0, HuBERT, mHuBERT
and mBART models to save energy consuming.
In one run, we require much less GPU-hours for obtaining S2ST models for both direction usages.
However, compared with textual duplex MT systems, pre-processing of speech signals still requires much higher costing of GPU-hours and as listed in our limitation section (Section 9), smarter ways of multilingual S2ST architectures are preferred in the future to reduce the cost of energy from current quadratic to linear number of models.
Generally, S2ST circumvents traditional cascaded systems which concatenate ASR, MT and TTS with high latency and high requirements of datasets. There are 3,000 around languages in the world who do not have their own writing systems or textual vocabularies. Through our duplex S2ST
models, we hope to be friendly to these languages so that more and more languages can be covered.
## References
Jimmy Ba, Jamie Ryan Kiros, and Geoffrey E. Hinton.
2016. Layer normalization. *ArXiv*, abs/1607.06450.
Alexei Baevski, Yuhao Zhou, Abdelrahman Mohamed, and Michael Auli. 2020. wav2vec 2.0: A framework for self-supervised learning of speech representations.
In *Advances in Neural Information Processing Systems*, volume 33, pages 12449–12460. Curran Associates, Inc.
Yann N. Dauphin, Angela Fan, Michael Auli, and David Grangier. 2016. Language modeling with gated convolutional networks. *CoRR*, abs/1612.08083.
Stefan Elfwing, Eiji Uchibe, and Kenji Doya. 2017.
Sigmoid-weighted linear units for neural network function approximation in reinforcement learning.
CoRR, abs/1702.03118.
Aidan N. Gomez, Mengye Ren, Raquel Urtasun, and Roger B. Grosse. 2017a. The reversible residual network: Backpropagation without storing activations.
CoRR, abs/1707.04585.
Aidan N Gomez, Mengye Ren, Raquel Urtasun, and Roger B Grosse. 2017b. The reversible residual network: Backpropagation without storing activations.
In *Advances in Neural Information Processing Systems*, volume 30. Curran Associates, Inc.
Isao Goto, Masao Utiyama, and Eiichiro Sumita. 2013.
Post-ordering by parsing with itg for japanese-english
statistical machine translation. *ACM Transactions on* Asian Language Information Processing, 12(4).
Alex Graves, Santiago Fernández, Faustino Gomez, and Jürgen Schmidhuber. 2006. Connectionist temporal classification: Labelling unsegmented sequence data with recurrent neural networks. In *Proceedings of* the 23rd International Conference on Machine Learning, ICML '06, page 369–376, New York, NY, USA.
Association for Computing Machinery.
Anmol Gulati, James Qin, Chung-Cheng Chiu, Niki Parmar, Yu Zhang, Jiahui Yu, Wei Han, Shibo Wang, Zhengdong Zhang, Yonghui Wu, and Ruoming Pang.
2020. Conformer: Convolution-augmented Transformer for Speech Recognition. In *Proc. Interspeech* 2020, pages 5036–5040.
Jonathan Ho, Ajay Jain, and Pieter Abbeel. 2020.
Denoising diffusion probabilistic models. *CoRR*,
abs/2006.11239.
Wei-Ning Hsu, Benjamin Bolte, Yao-Hung Hubert Tsai, Kushal Lakhotia, Ruslan Salakhutdinov, and Abdelrahman Mohamed. 2021. Hubert: Self-supervised speech representation learning by masked prediction of hidden units. *CoRR*, abs/2106.07447.
Hirofumi Inaguma, Tatsuya Kawahara, and Shinji Watanabe. 2021. Source and target bidirectional knowledge distillation for end-to-end speech translation. In *Proceedings of the 2021 Conference of* the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1872–1881, Online. Association for Computational Linguistics.
Hirofumi Inaguma, Sravya Popuri, Ilia Kulikov, PengJen Chen, Changhan Wang, Yu-An Chung, Yun Tang, Ann Lee, Shinji Watanabe, and Juan Pino. 2022.
Unity: Two-pass direct speech-to-speech translation with discrete units. *ArXiv*, abs/2212.08055.
Ye Jia, Yifan Ding, Ankur Bapna, Colin Cherry, Yu Zhang, Alexis Conneau, and Nobuyuki Morioka.
2022a. Leveraging unsupervised and weaklysupervised data to improve direct speech-to-speech translation. In *Interspeech*.
Ye Jia, Michelle Tadmor Ramanovich, Tal Remez, and Roi Pomerantz. 2021. Translatotron 2: High-quality direct speech-to-speech translation with voice preservation. In *International Conference on Machine* Learning.
Ye Jia, Michelle Tadmor Ramanovich, Quan Wang, and Heiga Zen. 2022b. CVSS corpus and massively multilingual speech-to-speech translation. In *Proceedings of the Thirteenth Language Resources and* Evaluation Conference, pages 6691–6703, Marseille, France. European Language Resources Association.
Ye Jia, Ron J. Weiss, Fadi Biadsy, Wolfgang Macherey, Melvin Johnson, Z. Chen, and Yonghui Wu. 2019.
Direct speech-to-speech translation with a sequenceto-sequence model. In *Interspeech*.
Jacob Kahn, Morgane Rivière, Weiyi Zheng, Evgeny Kharitonov, Qiantong Xu, Pierre-Emmanuel Mazaré, Julien Karadayi, Vitaliy Liptchinsky, Ronan Collobert, Christian Fuegen, Tatiana Likhomanenko, Gabriel Synnaeve, Armand Joulin, Abdelrahman Mohamed, and Emmanuel Dupoux. 2019. Libri-light: A
benchmark for ASR with limited or no supervision.
CoRR, abs/1912.07875.
Jungil Kong, Jaehyeon Kim, and Jaekyoung Bae. 2020a.
Hifi-gan: Generative adversarial networks for efficient and high fidelity speech synthesis. *ArXiv*,
abs/2010.05646.
Zhifeng Kong, Wei Ping, Jiaji Huang, Kexin Zhao, and Bryan Catanzaro. 2020b. Diffwave: A versatile diffusion model for audio synthesis. *ArXiv*,
abs/2009.09761.
A. Lavie, A. Waibel, L. Levin, M. Finke, D. Gates, M. Gavalda, T. Zeppenfeld, and Puming Zhan. 1997.
Janus-iii: speech-to-speech translation in multiple languages. In *1997 IEEE International Conference* on Acoustics, Speech, and Signal Processing, volume 1, pages 99–102 vol.1.
Ann Lee, Peng-Jen Chen, Changhan Wang, Jiatao Gu, Xutai Ma, Adam Polyak, Yossi Adi, Qing He, Yun Tang, Juan Miguel Pino, and Wei-Ning Hsu. 2021.
Direct speech-to-speech translation with discrete units. *CoRR*, abs/2107.05604.
Yinhan Liu, Jiatao Gu, Naman Goyal, Xian Li, Sergey Edunov, Marjan Ghazvininejad, Mike Lewis, and Luke Zettlemoyer. 2020a. Multilingual denoising pre-training for neural machine translation. *CoRR*,
abs/2001.08210.
Yinhan Liu, Jiatao Gu, Naman Goyal, Xian Li, Sergey Edunov, Marjan Ghazvininejad, Mike Lewis, and Luke Zettlemoyer. 2020b. Multilingual denoising pre-training for neural machine translation. *Transactions of the Association for Computational Linguistics*, 8:726–742.
S. Nakamura, K. Markov, H. Nakaiwa, G. Kikui, H. Kawai, T. Jitsuhiro, J.-S. Zhang, H. Yamamoto, E. Sumita, and S. Yamamoto. 2006. The atr multilingual speech-to-speech translation system. *IEEE*
Transactions on Audio, Speech, and Language Processing, 14(2):365–376.
Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, and Michael Auli. 2019. fairseq: A fast, extensible toolkit for sequence modeling. In *Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics (Demonstrations)*,
pages 48–53, Minneapolis, Minnesota. Association for Computational Linguistics.
Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In *Proceedings of the* 40th Annual Meeting of the Association for Computational Linguistics, pages 311–318, Philadelphia, Pennsylvania, USA. Association for Computational Linguistics.
Juan Miguel Pino, Qiantong Xu, Xutai Ma, Mohammad Javad Dousti, and Yun Tang. 2020. Self-training for end-to-end speech translation. In *Interspeech*.
Adam Polyak, Yossi Adi, Jade Copet, Eugene Kharitonov, Kushal Lakhotia, Wei-Ning Hsu, Abdelrahman Mohamed, and Emmanuel Dupoux. 2021. Speech resynthesis from discrete disentangled selfsupervised representations. *CoRR*, abs/2104.00355.
Sravya Popuri, Peng-Jen Chen, Changhan Wang, Juan Pino, Yossi Adi, Jiatao Gu, Wei-Ning Hsu, and Ann Lee. 2022. Enhanced direct speech-to-speech translation using self-supervised pre-training and data augmentation. *arXiv preprint arXiv:2204.02967*.
Matt Post, Gaurav Kumar, Adam Lopez, Damianos Karakos, Chris Callison-Burch, and Sanjeev Khudanpur. 2013. Improved speech-to-text translation with the fisher and callhome Spanish-English speech translation corpus. In *Proceedings of the 10th International Workshop on Spoken Language Translation:*
Papers, Heidelberg, Germany.
Prajit Ramachandran, Barret Zoph, and Quoc V. Le.
2017. Searching for activation functions. *CoRR*,
abs/1710.05941.
Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. 2021. Highresolution image synthesis with latent diffusion models. *CoRR*, abs/2112.10752.
Steffen Schneider, Alexei Baevski, Ronan Collobert, and Michael Auli. 2019. wav2vec: Unsupervised Pre-Training for Speech Recognition. In *Proc. Interspeech 2019*, pages 3465–3469.
Peter Shaw, Jakob Uszkoreit, and Ashish Vaswani. 2018.
Self-attention with relative position representations.
arXiv preprint arXiv:1803.02155.
Xu Su, Jiaming Song, Chenlin Meng, and Stefano Ermon. 2022. Dual diffusion implicit bridges for imageto-image translation. *ArXiv*, abs/2203.08382.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Ł ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In *Advances in Neural Information Processing Systems*, volume 30. Curran Associates, Inc.
Wolfgang Wahlster, editor. 2000. *Verbmobil: Foundations of Speech-to-Speech Translation*. Springer, Berlin.
Changhan Wang, Anne Wu, and Juan Miguel Pino. 2020.
Covost 2: A massively multilingual speech-to-text translation corpus. *CoRR*, abs/2007.10310.
Ron J. Weiss, Jan Chorowski, Navdeep Jaitly, Yonghui Wu, and Z. Chen. 2017. Sequence-to-sequence models can directly translate foreign speech. In *Interspeech*.
Xianchao Wu, Katsuhito Sudoh, Kevin Duh, Hajime Tsukada, and Masaaki Nagata. 2011. Extracting preordering rules from predicate-argument structures. In Proceedings of 5th International Joint Conference on Natural Language Processing, pages 29–37, Chiang Mai, Thailand. Asian Federation of Natural Language Processing.
Zhanghao Wu, Zhijian Liu, Ji Lin, Yujun Lin, and Song Han. 2020. Lite transformer with long-short range attention.
Ruibin Xiong, Yunchang Yang, Di He, Kai Zheng, Shuxin Zheng, Chen Xing, Huishuai Zhang, Yanyan Lan, Liwei Wang, and Tie-Yan Liu. 2020. On layer normalization in the transformer architecture. In Proceedings of the 37th International Conference on Machine Learning, ICML'20. JMLR.org.
Yang Zhao, Jiajun Zhang, and Chengqing Zong. 2018.
Exploiting pre-ordering for neural machine translation. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation
(LREC 2018), Miyazaki, Japan. European Language Resources Association (ELRA).
Zaixiang Zheng, Hao Zhou, Shujian Huang, Jiajun Chen, Jingjing Xu, and Lei Li. 2021. Duplex sequence-to-sequence learning for reversible machine translation. In *Advances in Neural Information* Processing Systems, volume 34, pages 21070–21084.
Curran Associates, Inc.
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
✓ A1. Did you describe the limitations of your work?
9
✓ A2. Did you discuss any potential risks of your work?
10
✓ A3. Do the abstract and introduction summarize the paper's main claims?
1
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✗ **Did You Use Or Create Scientific Artifacts?**
Left blank.
B1. Did you cite the creators of artifacts you used?
No response.
B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
No response.
B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
No response.
B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
No response.
B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
No response.
B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
No response.
## C ✓ **Did You Run Computational Experiments?** 7
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
7 The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance.
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? 6
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
7
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
7
## D ✓ **Did You Use Human Annotators (E.G., Crowdworkers) Or Research With Human Participants?** 7
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
Not applicable. Left blank.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? Not applicable. Left blank.
✓ D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
6
✓ D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
6 D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
Not applicable. Left blank. |
jiang-etal-2023-global | Global and Local Hierarchy-aware Contrastive Framework for Implicit Discourse Relation Recognition | https://aclanthology.org/2023.findings-acl.510 | Due to the absence of explicit connectives, implicit discourse relation recognition (IDRR) remains a challenging task in discourse analysis. The critical step for IDRR is to learn high-quality discourse relation representations between two arguments. Recent methods tend to integrate the whole hierarchical information of senses into discourse relation representations for multi-level sense recognition. Nevertheless, they insufficiently incorporate the static hierarchical structure containing all senses (defined as global hierarchy), and ignore the hierarchical sense label sequence corresponding to each instance (defined as local hierarchy). For the purpose of sufficiently exploiting global and local hierarchies of senses to learn better discourse relation representations, we propose a novel GlObal and Local Hierarchy-aware Contrastive Framework (GOLF), to model two kinds of hierarchies with the aid of multi-task learning and contrastive learning. Experimental results on PDTB 2.0 and PDTB 3.0 datasets demonstrate that our method remarkably outperforms current state-of-the-art models at all hierarchical levels. |
## Global And Local Hierarchy-Aware Contrastive Framework For Implicit Discourse Relation Recognition
Yuxin Jiang1,2 Linhan Zhang3 Wei Wang**1,2,4**
1The Hong Kong University of Science and Technology (Guangzhou)
2The Hong Kong University of Science and Technology 3School of Computer Science and Engineering, The University of New South Wales 4Guangzhou Municipal Key Laboratory of Materials Informatics, The Hong Kong University of Science and Technology (Guangzhou)
[email protected], [email protected], [email protected]
## Abstract
Due to the absence of explicit connectives, implicit discourse relation recognition (IDRR) remains a challenging task in discourse analysis. The critical step for IDRR is to learn highquality discourse relation representations between two arguments. Recent methods tend to integrate the whole hierarchical information of senses into discourse relation representations for multi-level sense recognition. Nevertheless, they insufficiently incorporate the static hierarchical structure containing all senses (defined as *global hierarchy*), and ignore the hierarchical sense label sequence corresponding to each instance (defined as *local hierarchy*).
For the purpose of sufficiently exploiting global and local hierarchies of senses to learn better discourse relation representations, we propose a novel GlObal and Local Hierarchy-aware Contrastive Framework (GOLF), to model two kinds of hierarchies with the aid of multi-task learning and *contrastive learning*. Experimental results on PDTB 2.0 and PDTB 3.0 datasets demonstrate that our method remarkably outperforms current state-of-the-art models at all hierarchical levels. 1
## 1 Introduction
Implicit discourse relation recognition (IDRR)
aims to identify logical relations (named senses)
between a pair of text segments (named arguments)
without an explicit connective (e.g., however, because) in the raw text. As a fundamental task in discourse analysis, IDRR has benefitted a wide range of Natural Language Processing (NLP) applications such as question answering (Liakata et al.,
2013), summarization (Cohan et al., 2018), information extraction (Tang et al., 2021), etc.
The critical step for IDRR is to learn high-quality discourse relation representations between two arguments. Early methods are dedicated to manually 1Our code is publicly available at https://github.
com/YJiangcm/GOLF_for_IDRR
Figure 1: An IDRR instance in the PDTB 2.0 corpus
![0_image_0.png](0_image_0.png)
(Prasad et al., 2008). Argument 1 is in italics, and argument 2 is in bold. The implicit connective is not present in the original discourse context but is assigned by annotators. All senses defined in PDTB are organized in a three-layer hierarchical structure (defined as global hierarchy in our paper), and the implicit connectives can be regarded as the most fine-grained senses.
designing shallow linguistic features (Pitler et al.,
2009; Park and Cardie, 2012) or constructing dense representations relying on word embeddings (Liu and Li, 2016; Dai and Huang, 2018; Liu et al.,
2020). Despite their successes, they train multiple models to predict multi-level senses independently, while ignoring that the sense annotation of IDRR
follows a hierarchical structure (as illustrated in Figure 1). To solve this issue, some researchers propose global hierarchy-aware models to exploit the prior probability of label dependencies based on Conditional Random Field (CRF) (Wu et al.,
2020) or the sequence generation model (Wu et al.,
2022).
However, existing hierarchy-aware methods still have two limitations. *Firstly*, though they exploit the fact that there are complex dependencies among senses and such information should be encoded into discourse relation representations, their manners of encoding the holistic hierarchical graph of senses may not be sufficient, since they fail to strengthen the correlation between the discourse relation representation and its associated sense labels, which is Figure 2: Three instances from PDTB 2.0. The sense label sequence of each instance is defined as *local hierarchy* in our paper.
highly useful for classification (Chen et al., 2020a).
Secondly, they only consider the graph of the entire label hierarchy and ignore the benefit of the label sequence corresponding to each instance. As shown in Figure 2, the label sequences of Instances
(1) and (2) differ at both the top and second levels, while the label sequences of Instances (1) and
(3) only differ at the most fine-grained level. The similarity between label sequences provides valuable information for regularizing discourse relation representations, e.g., by ensuring that the distance between representations of Instance (1) and (2) is farther than the distance between representations of Instance (1) and (3). Under such an observation, we categorize the sense hierarchy into global and local hierarchies to fully utilize the hierarchical information in IDRR. We define *global hierarchy* as the entire hierarchical structure containing all senses, while *local hierarchy* is defined as a hierarchical sense label sequence corresponding to each input instance. Therefore, global hierarchy is static and irrelevant to input instances, while local hierarchy is dynamic and pertinent to input instances.
Built on these motivations, we raise our research question: *How to sufficiently incorporate global* and local hierarchies to learn better discourse relation representations? To this end, we propose a novel GlObal and Local Hierarchy-aware Contrastive Framework (GOLF), to inject additional information into the learned relation representation through additional tasks that are aware of the global and local hierarchies, respectively. This is achieved via the joint use of *multi-task learning* and contrastive learning. The key idea of contrastive learning is to narrow the distance between two semantically similar representations, meanwhile, pushing away representations of dissimilar pairs (Chen et al., 2020b; Gao et al., 2021). It has achieved extraordinary successes in representation learning
(He et al., 2020). Finally, our multi-task learning framework consists of classification tasks and two additional contrastive learning tasks. The global hierarchy-aware contrastive learning task explicitly matches textual semantics and label semantics in a text-label joint embedding space, which refines the discourse relation representations to be semantically similar to the target label representations while semantically far away from the incorrect label representations. In the local hierarchy-aware contrastive learning task, we propose a novel scoring function to measure the similarity among sense label sequences. Then the similarity is utilized to guide the distance between discourse relation representations.
The main contributions of this paper are threefold:
- We propose a novel global and local hierarchyaware contrastive framework for IDRR, which sufficiently incorporates global and local hierarchies to learn better discourse relation representations.
- To our best knowledge, our work is the first attempt to meticulously adapt contrastive learning to IDRR considering the global and local hierarchies of senses.
- Comprehensive experiments and thorough analysis demonstrate that our approach delivers state-of-the-art performance on PDTB
2.0 and PDTB 3.0 datasets at all hierarchical levels, and more consistent predictions on multi-level senses.
## 2 Related Work 2.1 Implicit Discourse Relation Recognition
Early studies resort to manually-designed features to classify implicit discourse relations into four toplevel senses (Pitler et al., 2009; Park and Cardie, 2012). With the rapid development of deep learning, many methods explore the direction of building deep neural networks based on static word embeddings. Typical works include shallow CNN (Zhang et al., 2015), LSTM with Multi-Level Attention
(Liu and Li, 2016), knowledge-augmented LSTM
(Dai and Huang, 2018, 2019; Guo et al., 2020),
etc. These works aim to learn better semantic representations of arguments as well as capture the semantic interaction between them. More recently, contextualized representations learned from large pre-trained language models (PLMs) and prompting (Schick and Schütze, 2021) have substantially improved the performance of IDRR. More finedgrained levels of senses have been explored by (Liu et al., 2020; Long and Webber, 2022; Chan et al.,
2023b). Besides, researchers such as (Wu et al.,
2020, 2022) utilize the dependence between hierarchically structured sense labels to predict multilevel senses simultaneously. However, these methods may be insufficient to exploit the global and local hierarchies for discourse relation representations.
## 2.2 Contrastive Learning
Contrastive learning is initially proposed in Computer Vision (CV) as a weak-supervised representation learning method, aiming to pull semantically close samples together and push apart dissimilar samples (He et al., 2020; Chen et al., 2020b). In NLP, contrastive learning has also achieved extraordinary successes in various tasks including semantic textual similarity (STS) (Gao et al., 2021; Shou et al., 2022; Jiang et al., 2022), information retrieval (IR) (Hong et al., 2022), relation extraction
(RE) (Chen et al., 2021), etc. Though intuitively supervised contrastive learning could be applied to IDRR through constructing positive pairs according to the annotated sense labels, it ignores the hierarchical structure of senses. This paper is the first work to meticulously adapt contrastive learning to IDRR considering the global and local hierarchies of senses.
## 3 Problem Definition
Given M hierarchical levels of defined senses S =
(S
1, ..., Sm*, ..., S*M), where S
m is the set of senses at the m-th hierarchical level, and a sample input consisting of two text spans, or xi = (arg1*, arg*2), our model aims to output a sequence of sense yi =
(y 1 i
, ..., ym i
, ..., yM
i), where y m i ∈ S
m.
## 4 Methodology
Figure 3 illustrates the overall architecture of our multi-task learning framework. Beginning at the left part of Figure 3, we utilize a Discourse Relation Encoder to capture the interaction between two input arguments and map them into a discourse relation representation h. After that, the discourse relation representation h is fed into a Staircase Classifier to perform classification at three hierarchical levels dependently. While training, we will use two additional tasks, the global hierarchy-aware contrastive loss L*Global* (in the upper right part of Figure 3) and the local hierarchy-aware contrastive loss L*Local* (in the lower right part of Figure 3)
as additional regularization to refine the discourse relation representation h. During inference, we only use the Discourse Relation Encoder and the Staircase Classifier for classification and *discard* the Global and Local Hierarchy-aware Contrastive Learning modules. Detailed descriptions of our framework are given below.
## 4.1 Discourse Relation Encoder
Given an instance xi = (arg1*, arg*2), we concatenate the two arguments and formulate them as a sequence with special tokens:
[CLS] arg1 [SEP] arg2 [SEP], where [CLS]
and [SEP] denote the beginning and the end of sentences, respectively. Then we feed the sequence through a Transformer (Vaswani et al., 2017) encoder to acquire contextualized token representations H. Previous works (Liu and Li, 2016; Liu et al., 2020) indicate that deep interactions between two arguments play an important role in IDRR. To this end, we propose a Multi-Head Interactive Attention (MHIA) module to facilitate bilateral multiperspective matching between arg1 and arg2. As shown in the left part of Figure 3, we separate H into Harg1 and Harg2
, denoting as the contextualized representations of arg1 and arg2. Then MHIA reuses the Multi-Head Attention (MHA)
in Transformer, but the difference is that we take Harg1 as Query, Harg2 as Key and *Value* and vice versa. The intuition behind MHIA is to simulate human's transposition thinking process: respectively considering each other's focus from the standpoint of arg1 and arg2. Note that the MHIA module may be stacked for L1 layers. Finally, we use the representation of [CLS] in the last layer as the discourse relation representation and denote it as h for simplicity.
## 4.2 Staircase Classifier
Given the discourse relation representation hi of an instance, we propose a "staircase" classifier inspired by (Abbe et al., 2021) to output the label logits t m iat each hierarchical level m ∈ [1, M] in 8050
![3_image_0.png](3_image_0.png)
a top-down manner, where the higher-level logits are used to guide the logits at the current level:
$$t_{i}^{m}=h_{i}W_{1}^{m}+t_{i}^{m-1}W_{2}^{m}+b^{m}\tag{1}$$ where $W_{1}^{m}\in\mathbb{R}^{d_{h}\times|S^{m}|}$, $W_{2}^{m}\in\mathbb{R}|S^{m-1}|\times|S^{m}|$, $\cdot|S^{m}|$, $\cdot|S^{m}|$, $\cdot|S^{m}|$, $\cdot|S^{m}|$, $\cdot|S^{m}|$, $\cdot|S^{m}|$, \(\cdot|S^{m}|
b m ∈ R|Sm|, t 0 i = ⃗0. Then the cross-entropy loss of the classifier is defined as follows:
$${\mathcal{L}}_{C E}=-{\frac{1}{|N|}}\sum_{i\in N}\sum_{m=1}^{M}\mathbb{E}_{{\bar{q}}_{i}^{m}}[\mathrm{LogSoftmax}(t_{i}^{m})]\quad(2)$$
where ⃗ym iis the one-hot encoding of the groundtruth sense label y m i
.
## 4.3 Global Hierarchy-Aware Contrastive Learning
The Global Hierarchy-aware Contrastive Learning module first exploits a Global Hierarchy Encoder to encode global hierarchy into sense label embeddings. Then, it matches the discourse relation representation of an input instance with its corresponding sense label embeddings in a joint embedding space based on contrastive learning.
## 4.3.1 Global Hierarchy Encoder
To encode label hierarchy in a global view, we regard the hierarchical structure of senses as an undirected graph, where each sense corresponds to a graph node. Then we adopt a graph convolutional network (GCN) (Welling and Kipf, 2016) to induce node embeddings for each sense based on properties of their neighborhoods. The adjacent matrix A ∈ R|S|×|S|is defined as follows:
$$A_{ij}=\begin{cases}1,&if\ i=j;\\ 1,&if\ child(i)=j\ or\ child(j)=i;\\ 0,&otherwise.\end{cases}\tag{3}$$
where S is the set of all senses, *i, j* ∈ S,
child(i) = j means that sense j is the subclass of sense i. By setting the number layer of GCN
as L2, given the initial representation of sense i as r 0 i ∈ R
dr, GCN updates the sense embeddings with the following layer-wise propagation rule:
$$r_{i}^{l}=R e L U(\sum_{j\in S}D_{i i}^{-\frac{1}{2}}A_{i j}D_{j j}^{-\frac{1}{2}}r_{j}^{l-1}W^{l}+b^{l})\tag{4}$$
where l ∈ [1, L2], Wl ∈ R
dr×dr and b l ∈ R
dr are P
learnable parameters at the l-th GCN layer, Dii =
j Aij . Finally, we take the output {r L2 i}i∈S of the L2-th layer as the sense embeddings and denote them as {ri}i∈S for simplicity.
8051
## 4.3.2 Semantic Match In A Joint Embedding Space
In this part, we match textual semantics and label semantics in a text-label joint embedding space where correlations between text and labels are exploited, as depicted in the upper right part of Figure 3. We first project the discourse relation representation hi of an instance xi and the sense label embeddings {ri}i∈S into a common latent space by two different Multi-Layer Perception (MLP) Φ1 and Φ2. Then, we apply a contrastive learning loss to capture text-label matching relationships, by regularizing the discourse relation representation to be semantically similar to the target label representations and semantically far away from the incorrect label representations:
$$\mathcal{L}_{G}=-\frac{1}{|N|}\sum_{i\in N}\sum_{j\in S}\mathds{1}_{j\in yi}$$ $$\times\log\frac{\exp\left(sim\Big{(}\Phi_{1}(h_{i}),\Phi_{2}(r_{j})\Big{)}/\tau\right)}{\sum_{j\in S}\exp\left(sim\Big{(}\Phi_{1}(h_{i}),\Phi_{2}(r_{j})\Big{)}/\tau\right)}\tag{5}$$ where $\Phi_{1}$ is the $\tau$-th order of $\tau$-th order of $\tau$.
where N denotes a batch of training instances, yi is the sense label sequence of instance xi, sim(·)
is the cosine similarity function, τ is a temperature hyperparameter. By minimizing the global hierarchy-aware contrastive learning loss, the distribution of discourse relation representations is refined to be similar to the label distribution.
Here we would like to highlight the key differences between our model and LDSGM (Wu et al.,
2022), since we both utilize a GCN to acquire label representations. Firstly, We use a different approach to capture the associations between the acquired label representations and the input text.
In (Wu et al., 2022), the associations are *implicitly* captured using the usual attention mechanism.
In contrast, our model *explicitly* learns them by refining the distribution of discourse relation representations to match the label distribution using contrastive learning. Secondly, our work introduces a novel aspect that has been overlooked by earlier studies including (Wu et al., 2022): the utilization of local hierarchy information, which enables our model to better differentiate between similar discourse relations and achieve further improvements.
## 4.4 Local Hierarchy-Aware Contrastive Learning
Following (Gao et al., 2021), we duplicate a batch of training instances N as N + and feed N as well as N + through our Discourse Relation Encoder E
with diverse dropout augmentations to obtain 2|N| discourse relation representations. Then we apply an MLP layer Φ3 over the representations, which is shown to be beneficial for contrastive learning
(Chen et al., 2020b).
To incorporate local hierarchy into discourse relation representations, it is tempting to directly apply supervised contrastive learning (Gunel et al.,
2021) which requires positive pairs to have identical senses at each hierarchical level m ∈ [1, M]:
$$\begin{split}\mathcal{L}_{L^{\prime}}&=-\frac{1}{|N|}\sum_{i\in N}\sum_{j\in N^{+}}\left(\prod_{m=1}^{M}\mathds{1}_{y_{i}^{m}=y_{j}^{m}}\right)\\ &\quad\times\log\frac{\exp\left(sim\!\left(\Phi_{3}(h_{i}),\Phi_{3}(h_{j})\right)/\tau\right)}{\sum_{j\in N^{+}}\exp\left(sim\!\left(\Phi_{3}(h_{i}),\Phi_{3}(h_{j})\right)/\tau\right)}\end{split}\tag{6}$$ However, Equation (6) is more than more subtle as
However, Equation (6) ignores the more subtle semantic structures of the local hierarchy, since it only admits positive examples as having *identical*,
no account for examples with highly similar annotations. To illustrate, consider Instances (1) and
(3) in Figure 2, where their sense label sequences only differ at the most fine-grained level. However, they are regarded as a negative pair in Equation
(6), rather than a "relatively" positive pair. The standard of selecting positive pairs is too strict in Equation (6), thus may result in semantically similar representations being pulled away. To loosen this restriction, we regard all instance pairs as positive pairs but assign the degree of positive, by using a novel scoring function to calculate the similarity among label sequences yi = (y 1 i
, ..., ym i
, ..., yM
i)
and yj = (y 1 j
, ..., ym j
, ..., yM
j
).
In our case, there exist three hierarchical levels including Top, Second, and Connective, and we use T, S, and C to denote them. Consequently, there are in total K = 6 sub-paths in the hierarchies, i.e.,
P = {T, S, C, TS, SC, TSC}. Then we calculate the Dice similarity coefficient for each sub-path among the hierarchical levels and take the average as the similarity score between yi and yj , which is formulated below:
$$S c o r e(y_{i},y_{j})=\frac{1}{K}\sum_{k=1}^{K}D i c e(P_{i}^{k},P_{j}^{k})\qquad\quad(7)$$
where Dice(*A, B*) = (2|A ∩ B|)/(|A| + |B|), P
k i is the k-th sub-path label set of yi. Taking Instances
(1) and (3) in Figure 2 as examples, their label sequences are *Top: Comparison, Sec: Contrast,*
Conn: but and Top: Comparison, Sec: Contrast, Conn: however, respectively. Then the similarity score would be 16
(
2×1 1+1 +
2×1 1+1 +
2×0 1+1 +
2×2 2+2 +
2×1 2+2 +
2×2 3+3 ) ≈ 0.7.
Finally, our local hierarchy-aware contrastive loss utilizes the similarity scores to guide the distance between discourse relation representations:
$$\mathcal{L}_{L}=-\frac{1}{|N|}\sum_{i\in N}\sum_{j\in N^{+}}Score(y_{i},y_{j})$$ $$\times\log\frac{\exp\left(sim\Big{(}\Phi_{3}(h_{i}),\Phi_{3}(h_{j})\Big{)}/\tau\right)}{\sum_{j\in N^{+}}\exp\left(sim\Big{(}\Phi_{3}(h_{i}),\Phi_{3}(h_{j})\Big{)}/\tau\right)}\tag{8}$$ Compared with Equation (6), Equation (8) consid
Compared with Equation (6), Equation (8) considers more subtle semantic structures of the local hierarchy for selecting positive pairs. It increases the relevance of representations for all similarly labeled instances and only pushes away instances with entirely different local hierarchies. Thus, the local hierarchical information is sufficiently incorporated into discourse relation representations.
The overall training goal is the combination of the classification loss, the global hierarchy-aware contrastive loss, and the local hierarchy-aware contrastive loss:
$${\mathcal{L}}={\mathcal{L}}_{C E}+\lambda_{1}\cdot{\mathcal{L}}_{G}+\lambda_{2}\cdot{\mathcal{L}}_{L}$$
L = LCE + λ1 · LG + λ2 · LL (9)
where λ1 and λ2 are coefficients for the global and local hierarchy-aware contrastive loss, respectively.
We set them as 0.1 and 1.0 while training, according to hyperparameter search (in Appendix C).
## 5 Experiments 5.1 Dataset
The Penn Discourse Treebank 2.0 (PDTB 2.0)
PDTB 2.0 (Prasad et al., 2008) is a large-scale English corpus annotated with information on discourse structure and semantics. PDTB 2.0 has three levels of senses, i.e., classes, types, and sub-types.
Since only part of PDTB instances is annotated with third-level senses, we take the top-level and second-level senses into consideration and regard the implicit connectives as third-level senses. There are 4 top-level senses including Temporal (Temp),
Contingency (Cont), Comparison (Comp), and Expansion (Expa). Further, there exist 16 second-level senses, but we only consider 11 major second-level implicit types following previous works (Liu et al.,
2020; Wu et al., 2022). For the connective classification, we consider all 102 connectives defined in PDTB 2.0.
The Penn Discourse Treebank 3.0 (PDTB 3.0)
PDTB 3.0 (Webber et al., 2019) is the updated version of PDTB 2.0, which includes an additional 13K annotations and corrects some inconsistencies in PDTB 2.0. Following the preprocess of PDTB
2.0, we consider 4 top-level senses, 14 majority second-level senses, and all 186 connectives defined in PDTB 3.0.
Appendix A shows the detailed statistics of the PDTB corpora. We follow early works (Ji and Eisenstein, 2015; Liu et al., 2020; Wu et al., 2022)
using Sections 2-20 of the corpus for training, Sections 0-1 for validation, and Sections 21-22 for testing. In PDTB 2.0 and PDTB 3.0, there are around 1% data samples with multiple annotated senses. Following (Qin et al., 2016), we treat them as separate instances during training for avoiding ambiguity. At test time, a prediction matching one of the gold types is regarded as the correct answer.
## 5.2 Baselines
To validate the effectiveness of our method, we contrast it with the most advanced techniques currently available. As past research generally assessed one dataset (either PDTB 2.0 or PDTB 3.0), we utilize distinct baselines for each. Due to PDTB 3.0's recent release in 2019, there are fewer baselines available for it compared to PDTB 2.0.
## Baselines For Pdtb 2.0
- **NNMA** (Liu and Li, 2016): a neural network with multiple levels of attention.
- **KANN** (Guo et al., 2020): a knowledgeenhanced attentive neural network.
- **PDRR** (Dai and Huang, 2018): a paragraphlevel neural network that models interdependencies between discourse units as well as discourse relation continuity and patterns.
- **IDRR-Con** (Shi and Demberg, 2019): a neural model that leverages the inserted connectives to learn better argument representations.
- **IDRR-C&E** (Dai and Huang, 2019): a neural model leveraging external event knowledge and coreference relations.
- **MTL-MLoss** (Nguyen et al., 2019): a neural model which predicts the labels and connectives simultaneously.
- **HierMTN-CRF** (Wu et al., 2020): a hierarchical multi-task neural network with a conditional random field layer.
- **BERT-FT** (Kishimoto et al., 2020): a model applying three additional training tasks.
- **RoBERTa (Fine-tuning)**: a RoBERTa-based model fine-tuned on three sense levels separately.
- **BMGF-RoBERTa** (Liu et al., 2020): a RoBERTa-based model with bilateral multiperspective matching and global information fusion.
- **LDSGM** (Wu et al., 2022): a label dependence-aware sequence generation model.
- **ChatGPT** (Chan et al., 2023a): a ChatGPTbased method equipped with an in-context learning prompt template.
- **RoBERTa (Fine-tuning)**: a RoBERTa-based model fine-tuned on three sense levels separately.
- **BMGF-RoBERTa** (Liu et al., 2020): we reproduce the model on PDTB 3.0.
- **LDSGM** (Wu et al., 2022): we reproduce the model on PDTB 3.0.
- Secondly, employing RoBERTa-large embeddings in GOLF leads to a significant improvement in its performance. This observation indicates that our GOLF model can effectively benefit from larger pre-trained language models (PLMs).
- **ConnPrompt** (Xiang et al., 2022b): a PLMbased model using a connective-cloze Prompt to transform the IDRR task as a connectivecloze prediction task.
## 5.4 Results Baselines For Pdtb 3.0 5.3 Implementation Details
the gradient to be easily backpropagated to the encoder. The node embeddings of senses with the dimension 100 are randomly initialized by *kaiming_normal* (He et al., 2015). To avoid overfitting, we apply dropout with a rate of 0.1 after each GCN
layer. We adopt AdamW optimizer with a learning rate of 1e-5 and a batch size of 32 to update the model parameters for 15 epochs. The evaluation step is set to 100 and all hyperparameters are determined according to the best average model performance at three levels on the validation set.
All experiments are performed five times with different random seeds and all reported results are averaged performance.
Multi-label Classification Comparison The primary experimental results are presented in Table 1, which enables us to draw the following conclusions:
- Firstly, our GOLF model has achieved new state-of-the-art performance across all three levels, as evidenced by both macro-F1 and accuracy metrics. Specifically, on PDTB 2.0, GOLF (base) outperforms the current state-ofthe-art LDSGM model (Wu et al., 2022) by 2.03%, 1.25%, and 1.11% in three levels, respectively, in terms of macro-F1. Additionally, it exhibits 1.34%, 0.83%, and 0.65% improvements over the current best results in terms of accuracy. Moreover, in the case of PDTB
3.0, GOLF (base) also outperforms the current state-of-the-art ConnPrompt model (Xiang et al., 2022b) by 1.37% F1 and 1.19%
accuracy at the top level.
- **MANF** (Xiang et al., 2022a): a multi-attentive neural fusion model to encode and fuse both semantic connection and linguistic evidence.
We implement our model based on Huggingface's transformers (Wolf et al., 2020) and use the pretrained RoBERTa (Liu et al., 2019) (base or large version) as our Transformer encoder. The layer number of MHIA and GCN are both set to 2. We set temperature τ in contrastive learning as 0.1. We set Φ1, Φ2, Φ3 as a simple MLP with one hidden layer and *tanh* activation function, which enables
- Finally, despite the impressive performance of recent large language models (LLMs) such as ChatGPT (OpenAI, 2022) in few-shot and zero-shot learning for various understanding and reasoning tasks (Bang et al., 2023; Jiang et al., 2023), they still lag behind our GOLF
(base) model by approximately 30% in PDTB
2.0. This difference suggests that ChatGPT
may struggle to comprehend the abstract sense
| Model | Embedding | Top-level | Second-level | Connective | | | |
|----------------------------------|-------------|-------------|----------------|--------------|-------|-------|-------|
| F1 | Acc | F1 | Acc | F1 | Acc | | |
| PDTB 2.0 | | | | | | | |
| NNMA (Liu and Li, 2016) | GloVe | 46.29 | 57.57 | - | - | - | - |
| KANN (Guo et al., 2020) | GloVe | 47.90 | 57.25 | - | - | - | - |
| PDRR (Dai and Huang, 2018) | word2vec | 48.82 | 57.44 | - | - | - | - |
| IDRR-Con (Shi and Demberg, 2019) | word2vec | 46.40 | 61.42 | - | 47.83 | - | - |
| IDRR-C&E (Dai and Huang, 2019) | ELMo | 52.89 | 59.66 | 33.41 | 48.23 | - | - |
| MTL-MLoss (Nguyen et al., 2019) | ELMo | 53.00 | - | - | 49.95 | - | - |
| HierMTN-CRF (Wu et al., 2020) | BERT | 55.72 | 65.26 | 33.91 | 53.34 | 10.37 | 30.00 |
| BERT-FT (Kishimoto et al., 2020) | BERT | 58.48 | 65.26 | - | 54.32 | - | - |
| RoBERTa (Fine-tuning) | RoBERTa | 62.96 | 69.98 | 40.34 | 59.87 | 10.06 | 31.45 |
| BMGF-RoBERTa (Liu et al., 2020) | RoBERTa | 63.39 | 69.06 | - | 58.13 | - | - |
| LDSGM (Wu et al., 2022) | RoBERTa | 63.73 | 71.18 | 40.49 | 60.33 | 10.68 | 32.20 |
| ChatGPT (Chan et al., 2023a) | - | 36.11 | 44.18 | 16.20 | 24.54 | - | - |
| GOLF (base) | RoBERTa | 65.76 | 72.52 | 41.74 | 61.16 | 11.79 | 32.85 |
| GOLF (large) | RoBERTa | 69.60 | 74.67 | 47.91 | 63.91 | 14.59 | 42.35 |
| PDTB 3.0 | | | | | | | |
| MANF (Xiang et al., 2022a) | BERT | 56.63 | 64.04 | - | - | - | - |
| RoBERTa (Fine-tuning) | RoBERTa | 68.31 | 71.59 | 50.63 | 60.14 | 14.72 | 39.43 |
| BMGF-RoBERTa (Liu et al., 2020) | RoBERTa | 63.39 | 69.06 | - | 58.13 | - | - |
| LDSGM (Wu et al., 2022) | RoBERTa | 68.73 | 73.18 | 53.49 | 61.33 | 17.68 | 40.20 |
| ConnPrompt (Xiang et al., 2022b) | RoBERTa | 69.51 | 73.84 | - | - | - | - |
| GOLF (base) | RoBERTa | 70.88 | 75.03 | 55.30 | 63.57 | 19.21 | 42.54 |
| GOLF (large) | RoBERTa | 74.21 | 76.39 | 60.11 | 66.42 | 20.66 | 45.12 |
Table 1: Model comparison of multi-class classification on PDTB 2.0 and PDTB 3.0 in terms of macro-averaged F1
(%) and accuracy (%).
| Model | Exp. (53%) | Cont. (27%) | Comp. (14%) | Temp. (3%) |
|-------------------------|--------------|---------------|---------------|--------------|
| BMGF (Liu et al., 2020) | 77.66 | 60.98 | 59.44 | 50.26 |
| LDSGM (Wu et al., 2022) | 78.47 | 64.37 | 61.66 | 50.88 |
| GOLF (base) | 79.41 | 62.90 | 67.71 | 54.55 |
| GOLF (large) | 80.96 | 66.54 | 69.47 | 61.40 |
Table 2: Label-wise F1 scores (%) for the top-level senses of PDTB 2.0. The proportion of each sense is listed below its name.
of each discourse relation and extract the relevant language features from the text. Therefore, implicit discourse relation recognition remains a challenging and crucial task for the NLP community, which requires further exploration.
Label-wise Classification Comparison Here we present an evaluation of GOLF's performance on PDTB 2.0 using label-wise F1 comparison for top-level and second-level senses. Table 2 showcases the label-wise F1 comparison for the toplevel senses, demonstrating that GOLF significantly improves the performance of minority senses such as *Temp* and *Comp*. In Table 3, we compare GOLF with the current state-of-the-art models for the second-level senses. Our results show
| Second-level Senses | BMGF | LDSGM | GOLF (base) | GOLF (large) |
|-------------------------|--------|---------|---------------|----------------|
| Exp.Restatement (20%) | 53.83 | 58.06 | 59.84 | 59.03 |
| Exp.Conjunction (19%) | 60.17 | 57.91 | 60.28 | 61.54 |
| Exp.Instantiation (12%) | 67.96 | 72.60 | 75.36 | 77.98 |
| Exp.Alternative (1%) | 60.00 | 63.46 | 63.49 | 61.54 |
| Exp.List (1%) | 0.00 | 8.98 | 27.78 | 43.48 |
| Cont.Cause (26%) | 59.60 | 64.36 | 65.35 | 65.98 |
| Cont.Pragmatic (1%) | 0.00 | 0.00 | 0.00 | 0.00 |
| Comp.Contrast (12%) | 59.75 | 63.52 | 61.95 | 67.57 |
| Comp.Concession (2%) | 0.00 | 0.00 | 0.00 | 11.11 |
| Temp.Asynchronous (5%) | 56.18 | 56.47 | 63.82 | 65.49 |
| Temp.Synchrony (1%) | 0.00 | 0.00 | 0.00 | 13.33 |
that GOLF (base) enhances the F1 performance of most second-level senses, with a notable increase in *Expa.List* from 8.98% to 27.78%. Furthermore, by using RoBERTa-large as embeddings, our GOLF (large) model breaks the bottleneck of previous work in two few-shot second-level senses, Temp.Synchrony and *Comp.Concession*. To further validate our model's ability of deriving better discourse relation representations, we compare the generated representations of GOLF with those of current state-of-the-art models for both top-level and second-level senses in Appendix B.
Model Top-level Second-level Connective **Top-Sec Top-Sec-Conn** F1 Acc F1 Acc F1 Acc
GOLF **65.76 72.52 41.74 61.16 11.79 32.85 59.65 27.55**
-w/o MHIA 64.97 71.85 41.07 60.52 10.80 31.69 58.52 26.18
-w/o staircase 65.43 72.25 41.12 60.81 10.81 31.40 58.43 26.08 -w/o MHIA and staircase 64.77 71.98 40.99 60.10 10.76 31.65 58.49 26.22
-w/o LG 65.37 71.61 40.78 60.40 11.56 32.73 59.01 26.86 -w/o LL 64.34 71.32 40.24 60.42 10.76 31.88 58.69 26.37
-w/o LG and LL 63.85 71.04 39.98 59.92 10.72 30.47 58.23 25.89
-*r.p.* LL with LL′ 64.58 71.56 41.20 61.07 11.43 32.55 59.24 27.05
Multi-level Consistency Comparison Following (Wu et al., 2022), we evaluate the consistency among multi-level sense predictions via two metrics: 1) Top-Sec: the percentage of correct predictions at both the top-level and second-level senses; 2) Top-Sec-Con: the percentage of correct predictions across all three level senses. Our model's results, as displayed in Table 5, demonstrate more consistent predictions than existing state-of-the-art models in both Top-Sec and Top-Sec-Con, verifying the effectiveness of our model in integrating global and local hierarchical information.
| Model | Top-Sec | Top-Sec-Conn |
|--------------|-----------|----------------|
| PDTB 2.0 | | |
| HierMTN-CRF | 46.29 | 19.15 |
| BMGF-RoBERTa | 47.06 | 21.37 |
| LDSGM | 58.61 | 26.85 |
| GOLF (base) | 59.65 | 27.55 |
| GOLF (large) | 61.79 | 36.00 |
| PDTB 3.0 | | |
| HierMTN-CRF | 50.19 | 27.82 |
| BMGF-RoBERTa | 52.33 | 29.16 |
| LDSGM | 60.32 | 34.57 |
| GOLF (base) | 61.31 | 36.97 |
| GOLF (large) | 64.86 | 38.26 |
## 6 Ablation Study
Firstly, we investigate the efficacy of individual modules in our framework. For this purpose, we remove the Multi-Head Interactive Attention (MHIA), the "staircase" in Classifier, the Global Hierarchy-aware Contrastive loss LG, and the Local Hierarchy-aware Contrastive loss LL from GOLF one by one. Note that removing the "staircase" in Classifier means that we keep the crossentropy loss but remove the dependence between logits from different hierarchical levels. Table 4 indicates that eliminating any of the four modules would hurt the performance across all three levels and reduce the consistency among multi-level label predictions. At the same time, the Local Hierarchyaware Contrastive loss contributes mostly. Besides, removing both the Global Hierarchy-aware Contrastive loss LG and the Local Hierarchy-aware Contrastive loss LL significantly hurts the performance. The results show that incorporating label hierarchies from both the global and local perspectives is indeed beneficial. Secondly, we replace the Local Hierarchy-aware Contrastive loss LL (Equation (8)) with the hard-label version LL′ (Equation
(6)) and find that the performance drops notably.
It verifies the usefulness of the scoring function in Equation 7, which considers more subtle semantic structures of local hierarchy. In Appendix C, We also analyze the effects of various hyperparameters consisting of the number layer of MHIA and GCN,
the coefficients λ1 and λ2, and the temperature τ .
## 7 Conclusion
In this paper, we present a novel Global and Local Hierarchy-aware Contrastive Framework for implicit discourse relation recognition (IDRR). It can sufficiently incorporate global and local hierarchies to learn better discourse relation representations with the aid of multi-task learning and contrastive learning. Compared with current state-of-the-art approaches, our model empirically reaches better performance at all hierarchical levels of the PDTB
dataset and achieves more consistent predictions on multi-level senses.
## Limitations
In this section, we illustrate the limitations of our method, which could be summarized into the following two aspects.
Firstly, since the cumbersome data annotation leads to few publicly available datasets of IDRR
tasks, we only conduct experiments on English corpora including PDTB 2.0 and PDTB 3.0. In the future, we plan to comprehensively evaluate our model on more datasets and datasets in other languages.
Secondly, considering that instances of PDTB
are contained in paragraphs of the Wall Street Journal articles, our approach ignores wider paragraphlevel contexts beyond the two discourse arguments.
As shown in (Dai and Huang, 2018), positioning discourse arguments in their wider context of a paragraph may further benefit implicit discourse relation recognition. It is worth exploring how to effectively build wider-context-informed discourse relation representations and capture the overall discourse structure from the paragraph level.
## Ethics Statement
Since our method relies on pre-trained language models, it may run the danger of inheriting and propagating some of the models' negative biases from the data they have been pre-trained on (Bender et al., 2021). Furthermore, we do not see any other potential risks.
## Acknowledgments
W. Wang was supported by HKUST(GZ) Grant G0101000028, GZU-HKUST Joint Research Collaboration Grant GZU22EG04, and Guangzhou Municipal Science and Technology Project (No.
2023A03J0003).
## References
Emmanuel Abbe, Enric Boix-Adsera, Matthew S Brennan, Guy Bresler, and Dheeraj Nagaraj. 2021. The staircase property: How hierarchical structure can guide deep learning. *Advances in Neural Information Processing Systems*, 34:26989–27002.
Yejin Bang, Samuel Cahyawijaya, Nayeon Lee, Wenliang Dai, Dan Su, Bryan Wilie, Holy Lovenia, Ziwei Ji, Tiezheng Yu, Willy Chung, Quyet V. Do, Yan Xu, and Pascale Fung. 2023. A multitask, multilingual, multimodal evaluation of chatgpt on reasoning, hallucination, and interactivity. *CoRR*, abs/2302.04023.
Emily M. Bender, Timnit Gebru, Angelina McMillanMajor, and Shmargaret Shmitchell. 2021. On the dangers of stochastic parrots: Can language models be too big? In FAccT '21: 2021 ACM Conference on Fairness, Accountability, and Transparency, Virtual Event / Toronto, Canada, March 3-10, 2021, pages 610–623. ACM.
Chunkit Chan, Jiayang Cheng, Weiqi Wang, Yuxin Jiang, Tianqing Fang, Xin Liu, and Yangqiu Song.
2023a. Chatgpt evaluation on sentence level relations: A focus on temporal, causal, and discourse relations. *CoRR*, abs/2304.14827.
Chunkit Chan, Xin Liu, Jiayang Cheng, Zihan Li, Yangqiu Song, Ginny Y. Wong, and Simon See.
2023b. Discoprompt: Path prediction prompt tuning for implicit discourse relation recognition. *CoRR*,
abs/2305.03973.
Boli Chen, Xin Huang, Lin Xiao, Zixin Cai, and Liping Jing. 2020a. Hyperbolic interaction model for hierarchical multi-label classification. In *Proceedings of the AAAI Conference on Artificial Intelligence*,
volume 34, pages 7496–7503.
Tao Chen, Haizhou Shi, Siliang Tang, Zhigang Chen, Fei Wu, and Yueting Zhuang. 2021. CIL: contrastive instance learning framework for distantly supervised relation extraction. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, ACL/IJCNLP
2021, (Volume 1: Long Papers), Virtual Event, August 1-6, 2021, pages 6191–6200. Association for Computational Linguistics.
Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey E. Hinton. 2020b. A simple framework for contrastive learning of visual representations. In Proceedings of the 37th International Conference on Machine Learning, ICML 2020, 13-18 July 2020, Virtual Event, volume 119 of *Proceedings of Machine* Learning Research, pages 1597–1607. PMLR.
Arman Cohan, Franck Dernoncourt, Doo Soon Kim, Trung Bui, Seokhwan Kim, Walter Chang, and Nazli Goharian. 2018. A discourse-aware attention model for abstractive summarization of long documents. In Proceedings of the 2018 Conference of the North
American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT, New Orleans, Louisiana, USA, June 1-6, 2018, Volume 2 (Short Papers), pages 615–621.
Association for Computational Linguistics.
Zeyu Dai and Ruihong Huang. 2018. Improving implicit discourse relation classification by modeling inter-dependencies of discourse units in a paragraph.
In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2018, New Orleans, Louisiana, USA,
June 1-6, 2018, Volume 1 (Long Papers), pages 141–
151. Association for Computational Linguistics.
Zeyu Dai and Ruihong Huang. 2019. A regularization approach for incorporating event knowledge and coreference relations into neural discourse parsing.
In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2976–2987.
Tianyu Gao, Xingcheng Yao, and Danqi Chen. 2021.
SimCSE: Simple contrastive learning of sentence embeddings. In *Empirical Methods in Natural Language Processing (EMNLP)*.
Beliz Gunel, Jingfei Du, Alexis Conneau, and Veselin Stoyanov. 2021. Supervised contrastive learning for pre-trained language model fine-tuning. In 9th International Conference on Learning Representations, ICLR 2021, Virtual Event, Austria, May 3-7, 2021.
OpenReview.net.
Fengyu Guo, Ruifang He, Jianwu Dang, and Jian Wang.
2020. Working memory-driven neural networks with a novel knowledge enhancement paradigm for implicit discourse relation recognition. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pages 7822–7829.
Kaiming He, Haoqi Fan, Yuxin Wu, Saining Xie, and Ross Girshick. 2020. Momentum contrast for unsupervised visual representation learning. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 9729–9738.
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2015. Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In *Proceedings of the IEEE international conference* on computer vision, pages 1026–1034.
Wu Hong, Zhuosheng Zhang, Jinyuan Wang, and Hai Zhao. 2022. Sentence-aware contrastive learning for open-domain passage retrieval. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers),
pages 1062–1074.
Yangfeng Ji and Jacob Eisenstein. 2015. One vector is not enough: Entity-augmented distributed semantics for discourse relations. Trans. Assoc. Comput.
Linguistics, 3:329–344.
Yuxin Jiang, Chunkit Chan, Mingyang Chen, and Wei Wang. 2023. Lion: Adversarial distillation of closed-source large language model. *CoRR*,
abs/2305.12870.
Yuxin Jiang, Linhan Zhang, and Wei Wang. 2022. Improved universal sentence embeddings with promptbased contrastive learning and energy-based learning.
In Findings of the Association for Computational Linguistics: EMNLP 2022, pages 3021–3035, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
Yudai Kishimoto, Yugo Murawaki, and Sadao Kurohashi. 2020. Adapting BERT to implicit discourse relation classification with a focus on discourse connectives. In Proceedings of The 12th Language Resources and Evaluation Conference, LREC 2020, Marseille, France, May 11-16, 2020, pages 1152–
1158. European Language Resources Association.
Maria Liakata, Simon Dobnik, Shyamasree Saha, Colin Batchelor, and Dietrich Rebholz Schuhmann. 2013.
A discourse-driven content model for summarising scientific articles evaluated in a complex question answering task. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 747–757.
Xin Liu, Jiefu Ou, Yangqiu Song, and Xin Jiang. 2020.
On the importance of word and sentence representation learning in implicit discourse relation classification. In *Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence,*
IJCAI 2020, pages 3830–3836. ijcai.org.
Yang Liu and Sujian Li. 2016. Recognizing implicit discourse relations via repeated reading: Neural networks with multi-level attention. In *Proceedings of* the 2016 Conference on Empirical Methods in Natural Language Processing, EMNLP 2016, Austin, Texas, USA, November 1-4, 2016, pages 1224–1233.
The Association for Computational Linguistics.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized BERT pretraining approach. *CoRR*, abs/1907.11692.
Wanqiu Long and Bonnie Webber. 2022. Facilitating contrastive learning of discourse relational senses by exploiting the hierarchy of sense relations. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, EMNLP
2022, Abu Dhabi, United Arab Emirates, December 7-11, 2022, pages 10704–10716. Association for Computational Linguistics.
Linh The Nguyen, Ngo Van Linh, Khoat Than, and Thien Huu Nguyen. 2019. Employing the correspondence of relations and connectives to identify implicit discourse relations via label embeddings. In Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019, Florence, Italy, July 28- August 2, 2019, Volume 1: Long Papers, pages 4201–4207. Association for Computational Linguistics.
TB OpenAI. 2022. Chatgpt: Optimizing language models for dialogue. *OpenAI*.
Joonsuk Park and Claire Cardie. 2012. Improving implicit discourse relation recognition through feature set optimization. In *Proceedings of the 13th Annual* Meeting of the Special Interest Group on Discourse and Dialogue, pages 108–112.
Emily Pitler, Annie Louis, and Ani Nenkova. 2009. Automatic sense prediction for implicit discourse relations in text.
Rashmi Prasad, Nikhil Dinesh, Alan Lee, Eleni Miltsakaki, Livio Robaldo, Aravind Joshi, and Bonnie Webber. 2008. The penn discourse treebank 2.0. In Proceedings of the Sixth International Conference on Language Resources and Evaluation (LREC'08).
Lianhui Qin, Zhisong Zhang, and Hai Zhao. 2016. Shallow discourse parsing using convolutional neural network. In *Proceedings of the 20th SIGNLL Conference on Computational Natural Language Learning:*
Shared Task, CoNLL 2016, Berlin, Germany, August 7-12, 2016, pages 70–77. ACL.
Timo Schick and Hinrich Schütze. 2021. It's not just size that matters: Small language models are also fewshot learners. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2021, Online, June 6-11, 2021, pages 2339–2352. Association for Computational Linguistics.
Wei Shi and Vera Demberg. 2019. Learning to explicitate connectives with seq2seq network for implicit discourse relation classification. In *Proceedings of* the 13th International Conference on Computational Semantics, IWCS 2019, Long Papers, Gothenburg, Sweden, May 23-27 May, 2019, pages 188–199. Association for Computational Linguistics.
Ziyi Shou, Yuxin Jiang, and Fangzhen Lin. 2022. AMRDA: Data augmentation by Abstract Meaning Representation. In *Findings of the Association for Computational Linguistics: ACL 2022*, pages 3082–3098, Dublin, Ireland. Association for Computational Linguistics.
Jialong Tang, Hongyu Lin, Meng Liao, Yaojie Lu, Xianpei Han, Le Sun, Weijian Xie, and Jin Xu. 2021.
From discourse to narrative: Knowledge projection for event relation extraction. In *Proceedings of the* 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, ACL/IJCNLP 2021, (Volume 1: Long Papers), Virtual Event, August 1-6, 2021, pages 732–742. Association for Computational Linguistics.
Laurens Van der Maaten and Geoffrey Hinton. 2008.
Visualizing data using t-sne. *Journal of machine* learning research, 9(11).
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In *Advances in Neural Information Processing Systems 30: Annual Conference on Neural* Information Processing Systems 2017, December 4-9, 2017, Long Beach, CA, USA, pages 5998–6008.
Bonnie Webber, Rashmi Prasad, Alan Lee, and Aravind Joshi. 2019. The penn discourse treebank 3.0 annotation manual. *Philadelphia, University of Pennsylvania*, 35:108.
Max Welling and Thomas N Kipf. 2016. Semisupervised classification with graph convolutional networks. In *J. International Conference on Learning Representations (ICLR 2017)*.
Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, et al. 2020. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 conference on empirical methods in natural language processing: system demonstrations, pages 38–45.
Changxing Wu, Liuwen Cao, Yubin Ge, Yang Liu, Min Zhang, and Jinsong Su. 2022. A label dependenceaware sequence generation model for multi-level implicit discourse relation recognition. In *Proceedings* of the AAAI Conference on Artificial Intelligence, volume 36, pages 11486–11494.
Changxing Wu, Chaowen Hu, Ruochen Li, Hongyu Lin, and Jinsong Su. 2020. Hierarchical multi-task learning with CRF for implicit discourse relation recognition. *Knowl. Based Syst.*, 195:105637.
Wei Xiang, Bang Wang, Lu Dai, and Yijun Mo. 2022a.
Encoding and fusing semantic connection and linguistic evidence for implicit discourse relation recognition. In *Findings of the Association for Computational Linguistics: ACL 2022*, pages 3247–3257, Dublin, Ireland. Association for Computational Linguistics.
Wei Xiang, Zhenglin Wang, Lu Dai, and Bang Wang.
2022b. ConnPrompt: Connective-cloze prompt learning for implicit discourse relation recognition.
In *Proceedings of the 29th International Conference on Computational Linguistics*, pages 902–911, Gyeongju, Republic of Korea. International Committee on Computational Linguistics.
Biao Zhang, Jinsong Su, Deyi Xiong, Yaojie Lu, Hong Duan, and Junfeng Yao. 2015. Shallow convolutional neural network for implicit discourse relation recognition. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 2230–2235.
## A Data Statistics
Second-level Senses Train Dev Test Exp.Conjunction 2,814 258 200
Exp.Restatement 2,430 260 211
Exp.Instantiation 1,100 106 118 Exp.List 330 9 12
Exp.Alternative 150 10 9
Cont.Cause 3,234 281 269
Cont.Pragmatic cause 51 6 7
Comp.Contrast 1,569 166 128 Comp.Concession 181 15 17
Temp.Asynchronous 540 46 54
Temp.Synchrony 148 8 14 Total 12,547 1,165 1,039
Table 6: The data statistics of second-level senses in PDTB 2.0.
Table 7: The data statistics of second-level senses in PDTB 3.0.
## B Visualization Of Discourse Relation Representations
| Second-level Senses | Train | Dev | Test |
|-----------------------|---------|-------|--------|
| Exp.Conjunction | 3,566 | 298 | 237 |
| Exp.Level-of-detail | 2,698 | 274 | 214 |
| Exp.Instantiation | 1,215 | 117 | 127 |
| Exp.Manner | 1,159 | 57 | 53 |
| Exp.Substitution | 405 | 32 | 31 |
| Exp.Equivalence | 256 | 25 | 30 |
| Cont.Cause | 4,280 | 423 | 388 |
| Cont.Purpose | 688 | 66 | 59 |
| Cont.Cause+Belief | 140 | 13 | 14 |
| Cont.Condition | 138 | 17 | 14 |
| Comp.Concession | 1,159 | 105 | 97 |
| Comp.Contrast | 813 | 87 | 62 |
| Temp.Asynchronous | 1,025 | 103 | 105 |
| Temp.Synchronous | 331 | 24 | 35 |
| Total | 17,873 | 1,641 | 1,466 |
Here we investigate the quality of discourse relation representations generated by our GOLF model with visualization aids. Figure 4 depicts the 2D t-SNE
(Van der Maaten and Hinton, 2008) visualization of discourse relation representations for top-level and second-level senses on the PDTB 2.0 test set.
As we can see, compared with current state-of-theart models BMGF-RoBERTa (Liu et al., 2020) and LDSGM (Wu et al., 2022), our model can generate more centralized discourse relation representations belonging to the same senses (e.g., *Temporal* at the top level, marked in red), and more separated representations belonging to different senses. It verifies our model's capability of deriving better discourse relation representations.
## C Effects Of Hyperparameters
Here we investigate the effects of various hyperparameters on the development set of PDTB 2.0.
These hyperparameters include the number layer L1 of MHIA (Figure 5), the number layer L2 of GCN (Figure 6), the coefficient λ1 of the global hierarchy-aware contrastive loss (Figure 7), the coefficient λ2 of the local hierarchy-aware contrastive loss (Figure 8), and the temperature τ in contrastive learning (Figure 9). Note that we only change one hyperparameter at a time.
## D Label-Wise Classification On Pdtb 3.0
Table 8: Label-wise F1 scores (%) for the top-level senses of PDTB 3.0. The proportion of each sense is listed behind its name.
Second-level Senses **GOLF**
(base)
GOLF
(large)
Exp.Conjunction (16%) **64.09** 63.69 Exp.Level-of-detail (15%) 52.60 **59.29**
Exp.Instantiation (9%) 72.53 **73.77**
Exp.Manner (4%) **63.53** 62.61
Exp.Substitution (2%) 66.67 **72.22**
Exp.Equivalence (2%) **25.39** 24.00
Cont.Cause (26%) 69.47 **72.49** Cont.Purpose (4%) 71.60 **72.73**
Cont.Cause+Belief (1%) 0.00 0.00
Cont.Condition (1%) 66.67 **92.31**
Comp.Concession (7%) 59.09 **63.37** Comp.Contrast (4%) 43.33 **60.27** Temp.Asynchronous (7%) 68.79 **77.55**
Temp.Synchronous (2%) 41.00 **42.27**
Table 9: Label-wise F1 scores (%) for the second-level senses of PDTB 3.0. The proportion of each sense is listed behind its name.
| Top-level Senses | GOLF (base) | GOLF (large) |
|--------------------|---------------|----------------|
| Exp (47%) | 80.01 | 80.50 |
| Cont (32%) | 74.54 | 74.83 |
| Comp (11%) | 64.67 | 71.59 |
| Temp (10%) | 64.80 | 70.92 |
![13_image_0.png](13_image_0.png)
![13_image_1.png](13_image_1.png)
![13_image_2.png](13_image_2.png)
![13_image_3.png](13_image_3.png)
![13_image_4.png](13_image_4.png)
Acc
![14_image_0.png](14_image_0.png)
![14_image_2.png](14_image_2.png)
![14_image_1.png](14_image_1.png)
I. I
Acc
![14_image_3.png](14_image_3.png)
![14_image_4.png](14_image_4.png) ![14_image_5.png](14_image_5.png)
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
✓ A1. Did you describe the limitations of your work?
The Section named "Limitations"
✓ A2. Did you discuss any potential risks of your work?
The Section named "Ethics Statement"
✓ A3. Do the abstract and introduction summarize the paper's main claims?
In the Abstract and Section 1
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✓ **Did You Use Or Create Scientific Artifacts?** In Section 5.3 And 5.4
✓ B1. Did you cite the creators of artifacts you used?
In Section 5.3 and 5.4
B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
Not applicable. Left blank.
B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
Not applicable. Left blank.
B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
Not applicable. Left blank.
B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
Not applicable. Left blank.
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
In Section 5.1 and Appendix A
## C ✓ **Did You Run Computational Experiments?** In Section 5
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
In Section 5.3 The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance.
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
In Section 5.3 and Appendix B
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
In Section 5.3 and 5.4
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
In Section 5.3 D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
No response.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? No response.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
No response.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
No response.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
No response. |
gong-etal-2023-prequant | {P}re{Q}uant: A Task-agnostic Quantization Approach for Pre-trained Language Models | https://aclanthology.org/2023.findings-acl.511 | While transformer-based pre-trained language models (PLMs) have dominated a number of NLP applications, these models are heavy to deploy and expensive to use. Therefore, effectively compressing large-scale PLMs becomes an increasingly important problem. Quantization, which represents high-precision tensors with low-bit fix-point format, is a viable solution. However, most existing quantization methods are task-specific, requiring customized training and quantization with a large number of trainable parameters on each individual task. Inspired by the observation that the over-parameterization nature of PLMs makes it possible to freeze most of the parameters during the fine-tuning stage, in this work, we propose a novel {``}quantize before fine-tuning{''} framework, PreQuant, that differs from both quantization-aware training and post-training quantization. {pasted macro {`}OUR{'}} is compatible with various quantization strategies, with outlier-aware parameter-efficient fine-tuning incorporated to correct the induced quantization error. We demonstrate the effectiveness of PreQuant on the GLUE benchmark using BERT, RoBERTa, and T5. We also provide an empirical investigation into the workflow of PreQuant, which sheds light on its efficacy. | # Prequant: A Task-Agnostic Quantization Approach For Pre-Trained Language Models
Zhuocheng Gong1∗
, Jiahao Liu2, Qifan Wang3, Yang Yang2, Jingang Wang2**, Wei Wu**2 Yunsen Xian2, Dongyan Zhao**1,4,5**†
, Rui Yan6,7†
1Wangxuan Institute of Computer Technology, Peking University 2Meituan; 3Meta AI
4National Key Laboratory of General Artificial Intelligence 5Beijing Institute for General Artificial Intelligence 6Gaoling School of Artificial Intelligence, Renmin University of China 7Engineering Research Center of Next-Generation Intelligent Search and Recommendation, Ministry of Education
{gzhch,zhaody}@pku.edu.cn, [email protected], [email protected]
{liujiahao12,yangyang113,wangjingang02,xianyunsen}@meituan.com [email protected]
## Abstract
While transformer-based pre-trained language models (PLMs) have dominated a number of NLP applications, these models are heavy to deploy and expensive to use. Therefore, effectively compressing large-scale PLMs becomes an increasingly important problem. Quantization, which represents high-precision tensors with low-bit fix-point format, is a viable solution. However, most existing quantization methods are task-specific, requiring customized training and quantization with a large number of trainable parameters on each individual task. Inspired by the observation that the overparameterization nature of PLMs makes it possible to freeze most of the parameters during the fine-tuning stage, in this work, we propose a novel "quantize before fine-tuning" framework, PreQuant, that differs from both quantizationaware training and post-training quantization.
PreQuant is compatible with various quantization strategies, with outlier-aware parameterefficient fine-tuning incorporated to correct the induced quantization error. We demonstrate the effectiveness of PreQuant on the GLUE
benchmark using BERT, RoBERTa, and T5.
We also provide an empirical investigation into the workflow of PreQuant, which sheds light on its efficacy.
## 1 Introduction
Pre-trained language models (PLMs) have shown superior performance in various NLP applications.
Despite their impressive success, these transformerbased models typically contain hundreds of millions of parameters. Massive model scale is becoming an increasing burden, preventing researchers from making full use of large-scale PLMs. According to a recent study, only 0.5% to 4% of research papers published at the recent five NLP conferences tend to adopt large PLMs (PLMs with over a billion parameters) (Ding et al., 2022). This suggests that the inefficiency of deploying large PLMs is hampering the development of NLP research.
Therefore, compressing PLMs becomes an urgent and important problem.
Various model compression methods have been proposed, such as knowledge distillation (Jiao et al.,
2020; Sanh et al., 2019; Wang et al., 2021; Passban et al., 2021), weight sharing (Lan et al., 2019),
network pruning (Liang et al., 2021; Gordon et al., 2020; Li et al., 2021), and quantization (Tao et al.,
2022; Zhang et al., 2020; Bai et al., 2021; Kim et al., 2021). Among these compression methods, quantization is a promising solution. The core idea of quantization is to use low bit precision to store weight and activation tensors, and use fixed-point arithmetic to speed up inference. There are some prior works on quantizing PLMs covering different strategies and granularities. However, these quantization methods generally neglect the characteristics of PLMs - the distinction between fine-tuning a model and training a model from scratch - but treat quantizing PLMs no different from quantizing regular neural networks. In other words, these methods are task-specific, which design customized quantization for PLMs. There are two main limitations:
first, these task-specific methods need to conduct both quantization and fine-tuning for each downstream task, with the quantization being applied either during or after the fine-tuning stage, which is inefficient; Second, the number of trainable parameters are still very large during fine-tuning, which is computational expensive.
In this work, we consider the quantization pipeline specially for the pre-training scenario.
Our motivation starts from the distinction between
"training from scratch" and "pre-training then finetuning". Unlike the weights from random initialization, the weights of the pre-trained model already contain rich information during pre-training. To utilize such information in a more efficient manner, we propose to directly quantize the pre-trained model in a task-agnostic way to obtain a "prequantized" model before fine-tuning. We then introduce a parameter-efficient fine-tuning and show that fine-tuning could be finished with minimal weight updates. In particular, we freeze most of the quantized weights in the "pre-quantized" model, and only fine-tune a very small subset of its model parameters in the fine-tuning process. Through an extensive set of explorations and experiments, we demonstrate the feasibility and advantages of the "quantizing the PLM first, then fine-tuning" pipeline, which we name as PreQuant. The main contributions are summarized as follows:
- We propose a novel quantization framework, PreQuant, tailored for PLMs. We conduct a systematic study to overcome the difficulties of PLM quantization and validate the performance through thorough experiments.
- PreQuant performs task-agnostic quantization, which dramatically reduces the storage requirements for large PLMs and enables efficient deployment of PLMs on different downstream tasks. Moreover, PreQuant only finetunes 0.5% of the model parameters, which is more suitable in resource-limited scenarios.
- PreQuant is highly flexible, which is compatible with a wide range of quantization strategies and fine-tuning techniques. Within this framework, we evaluate the pros and cons of various quantization strategies and discuss the impact of different quantization settings.
## 2 Related Work 2.1 Efficient Transformer
Compressing transformer-based models has been a prosperous topic since PLMs showed remarkable performance in various NLP tasks (Ganesh et al.,
2021). The main idea of model compression is to reduce the memory and computation consumptions without too much performance degradation. There are several strands of research for large-scale transformers compression, including knowledge distillation (Jiao et al., 2020; Sanh et al., 2019; Wang et al.,
2021; Passban et al., 2021), quantization (Tao et al.,
2022; Zhang et al., 2020; Bai et al., 2021; Kim et al., 2021), weight sharing (Lan et al., 2019) and network pruning (Liang et al., 2021; Gordon et al.,
2020; Li et al., 2021). Besides directly compressing transformers, parameter efficient fine-tuning becomes promising by restricting the number of trainable parameters during fine-tuning (Houlsby et al., 2019; Ben Zaken et al., 2022; Hu et al., 2021; Gong et al., 2022). PreQuant propose an outlieraware parameter-efficient fine-tuning method in its second stage.
## 2.2 Quantization
Quantization, which represents the weights and activations of neural networks with low-bit precision, has been widely studied in computer vision and natural language processing (NLP) communities (Gholami et al., 2021). Recently, some researchers attempt to compress PLMs to reduce the deployment cost with quantization methods (Zadeh et al., 2020; Wu et al., 2022; Kim et al., 2021; Bondarenko et al., 2021). Quantization-aware training
(QAT) (Gupta et al., 2015) is a representative approach to quantize a PLM while retaining most of its performance on downstream tasks. Given a downstream task, QAT performs the quantization during the task-specific training(i.e., fine-tuning)
process. For example, Q8BERT (Zafrir et al., 2019)
and Q-BERT (Shen et al., 2020) are typical QAT
methods to compress BERT-based models. Unlike QAT, Post-training quantization (PTQ) disentangles the fine-tuning and quantization. The quantization procedure is conducted after the taskspecific fine-tuning. In comparison to QAT, PTQ
holds the advantages of flexibility and good compatibility. Yao et al. (2022) combines PTQ with knowledge distillation to achieve efficient compression for large PLMs. In addition to NLP scenarios, PTQ is also utilized to compress vision
![2_image_0.png](2_image_0.png)
transformers (Liu et al., 2021). Some very recent researches employ quantization and parameterefficient fine-tuning jointly.Qadapter (Park et al.,
2022) introduces a lightweight module to produce quantization-friendly activations by scaling them channel-wise. AlphaTuning (Kwon et al., 2022)
utilizes binary-coding-quantization (BCQ) by only updating scaling factors.
## 2.3 Outlier Phenomenon And Its Applications In Quantization
Outlier phenomenon in PLMs has been observed in previous research. Kovaleva et al. (2021) reveals that PLMs are surprisingly fragile to the removal of a very small number of features in the layer outputs.
More specifically, in case of BERT-based PLMs, outlier values exist in LayerNorm, the disabling of which would disrupt both the Masked Language Modeling (MLM) loss and the downstream task performance. The outliers are high-magnitude normalization parameters that show up consistently in the same dimensional positions. Outlier phenomenon has some applications in quantization.
For example, Park et al. (2018) proposes to use a low-precision format for the center values and a high-precision format for the outliers in PTQ. Zhao et al. (2019) proposes an outlier channel splitting
(OCS) method that duplicates and halves the channels containing outlier value. Bondarenko et al.
(2021) shows that outlier values detected in the activation of PLMs affect the estimation of corresponding scaling factors, thus disturbs the effectiveness of quantization. Hence, outlier-aware quantization has been proposed to promise the quantization performance. In PreQuant, we take the outlier phenomenon into consideration in both stages, which are first detected and then treated separately in low-precision quantization. During the fine-tuning stage, we cast the outliers back to high-precision representations and only update them.
## 3 Preliminary
A number of works have been employing various quantization techniques on the field of pre-trained language models. Existing quantization methods can be categorized into two prominent branches:
quantization-aware training and post-training quantization.
Basic Notations We consider uniform quantization for both weights and activations. Specifically, for a given tensor x in full precision, we adopt the rounding-to-nearest operation to round x to the nearest unsigned integer grid values x Z, which can be described as:
$$\mathbf{x}^{\mathbb{Z}}=\operatorname{clip}\left(\left\lfloor{\frac{\mathbf{x}}{\alpha}}\cdot2^{b}\right\rfloor+z;0,2^{b}-1\right)\quad\quad(1)$$
it width $\sim$ $\sim$ $\mathbb{T}^n$ is the
where b ∈ N is bit-width, α ∈ R is the scaling factor, and z ∈ N is zero-point. After obtaining the quantized tensor x Z, one can approximate the full-precision version of the tensor bx:
$${\widehat{\mathbf{x}}}={\frac{\left(\mathbf{x}^{\mathbb{Z}}-z\right)\alpha}{2^{b}}}\qquad\qquad\qquad(2)$$
$$\begin{array}{r l}{(\mathbf{OAT})}&{{}\mathbf{OAT}}\end{array}$$
Quantization-aware Training (QAT) QAT
methods (Fig. 1(b)) learn the scaling factors (quantization) along with the weights during the finetuning stage. Since the rounding operation in Eq. 1 is not derivable, gradients through the nondifferentiable operations are usually approximated with the Straight-through Estimator (STE). As the
![3_image_0.png](3_image_0.png)
quantization process of QAT is supervised by the overall training objective, the performance is generally quite promising.
Post-training Quantization (PTQ) PTQ methods (Fig. 1(a)) conduct qunatization after the finetuning. Unlike QAT that relies on the full training data, PTQ requires very little sometimes even zero calibration data to estimate scaling factors. Therefore, the overhead of PTQ is relatively small. However, its ease of use often comes with significant performance penalties.
Generally, existing quantization solutions (both QAT and PTQ) for PLMs are **task-specific**, meaning to quantize either during or after the model fine-tuning stage. However, in PLMs, "pre-training then fine-tuning" replaces conventional "training from scratch", thus pre-trained weights already contain rich information. We wonder if it is possible to perform **task-agnostic** quantization. As shown in Fig. 1(c), PreQuant first conducts task-agnostic quantization on the pre-trained model, followed by parameter-efficient fine-tuning.
## 4 Prequant 4.1 Overview
In contrast to PTQ and QAT, we propose to quantize PLMs prior to fine-tuning. Specifically, our framework consists of two stages, as shown in Fig. 2. The first stage directly quantizes the pretrained weights of PLMs without further adaptation. Hence, the quantization is task-agnostic. The second stage fine-tunes the "pre-quantized" PLM for downstream tasks. We can not simply apply the vanilla fine-tuning setting to a "pre-quantized" PLM. When the vanilla fine-tuning setting is used, it converts low-precision weights back into highprecision representations as weight updates are necessarily in high-precision (low-precision training is practically impossible). This defeats our purpose of quantizing these values. To address the issue, we propose a parameter-efficient tuning method that freezes most of the quantized weights and only fine-tune a small subset of model parameters. The details would be presented in following sections.
## 4.2 Task-Agnostic Quantization
The goal of the uniform quantization in Eq. 1 is to estimate the optimal scaling factor α for each parameter matrix. This can be formulated as an optimization problem that minimizes certain loss functions, such as mean squared error (Choukroun et al., 2019). A more convenient solution is to directly estimate α with statistic information, such as directly utilizing the range of the tensor as the scaling factor (Bondarenko et al., 2021).
In our investigation into the weights of PLMs, we have observed outlier phenomenon: in each parameter matrix of PLMs, a tiny fraction of weights
(i.e., outliers) holds abnormally greater values than the other weights. Empirically, most of weights strictly follow Gaussian distribution while "outliers" falls into the tail of the distribution, which can be detected with:
$$\mathbf{W}_{outlier}=\left\{w\,\left|\,\,\frac{1}{\sqrt{2\pi\sigma^{2}}}e^{-\frac{(x-\mu)^{2}}{2\sigma^{2}}}>\epsilon,w\in\mathbf{W}\right.\right\},\tag{3}$$
where µ and σ 2are the mean and variance of the parameter matrix W. Outlier values affect the effectiveness of quantization, causing great quantization error (Kovaleva et al., 2021). This addresses this issue, we adopt an intuitive quantization method. We set the quantization scaling factor α to 6σ, which is big enough to clip all the outlier weights according to Eq. 1.
It is worth noting that PreQuant is compatible with the other methods. In addition to the aforementioned *outlier-aware* scaling factor, we implement three other methods for comparison.
- *Min-max* is a basic method that estimates the scaling factor with the minimum and maximum of the tensor (Vanhoucke et al., 2011).
- MSE optimizes the scaling factor by minimizing the mean squared error between quantized and full-precision tensors (Choukroun et al.,
2019; Shin et al., 2016; Zhao et al., 2019).
- *Row-wise quantization* adopts a finer granularity that assigns different scaling factors to each dimension of the matrix (Shen et al.,
2020; Bondarenko et al., 2021).
We conduct a thorough comparison on previous scaling factor estimation methods and discuss the advantages and disadvantages of each in the experiment section. In comparison to previous quantization methods, our quantization method is data-free and task agnostic, as the quantizations are executed directly prior to the fine-tuning.
## 4.3 Outlier-Aware Parameter-Efficient Fine-Tuning
After obtaining a "pre-quantized" PLM, the second stage is to fine-tune the model for specific downstream tasks. In this stage, we encounter a dilemma: on one side, fine-tuning requires updating model weights with high-precision representations, while on the other side, casting the lowprecision weights back to high-precision weights will nullify the effect of quantization. To address the issue, we propose an outlier-aware parameterefficient fine-tuning (outlier-aware tuning) strategy that keeps most of the model parameters frozen in low-precision. Parameter-efficient fine-tuning aims to adapt PLMs by tuning only a few number of parameters (Houlsby et al., 2019; Gong et al., 2022).
(Ben Zaken et al., 2022) and Gong et al. (2022)
have shown that tuning a small subset of parameters of PLMs can be comparable with full-parameter fine-tuning in terms of performance. This approach suits our scenario as it does not modify the model structure.
However, parameter-efficient fine-tuning is more challenging in our case since the quantization step produces pre-quantized PLM wherein the weights are rounded to low-precision. The induced quantization error correlates to the disturbance of the weights. If weights do not change much after quantization, the error will be minimal, and significant if they do. Intuitively, our goal is to identify which parts of the weights cause the most quantization error. By only tuning these specific weights, we can recover much of PLM's damaged representation ability.
In our investigation, we find that the majority of parameters exhibit relatively small disturbance, hence freezing them could preserve most of the PLM's ability. Some particular weights contribute to the most of the induced error and these weights are concentrated in specific dimensions. Moreover, these susceptible-to-quantization weights are exactly outlier weights that we mentioned in the above section. This is because the abnormally large values of outliers are generally clipped according to Eq. 1. We identify the dimensions containing most of outlier weights, then setting them as trainable parameters while freezing the rest. Specifically, in each parameter matrix, we select r outlier dimensions as trainable parameters. r is extremely small, we can guarantee that more than 99% parameters still remain in low-precision. By tuning the subnetwork consisting of outlier dimensions, we expect to recover the damaged representation ability and adapt to specific downstream task at minimal trainable parameters.
## 5 Experimental Evaluation 5.1 Experimental Setup
Settings We evaluate PreQuant on several popular PLMs including BERT (Devlin et al., 2018),
RoBERTa (Liu et al., 2019) and T5 (Raffel et al., 2020). For RoBERTa, we test on both RoBERTabase and RoBERTalarge. For T5, we employ PreQuant to the encoder of T53b, denoted as T5 Encoder. We use a fixed set of hyper-parameters for all the GLUE tasks. For each layer, we set the bit-width option b for weights as 4. Besides, we apply 8-bit min-max uniform quantization to activations and embeddings. Experimental results of more bit-width options are listed in Appendix A.4.
Datasets The GLUE benchmark contains a variety of natural language understanding tasks, including textual entailment (RTE), natural language inference (MNLI, QNLI), paraphrase (MRPC, QQP,
| Models | Methods | Bits Trainable | CoLA | MNLI | MRPC | QNLI | QQP | RTE | SST-2 | STS-B | Avg. | |
|------------------|-----------|------------------|--------|--------|--------|--------|-------|-------|---------|---------|--------|------|
| Params | | | | | | | | | | | | |
| FT | 32 | 85M | 57.3 | 84.4 | 88.3 | 91.6 | 89.8 | 71.0 | 93.0 | 89.4 | 83.1 | |
| PTQ | 4 | 85M | 43.1 | 68.2 | 84.9 | 79.7 | 79.4 | 50.2 | 90.8 | 83.1 | 72.4 | |
| QAT | 4 | 85M | 57.2 | 83.7 | 87.8 | 91.3 | 89.6 | 70.0 | 92.3 | 89.1 | 82.6 | |
| PreQuant | 4 | 0.55M | 54.6 | 83.5 | 88.0 | 90.7 | 88.6 | 68.7 | 92.3 | 88.9 | 81.9 | |
| BERTbase | FT | 32 | 85M | 63.6 | 87.6 | 90.2 | 92.8 | 91.9 | 78.7 | 94.8 | 91.2 | 86.4 |
| PTQ | 4 | 85M | 46.3 | 74.5 | 85.5 | 81.8 | 84.3 | 56.9 | 92.1 | 84.5 | 75.7 | |
| QAT | 4 | 85M | 61.9 | 86.9 | 88.9 | 91.7 | 91.3 | 76.5 | 94.4 | 90.5 | 85.3 | |
| PreQuant | 4 | 0.55M | 61.5 | 86.2 | 89.0 | 91.6 | 90.9 | 76.0 | 94.0 | 90.1 | 84.9 | |
| RoBERTabase | FT | 32 | 302M | 68.0 | 90.2 | 90.9 | 94.7 | 92.2 | 86.6 | 96.4 | 92.4 | 88.9 |
| PTQ | 4 | 302M | 46.6 | 79.5 | 86.6 | 82.2 | 84.6 | 56.4 | 92.6 | 85.0 | 76.7 | |
| RoBERTalarge QAT | 4 | 302M | 66.5 | 89.4 | 88.8 | 93.8 | 91.4 | 86.6 | 95.8 | 91.4 | 87.9 | |
| PreQuant | 4 | 1.47M | 67.3 | 89.4 | 89.0 | 93.2 | 91.1 | 84.7 | 95.4 | 90.8 | 87.6 | |
| FT | 32 | 1.2B | 67.6 | 91.2 | 90.9 | 95.4 | 91.9 | 87.1 | 97.2 | 92.3 | 89.2 | |
| PTQ | 4 | 1.2B | 50.6 | 82.4 | 86.5 | 84.6 | 85.7 | 59.1 | 92.0 | 87.5 | 78.6 | |
| T5 Encoder | QAT | 4 | 1.2B | 66.5 | 90.4 | 90.2 | 95.3 | 91.6 | 86.6 | 96.7 | 91.6 | 88.6 |
| PreQuant | 4 | 11.80M | 66.4 | 90.7 | 90.0 | 95.1 | 92.0 | 85.1 | 96.9 | 91.6 | 88.5 | |
Table 2: Results of parameter-efficient PLM quantization methods on RoBERTalarge. Full results are supplemented in Appendix A.5.
| Methods | Bits Params GLUE | | |
|---------------------------------|--------------------|-------|------|
| FT | 32 | 302M | 88.9 |
| Qadapter (Park et al., 2022) | 8 | 0.29M | 85.1 |
| AlphaTuning (Kwon et al., 2022) | 4 | 1.18M | 86.3 |
| PreQuant-α | 4 | 0.29M | 86.6 |
| PreQuant | 4 | 1.47M | 87.6 |
STS-B), sentiment analysis (SST-2) and linguistic acceptability (CoLA) (Wang et al., 2018). The evaluation metrics are Matthews correlation for CoLA,
Spearman correlation for STS-B, and Accuracy for the other tasks. We supplement fine-tuning details in Appendix A.1.
Baselines Classical quantization methods including PTQ and QAT are adopted as baselines. For PTQ, we adopt the implementation by Bondarenko et al. (2021), which introduces the group-wise granularity to reduce the quantization error. For QAT,
we also implement a group-wise granularity variant.
Results of the vanilla QAT that utilizes straightthrough estimator (STE) (Bengio et al., 2013) to spread gradients are listed in Apppendix A.4. We include Qadapter (Park et al., 2022) and AlphaTuning (Kwon et al., 2022) that jointly employ the quantization and the parameter-efficient fine-tuning
## For Further Comparison. 5.2 Main Results Comparison With Quantization Methods. The
main comparison results are reported in Table 1. Due to the precision reduction, all quantization methods inevitably lead to performance degradation in comparison to the full-precision fine-tuned model (FT). There is a considerable performance gap between 4-bit PTQ and 32-bit FT, although they are both tuning with a modest amount of calibration data. QAT outperforms PTQ on all tasks, demonstrating the benefit of a hybrid approach of quantization and task-specific fine-tuning. PreQuant is comparable in performance to QAT, but with much fewer trainable parameters. In order to evaluate the scalability and robustness of PreQuant, we report the results for different scale PLMs, ranging from 110M parameters to 1.5B parameters. As the model size increases, PreQuant performs consistently and stably. Take T51.5b as an example, PreQuant could achieve 99.21% performance of FT with only tuning 0.10% trainable parameters.
## Comparison With Parameter-Efficient Plm Quantization Methods. Comparisons With
Qadapter and AlphaTuning are reported in Table 2.
For Qadapter, we adopt uniform asymmetric 8-bit channel-wise quantization for both activation functions and weights as described in the original paper. We implement AlphaTuning with 4-bit BCQ quantization to make a fair comparison. Overall, PreQuant achieves the best performance among these parameter-efficient PLM quantization methods, while maintaining a comparable compression ratio. Inspired by AlphaTuning, we also implement PreQuant-α, a variant of PreQuant that only tuning the scaling factors of the uniform quantization, to estimate the value of AlphaTuning technique. PreQuant outperforms PreQuant-α by 1 point, indicating the advantage of updating the model parameters over updating the quantization parameters.
## 5.3 Comparison Of Quantization Strategies
In this section, we replace the *outlier-aware quantization* with alternative quantization strategies to see how different strategies affect the performance.
We evaluate three different strategies (i.e., *min-max*,
MSE, and *Row-wise quantization* in Section 4.2)
on 4-bit quantization for RoBERTalarge. The differences of these strategies are listed in Table 3. As the disturbance of weights after quantization indicates the induced quantization error, we compute the L2 distance between quantized weights and full-precision weights as the measurement of the quantization error. As the bottom block of Table 3 reveals, the induced quantization error is highly
| Min- | Outlier- MSE Row | | | |
|-------------------------|-------|------|------|------|
| max | aware | wise | | |
| Layer-wise | ✔ | ✔ | ✔ | ✘ |
| Granularity Statistical | ✔ | ✔ | ✘ | ✔ |
| Strategy Quantization | 163.1 | 59.5 | 42.6 | 41.7 |
| Error (L2 Dist) CoLA | 15.6 | 67.3 | 67.8 | 68.0 |
| MRPC | 77.6 | 89.0 | 89.8 | 90.0 |
| STS-B | 84.5 | 90.8 | 90.9 | 91.3 |
| MNLI | 79.6 | 89.4 | 89.5 | 89.7 |
![6_image_0.png](6_image_0.png)
correlated to the performance on downstream tasks.
The less the error, the better the performance. The min-max quantization strategy performs worst due to the negative influence of outlier weights. Meanwhile, outlier-aware, MSE, and *row-wise* strategies achieve comparable performance on four tasks as well as similar quantization error. The *MSE quantization* strategy achieve slightly better performance since it directly optimizes the L2 distance, which is more complicated than statistical strategies. *rowwise quantization* perform slightly better than layerwise strategies at the cost of a more expensive computational graph. Above all, the *outlier-aware* strategy reaches the best trade-off between performance and complexity.
## 5.4 Analysis Of Outlier-Aware Fine-Tuning
In this section, we discuss the effect of parameterefficient fine-tuning on PreQuant.
Does outlier-aware tuning really work? PreQuant appoints the trainable subnetwork by detecting outlier dimensions, shorted as *Outlier*. It is important to show that the outlier dimension really matters for fine-tuning performance. To this end, we introduce two variants: 1) *Random*: We randomly choose the same amount of trainable parameters as our method; 2) *Ticket*: This is a task-agnostic subnetwork for parameter-efficient fine-tuning proposed in Gong et al. (2022). The experimental results on four datasets are shown in Fig. 3. Random selection of trainable parameters leads to a significant drop in performance, suggesting that outlier information does help in finding suitable trainable subnetworks. *Outlier* and *Ticket*
Models Size # Trainable QNLI MRPC Ratio
| RoBERTalarge T5 Encoder |
|---------------------------|
FT 100% 94.7 90.9
r = 1024 100% 92.9 90.3
r = 512 50% 92.8 90.2
r = 20 1.95% 93.6 89.4
r = 10 0.98% 93.3 89.1
r = 5 0.49% 93.2 89.0
r = 3 0.29% 93.2 88.8
r = 1 0.10% 86.5 80.5
FT 100% 95.4 90.9
r = 1024 100% 94.1 90.4 r = 512 50% 94.2 90.4 r = 20 1.95% 95.1 90.2 r = 10 0.98% 95.1 90.0
r = 5 0.49% 94.6 88.9
r = 3 0.29% 92.3 86.7
r = 1 0.10% 87.5 79.4
achieve comparable performance, and both are very close to the upper-bound performance by the FT.
This suggests that our *outlier-aware* fine-tuning is a promising strategy to efficiently adapt PLMs to downstream tasks while reducing quantization errors. Noting that *Outlier* and *Ticket* have similar performance, we further calculate the subnetwork overlap ratio of the two methods using the Jaccard similarity coefficient. As we expected, *Outlier* and Ticket have non-negligible overlap (Jaccard similarity coefficient is 0.57.).
What is the optimal size of the trainable subnetwork? As stated in Section 4.3, we use hyperparameter r to control the size of the trainable high-precision parameters. We then focus on the effect of r on model performance. We conduct empirical experiments with various values of r in
{1, 3, 5, 10, 20, 512, 1024}. Smaller value of r indicates fewer trainable parameters, which inevitably leads to performance degradation. We expect that more trainable parameters will lead to higher performance. The results are reported in Table 4. We find that a relatively small r, e.g., 3 or 5, is good enough to adapt PreQuant to downstream tasks.
Note that r = 512 sets half of the model parameters trainable, and r = 1024 denotes that the whole model is trainable. From Table 4, we can see that setting r as 1024 cannot fully recovers the performance which is reasonable because the induced quantization error between high-precision and lowprecision representations could not be completely eliminated. Setting r to a larger value than 10 brings limited performance improvements but requiring more high-precision computational cost.
## Does Other Parameter-Efficient Fine-Tuning
methods work with PreQuant ? Following Ding et al. (2022), we consider three types of parameter-efficient techniques: additionbased methods, specification-based methods, and reparameterization-based methods. Addition-based methods, such as adapter and prefix-tuning, involve introducing extra trainable modules or parameters that cannot be directly applied to PreQuant. On the other hand, specification-based methods specify certain parameters in the original model as trainable parameters, which work well with PreQuantas discussed in Figure 3. Our outlier-aware fine-tuning falls into this category. Reparameterization-based methods, such as low-rank adaptation (LoRA) (Hu et al., 2021), reparameterizes linear layers. LoRA updates all parameters in the weight matrix by adding a low-rank matrix. In our scenario, the original weight matrix is in low-precision while the update matrix is in high-precision. The addition of a high-precision matrix to a low-precision matrix results in a high-precision matrix, thus nullifying the quantization effect.
## 5.5 Extending To Layer-Wise Mixed-Precision Quantization
Previous work has shown that allocating different bit-widths to different layers leads to a better accuracy-efficiency trade-off, since not all layers are equally sensitive to quantization (Tang et al., 2022). PreQuant can be conveniently extended to a layer-wise mix-precision variant by assigning customized bit-widths to each transformer layer.
We implement a pilot mix-precision quantization paradigm that assigns 2-bits to bottom layers and 4-bits to top layers or vise versa. As can be seen in Table 5, all mixed-precision methods exhibit performance degradation due to the hybrid quantization setting. An overall conclusion is that top layers are less sensitive to quantization than bottom layers.
Allocating 2-bits to the top third of layers resulted in an average loss of less than 3 points, which is very impressive. Meanwhile, assigning 2-bits to the bottom one-third of the layers suffers from more than 10 points of performance loss. These insightful findings could be beneficial to the development
| Methods | Layers | QNLI | STS-B | | |
|-------------------|----------|--------|---------|------|------|
| 1-8 | 9-16 | 17-24 | | | |
| FT | 32 | 32 | 32 | 95.4 | 92.3 |
| All 4-bits | 4 | 4 | 4 | 95.1 | 91.6 |
| Bottom one-third | 2 | 4 | 4 | 84.9 | 75.0 |
| Bottom two-thirds | 2 | 2 | 4 | 82.4 | 59.5 |
| Top one-third | 4 | 4 | 2 | 92.3 | 89.6 |
| Top two-thirds | 4 | 2 | 2 | 84.7 | 85.4 |
Table 5: Layer-wise mixed-precision quantization results for **T5 Encoder** on QNLI and STS-B. For the model with 24 layers, we quantize top (or bottom)
one(or two)-third(s) layers to 2-bits while keeping the rest of the model in 4-bits.
of better mixed-precision quantization techniques.
## 6 Conclusions
As the scale of pre-trained language models increases, model compression becomes a prerequisite prior to model deployment in resource-limited scenarios. Quantization is an effective and promising technique to compress large PLMs. Existing quantization methods including PTQ and QAT perform quantizations either during or after task-specific fine-tuning process. Since these approaches are highly task-specific, it's hard to transfer them to different tasks with low cost. In this paper, we propose a "quantizing the PLM first, then finetuning" framework, PreQuant, which includes a task-agnostic quantization stage and an outlieraware parameter-efficient fine-tuning stage. We compress widely used PLMs with PreQuant, including BERT, RoBERTa and T5 variants. The experimental results on the GLUE benchmark are reported to demonstrate the effectiveness of PreQuant. We also reveal that PreQuant is more flexible and efficient than its competitive counterparts.
An elaborate empirical study is conducted on the workflow of PreQuant, we hope the findings could shed some light on the quantization research of PLMs.
## Limitations
Although the proposed PreQuant achieves promising results especially in reducing the storage and computational resources, we discuss some limitations of our work in this section. In our experiments, we observe that the performance of PreQuant is highly correlated with the data size. When fine-tuning with very limited data, PreQuant may not meet expectation to preserve the performance of PLMs. Moreover, our model performance also depends on the number of parameters (i.e. outliers) restored in the fine-tuning stage. This hyperparameter controls the trade-off between model performance and parameter efficiency. The optimal choice of the hyper-parameter for different tasks requires further investigation. Additional discussion and experimental results are provided in Appendix A.2.
## Acknowledgments
This work is supported by Ministry of Science and Technology Key R&D Program (2030 Artificial Intelligence) (No. 2020AAA0106600) and National Natural Science Foundation of China (NSFC Grant No. 62122089). We sincerely thank all reviewers for their valuable comments and suggestions, which are crucial for improving our work.
## References
Haoli Bai, Wei Zhang, Lu Hou, Lifeng Shang, Jin Jin, Xin Jiang, Qun Liu, Michael Lyu, and Irwin King.
2021. BinaryBERT: Pushing the limit of BERT quantization. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers),
pages 4334–4348, Online. Association for Computational Linguistics.
Elad Ben Zaken, Yoav Goldberg, and Shauli Ravfogel.
2022. BitFit: Simple parameter-efficient fine-tuning for transformer-based masked language-models. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 2:
Short Papers), pages 1–9, Dublin, Ireland. Association for Computational Linguistics.
Yoshua Bengio, Nicholas Léonard, and Aaron Courville.
2013. Estimating or propagating gradients through stochastic neurons for conditional computation. arXiv preprint arXiv:1308.3432.
Yelysei Bondarenko, Markus Nagel, and Tijmen Blankevoort. 2021. Understanding and overcoming the challenges of efficient transformer quantization.
In *Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing*, pages 7947–7969, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Yoni Choukroun, Eli Kravchik, Fan Yang, and Pavel Kisilev. 2019. Low-bit quantization of neural networks for efficient inference. In 2019 IEEE/CVF
International Conference on Computer Vision Workshop (ICCVW), pages 3009–3018. IEEE.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep
bidirectional transformers for language understanding. *arXiv preprint arXiv:1810.04805*.
Ning Ding, Yujia Qin, Guang Yang, Fuchao Wei, Zonghan Yang, Yusheng Su, Shengding Hu, Yulin Chen, Chi-Min Chan, Weize Chen, et al. 2022. Delta tuning:
A comprehensive study of parameter efficient methods for pre-trained language models. arXiv preprint arXiv:2203.06904.
Prakhar Ganesh, Yao Chen, Xin Lou, Mohammad Ali Khan, Yin Yang, Hassan Sajjad, Preslav Nakov, Deming Chen, and Marianne Winslett. 2021. Compressing large-scale transformer-based models: A case study on BERT. Transactions of the Association for Computational Linguistics, 9:1061–1080.
Amir Gholami, Sehoon Kim, Zhen Dong, Zhewei Yao, Michael W Mahoney, and Kurt Keutzer. 2021. A
survey of quantization methods for efficient neural network inference. *arXiv preprint arXiv:2103.13630*.
Zhuocheng Gong, Di He, Yelong Shen, Tie-Yan Liu, Weizhu Chen, Dongyan Zhao, Ji-Rong Wen, and Rui Yan. 2022. Finding the dominant winning ticket in pre-trained language models. In *Findings of the Association for Computational Linguistics: ACL 2022*,
pages 1459–1472, Dublin, Ireland. Association for Computational Linguistics.
Mitchell Gordon, Kevin Duh, and Nicholas Andrews.
2020. Compressing BERT: Studying the effects of weight pruning on transfer learning. In Proceedings of the 5th Workshop on Representation Learning for NLP, pages 143–155, Online. Association for Computational Linguistics.
Suyog Gupta, Ankur Agrawal, Kailash Gopalakrishnan, and Pritish Narayanan. 2015. Deep learning with limited numerical precision. In *Proceedings of the* 32nd International Conference on Machine Learning, volume 37 of Proceedings of Machine Learning Research, pages 1737–1746, Lille, France. PMLR.
Neil Houlsby, Andrei Giurgiu, Stanislaw Jastrzebski, Bruna Morrone, Quentin De Laroussilhe, Andrea Gesmundo, Mona Attariyan, and Sylvain Gelly. 2019.
Parameter-efficient transfer learning for NLP. In Proceedings of the 36th International Conference on Machine Learning, volume 97 of *Proceedings* of Machine Learning Research, pages 2790–2799.
PMLR.
Edward J Hu, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, Weizhu Chen, et al. 2021. Lora: Low-rank adaptation of large language models. In *International Conference on Learning Representations*.
Xiaoqi Jiao, Yichun Yin, Lifeng Shang, Xin Jiang, Xiao Chen, Linlin Li, Fang Wang, and Qun Liu. 2020.
TinyBERT: Distilling BERT for natural language understanding. In *Findings of the Association for Computational Linguistics: EMNLP 2020*, pages 4163–
4174, Online. Association for Computational Linguistics.
Sehoon Kim, Amir Gholami, Zhewei Yao, Michael W.
Mahoney, and Kurt Keutzer. 2021. I-bert: Integeronly bert quantization. In *Proceedings of the 38th* International Conference on Machine Learning, volume 139 of *Proceedings of Machine Learning Research*, pages 5506–5518. PMLR.
Olga Kovaleva, Saurabh Kulshreshtha, Anna Rogers, and Anna Rumshisky. 2021. BERT busters: Outlier dimensions that disrupt transformers. In Findings of the Association for Computational Linguistics: ACLIJCNLP 2021, pages 3392–3405, Online. Association for Computational Linguistics.
Se Jung Kwon, Jeonghoon Kim, Jeongin Bae, Kang Min Yoo, Jin-Hwa Kim, Baeseong Park, Byeongwook Kim, Jung-Woo Ha, Nako Sung, and Dongsoo Lee.
2022. Alphatuning: Quantization-aware parameterefficient adaptation of large-scale pre-trained language models. *arXiv preprint arXiv:2210.03858*.
Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut.
2019. Albert: A lite bert for self-supervised learning of language representations. In *International Conference on Learning Representations*.
Jiaoda Li, Ryan Cotterell, and Mrinmaya Sachan. 2021.
Differentiable subset pruning of transformer heads.
Transactions of the Association for Computational Linguistics, 9:1442–1459.
Chen Liang, Simiao Zuo, Minshuo Chen, Haoming Jiang, Xiaodong Liu, Pengcheng He, Tuo Zhao, and Weizhu Chen. 2021. Super tickets in pre-trained language models: From model compression to improving generalization. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1:
Long Papers), pages 6524–6538, Online. Association for Computational Linguistics.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019.
Roberta: A robustly optimized bert pretraining approach. *arXiv preprint arXiv:1907.11692*.
Zhenhua Liu, Yunhe Wang, Kai Han, Wei Zhang, Siwei Ma, and Wen Gao. 2021. Post-training quantization for vision transformer. In Advances in Neural Information Processing Systems, volume 34, pages 28092–28103. Curran Associates, Inc.
Ilya Loshchilov and Frank Hutter. 2018. Decoupled weight decay regularization. In *International Conference on Learning Representations*.
Eunhyeok Park, Dongyoung Kim, and Sungjoo Yoo.
2018. Energy-efficient neural network accelerator based on outlier-aware low-precision computation.
In *Proceedings of the 45th Annual International Symposium on Computer Architecture*, ISCA '18, page 688–698. IEEE Press.
Minseop Park, Jaeseong You, Markus Nagel, and Simyung Chang. 2022. Quadapter: Adapter for gpt-2 quantization. *arXiv preprint arXiv:2211.16912*.
Peyman Passban, Yimeng Wu, Mehdi Rezagholizadeh, and Qun Liu. 2021. Alp-kd: Attention-based layer projection for knowledge distillation. In *Proceedings* of the AAAI Conference on Artificial Intelligence, volume 35, pages 13657–13665.
Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. *Journal of Machine Learning Research*, 21(140):1–67.
Victor Sanh, Lysandre Debut, Julien Chaumond, and Thomas Wolf. 2019. Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. *arXiv* preprint arXiv:1910.01108.
Sheng Shen, Zhen Dong, Jiayu Ye, Linjian Ma, Zhewei Yao, Amir Gholami, Michael W Mahoney, and Kurt Keutzer. 2020. Q-bert: Hessian based ultra low precision quantization of bert. In *Proceedings of* the AAAI Conference on Artificial Intelligence, volume 34, pages 8815–8821.
Sungho Shin, Kyuyeon Hwang, and Wonyong Sung.
2016. Fixed-point performance analysis of recurrent neural networks. In 2016 IEEE International Conference on Acoustics, Speech and Signal Processing
(ICASSP), pages 976–980. IEEE.
Chen Tang, Kai Ouyang, Zhi Wang, Yifei Zhu, Wen Ji, Yaowei Wang, and Wenwu Zhu. 2022. Mixedprecision neural network quantization via learned layer-wise importance. In *Computer Vision –*
ECCV 2022, pages 259–275, Cham. Springer Nature Switzerland.
Chaofan Tao, Lu Hou, Wei Zhang, Lifeng Shang, Xin Jiang, Qun Liu, Ping Luo, and Ngai Wong. 2022.
Compression of generative pre-trained language models via quantization. In *Proceedings of the 60th Annual Meeting of the Association for Computational* Linguistics (Volume 1: Long Papers), pages 4821–
4836, Dublin, Ireland. Association for Computational Linguistics.
Vincent Vanhoucke, Andrew Senior, and Mark Z Mao.
2011. Improving the speed of neural networks on cpus.
Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R Bowman. 2018.
Glue: A multi-task benchmark and analysis platform for natural language understanding. *arXiv preprint* arXiv:1804.07461.
Wenhui Wang, Hangbo Bao, Shaohan Huang, Li Dong, and Furu Wei. 2021. MiniLMv2: Multi-head selfattention relation distillation for compressing pretrained transformers. In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, pages 2140–2151, Online. Association for Computational Linguistics.
Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Transformers: State-of-the-art natural language processing.
In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38–45, Online. Association for Computational Linguistics.
Xiaoxia Wu, Zhewei Yao, Minjia Zhang, Conglong Li, and Yuxiong He. 2022. Extreme compression for pre-trained transformers made simple and efficient.
arXiv preprint arXiv:2206.01859.
Zhewei Yao, Reza Yazdani Aminabadi, Minjia Zhang, Xiaoxia Wu, Conglong Li, and Yuxiong He. 2022.
Zeroquant: Efficient and affordable post-training quantization for large-scale transformers. arXiv preprint arXiv:2206.01861.
Ali Hadi Zadeh, Isak Edo, Omar Mohamed Awad, and Andreas Moshovos. 2020. Gobo: Quantizing attention-based nlp models for low latency and energy efficient inference. In *2020 53rd Annual* IEEE/ACM International Symposium on Microarchitecture (MICRO), pages 811–824. IEEE.
Ofir Zafrir, Guy Boudoukh, Peter Izsak, and Moshe Wasserblat. 2019. Q8bert: Quantized 8bit bert. In 2019 Fifth Workshop on Energy Efficient Machine Learning and Cognitive Computing-NeurIPS Edition
(EMC2-NIPS), pages 36–39. IEEE.
Wei Zhang, Lu Hou, Yichun Yin, Lifeng Shang, Xiao Chen, Xin Jiang, and Qun Liu. 2020. TernaryBERT:
Distillation-aware ultra-low bit BERT. In *Proceedings of the 2020 Conference on Empirical Methods* in Natural Language Processing (EMNLP), pages 509–521, Online. Association for Computational Linguistics.
Ritchie Zhao, Yuwei Hu, Jordan Dotzel, Chris De Sa, and Zhiru Zhang. 2019. Improving neural network quantization without retraining using outlier channel splitting. In *Proceedings of the 36th International* Conference on Machine Learning, volume 97 of *Proceedings of Machine Learning Research*, pages 7543–
7552. PMLR.
## A Appendix
![11_Image_0.Png](11_Image_0.Png) A.1 Training Details
For all the tasks, we adopt AdamW (Loshchilov and Hutter, 2018) as the optimizer and search batch size in {16, 32}. For full-parameter finetuning baselines, the learning rate for PreQuant is searched within {1e-5, 2e-5, 3e-5, 4e-5} for BERTbase, RoBERTabase, and RoBERTalarge and
{1e-4, 2e-4, 3e-4} for T5 Encoder. For PreQuant, the learning rate is searched within {1e-4, 3e-4, 5e4, 7e-4, 9e-4}. We set the dropout rate to 0.1 and weight decay to 0.01. For all tasks, the model is trained for 10 epochs at maximum and the best performance on the validation set is reported. Experiments are conducted upon the Huggingface Transformers library (Wolf et al., 2020).
## A.2 Results In Low-Resource Scenarios
| Dataset Size | 2k | 4k | 6k | 8k | full |
|-------------------|------|------|------|------|--------|
| Full-ft | 86.6 | 87.3 | 88.2 | 88.4 | 91.2 |
| PreQuant (4-bits) | 83.2 | 85.9 | 87.4 | 87.8 | 90.7 |
| Diff | -3.4 | -1.4 | -0.8 | -0.6 | -0.5 |
more challenging in small datasets. We further explore the effect of data size on quantization and finetuning. To this end, we randomly sample MNLI
training set to {2k, 4k, 6k, 8k} examples and finetune **T5 Encoder** on them. As seen in Table 6, smaller data size leads to larger performance gap between the full-precision model and the quantized one.
## A.3 Visualization Of Quantization Error
Fig. 4 is an example of quantization error induced by uniform quantization. Several outlier dimensions tend to have larger error after quantization due to their large value.
## A.4 Scaling To Other Bit-Widths
![11_Image_1.Png](11_Image_1.Png)
As shown in Fig. 5, when the number of bits for weight is 8, the performance of all quantization methods is close. However, when the bit-width decreases to 4, performance disparities between various approaches start to become apparent. PTQ
fails to predict reasonable answers on 4-bit quantization, indicating that the quantization error is too strong to be minimized with a modest amount of calibration data. QAT and PreQuant still remain an acceptable performance for 4-bit quantization.
## A.5 Detailed Results Of More Quantization Methods
During investigation, we find quantization is
| Methods | Bits Trainable | CoLA | MNLI | MRPC | QNLI | QQP | RTE | SST-2 | STS-B | Avg. | |
|---------------|------------------|--------|--------|--------|--------|-------|-------|---------|---------|--------|------|
| Params | | | | | | | | | | | |
| FT | 32 | 302M | 68.0 | 90.2 | 90.9 | 94.7 | 92.2 | 86.6 | 96.4 | 92.4 | 88.9 |
| QAT-vanilla | 4 | 302M | 66.8 | 89.2 | 89.0 | 83.5 | 91.1 | 86.4 | 95.6 | 91.0 | 86.6 |
| Qadapter | 8 | 0.29M | 55.4 | 87.8 | 86.7 | 91.9 | 90.5 | 84.4 | 93.6 | 90.7 | 85.1 |
| AlphaTuning 4 | 1.18M | 57.8 | 88.7 | 88.6 | 93.2 | 91.2 | 84.8 | 95.2 | 91.2 | 86.3 | |
| PreQuant | 4 | 1.47M | 67.3 | 89.4 | 89.0 | 93.2 | 91.1 | 84.7 | 95.4 | 90.8 | 87.6 |
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
✓ A1. Did you describe the limitations of your work?
7
✗ A2. Did you discuss any potential risks of your work?
We use widely adopted open-source data in our paper. We believe there is no possibility of causing any risks.
✓ A3. Do the abstract and introduction summarize the paper's main claims?
1
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✓ **Did You Use Or Create Scientific Artifacts?** 4
✓ B1. Did you cite the creators of artifacts you used?
4
✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
4
✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
4
✗ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
GLUE benchmark is widely adopted in the NLP community and we get the data from trusted sources.
✗ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
GLUE benchmark is widely adopted in the NLP community. The documentation can be easily found on the Internet.
✗ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
Left blank.
## C ✓ **Did You Run Computational Experiments?** 4
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
4 The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance.
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Section 4 and Appendix
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
4
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
4
## D ✗ **Did You Use Human Annotators (E.G., Crowdworkers) Or Research With Human Participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
No response.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? No response.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
No response.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
No response.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
No response. |
he-etal-2023-synthetic | Synthetic Pre-Training Tasks for Neural Machine Translation | https://aclanthology.org/2023.findings-acl.512 | Pre-training models with large crawled corpora can lead to issues such as toxicity and bias, as well as copyright and privacy concerns. A promising way of alleviating such concerns is to conduct pre-training with synthetic tasks and data, since no real-world information is ingested by the model. Our goal in this paper is to understand the factors that contribute to the effectiveness of pre-training models when using synthetic resources, particularly in the context of neural machine translation. We propose several novel approaches to pre-training translation models that involve different levels of lexical and structural knowledge, including: 1) generating obfuscated data from a large parallel corpus 2) concatenating phrase pairs extracted from a small word-aligned corpus, and 3) generating synthetic parallel data without real human language corpora. Our experiments on multiple language pairs reveal that pre-training benefits can be realized even with high levels of obfuscation or purely synthetic parallel data. We hope the findings from our comprehensive empirical analysis will shed light on understanding what matters for NMT pre-training, as well as pave the way for the development of more efficient and less toxic models. | # Synthetic Pre-Training Tasks For Neural Machine Translation
Zexue He1∗, Graeme Blackwood2∗**, Rameswar Panda**2, Julian McAuley1**, Rogerio Feris**2 1University of California, San Diego 2MIT-IBM Watson AI Lab, IBM Research 1{zehe,jmcauley}@ucsd.edu 2{blackwood,rpanda,rsferis}@us.ibm.com
## Abstract
Pre-training models with large crawled corpora can lead to issues such as toxicity and bias, as well as copyright and privacy concerns. A
promising way of alleviating such concerns is to conduct pre-training with synthetic tasks and data, since no real-world information is ingested by the model. Our goal in this paper is to understand the factors that contribute to the effectiveness of pre-training models when using synthetic resources, particularly in the context of neural machine translation. We propose several novel approaches to pre-training translation models that involve different levels of lexical and structural knowledge, including: 1)
generating obfuscated data from a large parallel corpus 2) concatenating phrase pairs extracted from a small word-aligned corpus, and 3) generating synthetic parallel data without real human language corpora. Our experiments on multiple language pairs reveal that pre-training benefits can be realized even with high levels of obfuscation or purely synthetic parallel data. We hope the findings from our comprehensive empirical analysis will shed light on understanding what matters for NMT pre-training, as well as pave the way for the development of more efficient and less toxic models.
## 1 Introduction And Motivation
Neural Machine Translation (NMT) models depend on large quantities of aligned training data (Aharoni et al., 2019; Fan et al., 2021; NLLB Team et al., 2022). For many language pairs of interest, however, high quality parallel data is either unavailable or exists only in limited quantities. Training robust NMT systems with such limited data can be a significant challenge.
Even for high-resource language pairs, parallel data can be noisy and frequently contains toxic speech or biased language. Such problems are particularly acute for comparable corpora crawled automatically from the web (Kreutzer et al., 2022)
*Equal contribution Figure 1: A comparison of the extent to which the synthetic data generation methods described in Section 3
![0_image_0.png](0_image_0.png)
encode lexical and/or structural translation knowledge.
The vertical axis compares methods with respect to lexical knowledge. The horizontal axis compares structural knowledge. BLEU scores correspond to the Indonesianto-English translation task described in Section 4.
since it can cause catastrophic mistranslations
(Costa-jussà et al., 2022) or hallucinated toxicity.
It is preferable to avoid exposing the model to such data in order to prevent accidental generation of offensive content or egregiously embarrassing translations. Crawled data can also present problematic copyright, attribution, and privacy issues. As an example, the JW300 corpus of Jehovah's Witnesses publications (Agic and Vuli ´ c´, 2019) was recently withdrawn due to a copyright infringement claim.
Our primary motivation is to investigate how knowledge transfer from NMT pre-training can help to avoid or minimize the data issues described above. We study the impact of pre-training and transfer learning on translation tasks by comparing various procedural approaches to synthetic data generation. Each approach has varying degrees of inherited or artificially constructed lexical and structural translation knowledge. The degree to which each method encodes lexical and/or structural translation knowledge is plotted in abstract form in Figure 1. We describe each of our synthetic data generation methods in Section 3.
Our first approach (§3.1) studies the extent to which the transfer benefits of regular pre-training can be realized when using obfuscated or encrypted data. Our obfuscated corpus is derived from real parallel data by mapping the original words to a vocabulary of 'nonsense' tokens. Experiments on six different language pairs show that obfuscated pretraining is able to capture much of the transferable knowledge: pre-training with an obfuscation ratio as high as 75% is still able to achieve BLEU scores close to those obtained by a model pre-trained on the original un-obfuscated parallel data.
Our second approach (§3.2) seeks to maximize the benefit that can be derived from a specific limited quantity of fine-tuning data. We do this by pretraining on newly constructed artificial sentence pairs synthesized directly from the fine-tuning corpus. The synthetic sentence pairs are created by concatenating randomly sampled aligned phrase pairs extracted from the fine-tuning corpus. Although the sentence-level fluency and grammaticality of sentences constructed using this technique are both quite poor, they do retain word- and phraselevel correspondences and local reordering information that can greatly improve translation quality and robustness compared to models trained using only the original fine-tuning data.
Our third approach (§3.3) explores the pretraining impact of important translation phenomena such as alignments and reordering. We pre-train models on procedurally generated synthetic parallel data that does not derive from any real human language corpus. We design three simple synthetic sequence-to-sequence translation tasks and associated data sets. Since our data is procedurally generated, problems of toxicity, attribution and copyright can be avoided. We evaluate the effectiveness of pre-training and transfer for our synthetic tasks in the context of low-resource NMT. Our results show that - to a surprising degree - the transfer benefits of pre-training can be realized even with purely synthetic tasks and data. Our analysis shows that structure, in the form of aligned sub-trees, matters in synthetic pre-training for NMT.
We empirically evaluate the impact of each of our proposed synthetic pre-training methods in lowresource MT settings (§4), followed by a discussion and analysis explaining our insights into what makes for a good pre-trained model (§5). We also consider the question of model toxicity. We measure the extent of hallucinated toxicity in each synthetic data generation method, showing that synthetic methods can result in substantially reduced toxicity compared to models pre-trained on real parallel corpora.
The primary **contributions** of our paper are as follows: (i) we propose several novel synthetic pre-training tasks, that encode varying degrees of structural and lexical knowledge, in order to gain insights into what makes for a good pre-trained NMT model; (ii) we conduct a comprehensive empirical evaluation of knowledge transfer in NMT from synthetic data pre-training, considering metrics of both translation quality and toxicity; and (iii)
we demonstrate that synthetic data is a promising stepping stone towards relieving the data burden in low-resource translation and building more accurate and trustworthy NMT systems.
## 2 Related Work
Transferring knowledge from pre-trained language models (Devlin et al., 2018; Raffel et al., 2019; Brown et al., 2020) is a common technique for ensuring robust NLP downstream task performance.
Early work by Zoph et al. (2016) explored transfer learning for NMT from a model pre-trained on a single language pair. More recently, methods that transfer from large-scale multilingual pre-trained models (Conneau et al., 2019; Liu et al., 2020; Goyal et al., 2022; NLLB Team et al., 2022) have achieved improved translation performance across a wide range of language pairs. Aji et al. (2020)
conducted a study on pre-training and transfer for low-resource NMT. These works depend on real human language for pre-training and therefore inherit data issues such as toxicity and bias. In contrast, our work studies NMT pre-training and transfer from synthetic data based on 'nonsense' words.
Only a few methods have addressed the problem of pre-training from synthetic data in NLP. Krishna et al. (2021) proposed pre-training for summarization using synthetic article and summary pairs derived from manually curated tasks and a vocabulary of nonsense symbols. Sinha et al. (2021) have shown that masked language model pre-training with limited word-order information can be almost as effective as regular pre-training. Chiang and Lee (2020, 2021) show that non-human language data and artificial datasets (e.g. nested sequences of parentheses), can still demonstrate knowledge transfer to downstream NLP tasks. Wu et al. (2022)
compare the effect of pre-training on many simple synthetic tasks. Our work in this paper represents the first empirical evaluation of synthetic pre-training for neural machine translation. To the best of our knowledge, our proposed synthetic tasks have not been explored in previous work.
The quality of a pre-trained model should not be measured purely by performance. We should also consider trustworthiness (He et al., 2022; Xu et al., 2022; He et al., 2021). Recent works have noted that translation systems pre-trained on webscale corpora are prone to produce toxic (Costajussà et al., 2022) or biased outputs (Prates et al.,
2020; Cho et al., 2021; Costa-jussà et al., 2020), and/or present privacy issues (Prates et al., 2020; Kamocki and O'Regan, 2016), which reduces user trustworthiness. Bias mitigation for NMT has been well-investigated while privacy and toxicity issues for translation are still not extensively explored
(Costa-jussà et al., 2022). Wang et al. (2021) propose federated neural machine translation to protect privacy such as commercial leakage or copyright. (Costa-jussà et al., 2022) mitigate toxicity by filtering training data that matches pre-defined multilingual toxic word lists.
## 3 Synthetic Pre-Training For Nmt
Pre-training followed by fine-tuning is a common approach to training robust NMT models (Conneau et al., 2019; Liu et al., 2020). Our motivation is to understand the extent to which the transfer benefits of pre-training can be replicated using synthetic tasks and data. In this section, we describe three approaches to the programmatic generation of synthetic data: (i) pre-training with obfuscated parallel data that implicitly preserves certain language properties such as distributional frequencies,
(ii) pre-training with synthetic data created by concatenating aligned phrases, and (iii) pre-training with synthetic tasks designed to encourage transfer learning of important translation properties such as long-distance reordering.
## 3.1 **Pre-Training On Obfuscated Parallel Data**
In order to gain insight into what makes a good pre-trained model, we design an obfuscated pretraining experiment in which the model learns to translate obfuscated source sequences to obfuscated target sequences. The synthetic training data for this experiment is created by obfuscating words in the original parallel data. We define separate 1-to-1 nonsense token vocabulary mappings for the set of all words that occur in the source and target sides of the data: each source word si and target word tj has a corresponding obfuscated nonsense source token Osi and target token Otj
. The synthetic pre-training corpus is created by replacing, with probability R, each source and target word with its corresponding obfuscated nonsense token.
R thus determines the proportion of obfuscated tokens, allowing us to evaluate the extent to which pre-training knowledge transfer occurs with different obfuscation ratios. This method of obfuscation can be viewed as a trivial form of encrypted training. Although the original word identities are obscured, a great deal of useful information such as distributional frequencies, word order, dependency relations, alignments, and grammatical structure remain implicit in the obfuscated data. An example German-English parallel sentence pair and obfuscations at R = 0.25 and R = 1.00 (i.e. all tokens obfuscated) are shown below:
\begin{tabular}{|l|l|l|l|l|l|} \hline $R=0.00$ & $\mathrm{ssc}$ & Meine zweite Bemerkung ist etwas ernsthaftter. \\ \hline $R=0.25$ & $\mathrm{ssc}$ & $\mathrm{wfnzc}$ & $\mathrm{smeit}$ & $\mathrm{Benefkung}$ & $\mathrm{ist}$ & $\mathrm{etwas}$ & $\mathrm{ernsthatftter}$ \\ \hline $R=1.00$ & $\mathrm{ssc}$ & $\mathrm{wfnzc}$ & $\mathrm{smeit}$ & $\mathrm{smeit}$ & $\mathrm{smeit}$ & $\mathrm{smeit}$ & $\mathrm{smeit}$ \\ \hline $R=1.00$ & $\mathrm{ssc}$ & $\mathrm{wfnzc}$ & $\mathrm{smeit}$ & $\mathrm{smeit}$ & $\mathrm{smeit}$ & $\mathrm{smeit}$ & $\mathrm{smeit}$ \\ \hline \end
trg UKVFB IJODB XRWOB SZEIA AHBNB LATAA MCSDA ETFJA
## 3.2 Pre-Training On Concatenated Phrases
In this section, we propose pre-training an NMT
model with synthetic parallel data formed by concatenating aligned phrases. The main advantage of aligned phrases is that they are extracted from real parallel data and thus encode both lexical and structural translation knowledge. Lexical knowledge is defined by the word- and phrase-level correspondences between the source and target language.
Structural knowledge, encoded by local reordering within aligned phrases, can also be leveraged.
We first extract a collection of aligned phrases P
using the standard recipe implemented in the Moses SMT Toolkit (Koehn et al., 2007). The accuracy of the aligned phrases depends on the size and quality of the parallel data: we target low-resource MT and assume there is only a limited quantity of parallel data available. We generate synthetic parallel sentence pairs by first sampling a normally distributed phrase length P. We sample each phrase position p = 1 *. . . P* uniformly at random from P. The source and target sentences thus consist of concatenated source and target phrases. The word order within each sampled phrase is fluent and local reordering may also be captured. The boundaries between phrases, however, typically do not respect natural word order or grammar. In spite of these limitations, we show in Section 4.3 that this simple method of data augmentation can significantly improve the quality of an NMT model when training data is limited. An example Indonesian-to-English synthetic sentence pair, with phrase boundaries indicated by parentheses, is shown below:
$$\begin{array}{r l}{\operatorname{src}}&{{}{\left\lceil\operatorname{sejak}\ W\right\rceil}\\ {}}&{{}\left\lceil5\theta\ j\theta\right\rceil}\\ {\operatorname{trg}}&{{}{\left\lceil\operatorname{from}\ W\right\rceil}}\\ {}&{{}\left\lceil5\theta\ m\right\rceil}\end{array}$$
src[sejak Wright] [sambil seringkali] [kami]
[50 juta mengingat]
trg [from Wright] [in most times] [we]
[50 millions as]
## 3.3 **Pre-Training On Synthetic Tasks And Data**
In this section, we define three completely synthetic task variants that can be used for NMT pre-training:
(1) the identity operation, (2) case-mapping, and
(3) permuted binary trees. All three tasks are based on a procedural data generation model and can thus be used to generate arbitrary quantities of synthetic data. Procedural generation of synthetic parallel sentence pairs allows for complete control over the alignments, length distribution, token frequency distribution, and level of noise in the data.
All three synthetic tasks are based on a 1-to-1 paired dictionary of source and target synthetic tokens: S for source and T for target. We define a pairwise mapping between the two vocabularies such that each synthetic source token Siis paired with a corresponding synthetic target token Ti for each i ∈ 1 *. . . N*, where N is the size of the paired vocabulary. In the examples below, the source vocabulary consists of all 263 = 17576 three-character synthetic tokens that can be created using the lowercase English letters {*a, . . . , z*}.
## 3.3.1 Synthetic Task 1: Identity Operation
The simplest of the pre-training tasks we consider is the identity operation, which has been previously proposed by Wu et al. (2022) as a synthetic task for language model pre-training. For this task, the source and target sentences are identical. We include it not because we believe it to be in any way a proxy for the true translation task, but instead to serve as the simplest possible baseline sequenceto-sequence synthetic task. We generate parallel sentence pairs by first sampling a sentence length L from the normal distribution. Each source token si for i = 1 *. . . L* is sampled uniformly from the source vocabulary S. The target sentence is simply a copy of the source:
## 3.3.2 Synthetic Task 2: Case-Mapping
Our second pre-training task defines a casemapping operation. Each synthetic parallel sentence pair consists of the same sequence of tokens but the source sentence is lowercase and the target sentence is uppercase. We also design an extension of this task that includes insertions and deletions.
Source and target tokens can be deleted with fixed probability ds (for source) and dt (for target). Random insertions and deletions are added to avoid having identical source and target lengths for every sentence pair, which might entrench the tendency of the model to mimic such behavior even at the fine-tuning stage where it is likely inappropriate.
From the perspective of the translation task, a sentence pair with a missing target token corresponds to a deletion, while a missing source token corresponds to an insertion. The following example shows a parallel sentence pair for the case-mapping task with fixed source and target deletion probabilities ds = dt = 0.15:
src qdo zwj iub uxj pls nsn igk mrz ojw trg QDO ZWJ IUB KWP UXJ PLS NSN IGK MRZ OJW
## 3.3.3 Synthetic Task 3: Permuted Trees
The third of our synthetic pre-training tasks is designed to reflect some aspects of the reordering process that occurs during natural language translation.
We first generate random sentences with normally distributed lengths and uniformly distributed synthetic tokens, as for tasks 1 and 2. We then induce an artificial binary tree over the source sentence by picking a random point at which to split the sentence, and recursively repeat this process for the left and right sub-strings. The resulting binary tree structure allows us to generate synthetic parallel data with reordering that preserves the alignment of contiguous source-to-target token spans.
The target tree is generated as a permutation of the source tree: we randomly swap left and right sub-trees with some fixed probability r. Generating synthetic sentence pairs in this way implies the existence of lexicalised synchronous context free grammar (SCFG) rules (Chiang, 2007) that could be used to generate the sentence pair as a parallel derivation. The example below shows a synthetic sentence pair generated using this method:
Parentheses indicating the tree structure are shown for clarity. During pre-training, however, only the source and target synthetic token sequences are actually seen by the model. In this example, the source token 'ktp' was reordered with respect to the sub-tree containing the tokens 'hme nmc'. Figure 2 shows the token-level alignment and reordering operations encoded by this parallel sentence pair.
## 4 Experimental Framework
We evalute our synthetic pre-training data generation methods for NMT using using both Englishcentric and non-English-centric language pairs.
## 4.1 Experiment Setup
English-Centric Language Pairs For Englishcentric translation directions, we use fine-tuning data sets similar to Aji et al. (2020). For GermanEnglish, we use the official data from the WMT
2014 News Translation Task. For MyanmarEnglish, the fine-tuning data consists of 18.0k parallel sentence pairs in the news domain collected for the Asian Language Treebank (ALT) project (Ding et al., 2018). We use the original train, dev and test split. For Indonesian-English, we use a filtered set of 24.6k parallel sentence pairs from the IDENTIC
v1.0 corpus (Larasati, 2012) which covers various genres. We randomly divide the original corpus into distinct train (90%), dev (5%), and test (5%)
sets. For Turkish-English, we use data from the WMT 2017 News Translation Task (Yepes et al.,
2017). The training set includes 207.7k parallel sentence pairs. We use the WMT newsdev2016 set for validation, and report results on newstest2017.
![4_image_0.png](4_image_0.png)
$$\mathbf{\partial}\mathbf{\partial}[\mathbf{\partial}]$$
Non-English-Centric Language Pairs For nonEnglish-centric directions, we simulate lowresource translation conditions by sampling data from OPUS NLP (Tiedemann, 2012). The nonEnglish-centric language pairs we evaluate are as follows: Indonesian-Myanmar, IndonesianTurkish, Indonesian-Tagalog, Myanmar-Turkish, Myanmar-Tagalog, Tagalog-Turkish, GermanIndonesian, and German-Myanmar. For each pair, we simulate low-resource conditions by creating fine-tuning sets of size 10k, 25k, 50k, and 100k via sampling from the set of all parallel corpora for that language pair on OPUS NLP. Minimal filtering is applied to our parallel data sets: we remove duplicates, discard sentences with extreme length ratios, and keep only sentence pairs for which the fasttext (Joulin et al., 2016) language ID matches the stated source and target.
Evaluation Following the evaluation setting of large-scale multilingual models such as FLORES101 (Goyal et al., 2022), we score our translation hypotheses using sentencepiece BLEU (Papineni et al., 2002) (spBLEU). This avoids the need for custom post-processing for individual languages with unusual scripts and/or complex morphology such as Burmese.
Model Training Strategy Our experiments consist of a pre-training stage followed by a finetuning stage. We use the transformer sequenceto-sequence 'base' model architecture (Vaswani et al., 2017) for all translation experiments. Since our goal is to gain insight into the relative importance of various aspects of synthetic pre-training, our baseline models are created by fine-tuning randomly initialized models using only the downstream task parallel data.
We use fairseq (Ott et al., 2019) to train our models with the Adam (Kingma and Ba, 2014) optimizer. We reset the learning rate scheduler and optimizer before starting the fine-tuning stage. Pretraining and fine-tuning continue until the BLEU
score on the validation set converges. Further implementation details can be found in Appendix B.
## 4.2 Pre-Training With Obfuscated Data
Following previous work that showed German-toEnglish to be a good pre-training direction for several language pairs (Aji et al., 2020), we also use German-to-English (de-en) for pre-training and randomly sample two million pairs from its training corpus to use as obfuscated parallel data. We
![5_image_0.png](5_image_0.png)
vary the obfuscation ratio R from 0% to 100% in 25% increments. After pre-training, we fine-tune the models on the real-world parallel training corpus (described in Section 4.1) for each downstream language pair. We also investigate the scaling effect of different fine-tuning set sizes and show the results in Appendix A.1.
We report spBLEU scores on the test set for each language pair in Figure 3. We find that, surprisingly, even when as much as 75% of the pretraining data is obfuscated, the models are still able to achieve high or even comparable spBLEU
scores to real-world pre-trained models (i.e., those with 0% obfuscation). Additionally, most of the models pre-trained on obfuscated data performed better than those trained from scratch on real-world fine-tuning data, even when the pre-training data was 100% obfuscated (e.g., 100% in id-en, my-en, and my-tl). This suggests that a small proportion of real-world data can provide the majority of the benefits of large-scale regular pre-training, implying a promising research direction for efficient pre-training or improving low-resource NMT.
## 4.3 Pre-Training With Phrase Concatenation
The translation decoding results in Table 1 show substantial transfer learning benefits from pretraining with 2m sentence pairs of synthetic data generated by concatenating uniformly sampled aligned phrase pairs (phrase-cat). Compared to a model with no pre-training, i.e. one that trains from random initialization using only the fine-tuning data (random-init), we observe large gains of up to +9.9 spBLEU for language pairs with less than 25k of fine-tuning data (my↔en and id↔en). The gains of +1.4 to +2.1 for tr↔en are smaller: this pair has more fine-tuning data (>200k pairs) so the improved coverage and robustness of synthetic pretraining is less critical for good performance. It is important to note that this method does not utilize any additional real parallel or monolingual data, but instead derives new data directly from the existing fine-tuning corpus. Our synthetic pre-training corpus, although unnatural at the sentence-level, contains many phrase-level alignments and reordering information which reinforces the translation knowledge captured by the model. Any destructive effect from presenting to the model during pre-training sentence pairs with unnatural word order or bad grammar, can be rectified in the fine-tuning stage by showing the model the original fluent source and target sentences.
## 4.4 Pre-Training With Synthetic Data
We pre-train transformer (Vaswani et al., 2017)
models using 2m sentence pairs of synthetic parallel data to match the data size used in our obfuscation experiments. We further explore the effect of scaling the synthetic pre-training data size in Appendix A.4. Separate synthetic training sets were generated for each of the three task variants described in Section 3.3. Additional sets of 4000 synthetic pairs were generated as validation data.
Each pre-trained model is subsequently fine-tuned with real parallel data for a specific language pair:
my↔en, id↔en, and tr↔en. In Table 1, we report sentencepiece BLEU (spBLEU) (Goyal et al.,
2022) scores for our three synthetic pre-training task variants. For comparison purposes, we also show the scores obtained without pre-training - i.e.
a randomly initialized model trained on only the fine-tuning data.
Our first observation is that synthetic pretraining with the identity operation task (§3.3.1)
does not perform well. For all three language pairs it is slightly worse than simply fine-tuning from a randomly initialized model. This is to be expected since the pre-training task is too crude: a simple copy operation from source to target with identical lengths. Pre-training with the case-mapping synthetic task (§3.3.2) and deletion probability ds = dt = 0 improves the scores, with gains of +1.0 to +5.0 spBLEU over the identity operation on our test set. Although the case-mapping pre-training task is still quite crude, it is able to beat fine-tuning from a randomly initialized model
| my-en | id-en | tr-en | en-my | en-id | en-tr | | | | | | | |
|------------|---------|---------|---------|---------|---------|------|--------|------|--------|------|--------|------|
| Test | Flores | Test | Flores | Test | Flores | Test | Flores | Test | Flores | Test | Flores | |
| scratch | 4.1 | 1.8 | 18.2 | 7.2 | 14.7 | 17.7 | 16.2 | 6.3 | 19.1 | 8.3 | 17.0 | 16.4 |
| identity | 3.2 | 1.1 | 16.8 | 7.6 | 12.4 | 13.8 | 12.7 | 4.5 | 18.1 | 9.7 | 13.8 | 13.5 |
| case-map | 6.7 | 1.6 | 21.8 | 12.1 | 13.4 | 15.1 | 16.4 | 6.0 | 22.9 | 13.8 | 15.6 | 15.2 |
| pb-trees | 11.4 | 2.5 | 23.1 | 12.2 | 14.4 | 16.9 | 18.9 | 7.0 | 23.8 | 14.4 | 16.6 | 16.3 |
| phrase-cat | 14.0 | 3.9 | 27.3 | 14.4 | 16.5 | 19.1 | 23.0 | 8.6 | 28.1 | 17.0 | 18.4 | 18.5 |
for both Myanmar-to-English and Indonesian-toEnglish. Our best performing synthetic task is pbtrees (§3.3.3) with a node reordering probability r = 0.15. This model shows that transfer learning from synthetic pre-training to real-world tasks can be substantial, with scores as high as +7.3 spBLEU over the baseline for Myanmar-to-English and +4.9 for Indonesian-to-English. We do not see gains for Turkish-to-English for any of our purely synthetic pre-training tasks. The fine-tuning data for this language pair is much larger than that of the other language pairs. As the fine-tuning data size increases, the benefits of transfer learning from pre-training diminish.
We also evaluate the strongest of our three purely synthetic pre-training tasks, pb-trees, on additional non-English-centric language pairs. Table 8 in Appendix A.7 shows spBLEU decoding results for these additional pairs. We compare performance over a range of different fine-tuning set sizes. On both OPUS-Test and FLORES-devtest, and for the majority of fine-tuning set sizes, synthetic pre-training with the pb-trees task typically outperforms fine-tuning from a randomly initialized baseline.
## 5 Analysis And Discussion 5.1 Synthetic Knowledge Transfer
In this section, we discuss what kind of useful representations are actually learned by the model when pre-training with purely synthetic tasks and data.
Our empirical study has shown that pre-training on synthetic data can result in improved translation quality after fine-tuning for a specific language pair.
Even though the pre-training data is entirely synthetic, the model must have successfully learned representations and structures relevant for translation that can be leveraged via transfer learning to the downstream task.
In Table 2, we show the word piece overlap between our tokenized synthetic pre-training corpus and the real human language corpus for three finetuning language pairs. Our vocabulary consists of 263 paired lowercase-uppercase synthetic tokens, but after tokenization the number of unique word pieces is much lower. For example, there are only 3,541 unique source and 2,405 unique target word pieces after tokenizing a corpus of 2M synthetic parallel sentence pairs. The fine-tuning data, although much smaller, has a far greater token diversity for English, Indonesian, and Turkish. Myanmar is the exception: it is aggressively segmented by the XLMR sentencepiece model which results in far fewer unique word pieces.
We compute the intersection between the set of word pieces in the synthetic pre-training data and those in the fine-tuning data in the last column of Table 2. We observe low word piece overlap. For example, only 35 of the 3541 word pieces that occur in the source side of the synthetic corpus also occur in the source side of the my-en corpus. This number is low because the Myanmar script is so different from English. But overlap remains low even for languages such as Indonesian and Turkish which have similar alphabets to English. Low levels of overlap were also observed in our obfuscated pre-training experiments (Table 6). The low word piece overlap means that most of the word embeddings learned during pre-training have little relevance to the fine-tuning or inference stages. We conclude that any transfer learning benefit exhibited by the model on the downstream task must be captured in the inner layers of the transformer.
## 5.2 Lexical And Structural Knowledge
The results in Table 1 show phrase-cat to be an effective pre-training strategy for low-resource NMT.
Both lexical and structural knowledge is captured in the aligned phrases. However, since the phrases are sampled from the uniform distribution, long-
Pair PT/FT |VP T | |VF T | Overlap
my-ensrc: lc/my 3,541 1,598 35
trg: uc/en 2,405 18,514 740
id-en src: lc/id 3,541 18,095 1,377
trg: uc/en 2,405 18,167 740
tr-en src: lc/tr 3,541 24,616 1,938
trg: uc/en 2,405 26,236 1,358
distance structure is ignored and only local reordering information is captured. The pb-trees method also allows us to encode structural knowledge into our synthetic data since it is possible to generate sentence pairs that reorder sub-trees over long distances. Comparing the effectiveness of both methods shows that surprising gains in translation quality are possible even for synthetic data generation methods such as phrase-cat that encode only very local structural knowledge. This insight, that it is mainly collocations (especially, for NMT, parallel collocations) agrees with the conclusions about the relative lack of importance of word order to LM
pre-training in Sinha et al. (2021).
## 5.3 Translation Quality Vs. Toxicity
To evaluate model toxicity, we consider catastrophic mistranslations (Costa-jussà et al., 2022).
These errors occur when a model hallucinates toxic terms in the translated text, even though no such terms occur in the source text. Following the toxicity measurement setup of Goyal et al. (2022), we use the FLORES Toxicity-2001 word lists to calculate the toxicity rate of translations produced by a model. The lists cover 200 languages and contain frequently used profanities, insults, and hate speech terms. We consider a sentence toxic if it contains words that match entries in these lists. The toxicity rate for each model is defined as the proportion of sentences with hallucinated toxicity in translations of the test set and a larger set of 100k monolingual sentences randomly sampled from CC-100 (Wenzek et al., 2020; Conneau et al., 2019). We compare BLEU scores and toxicity rates for various models including current state-of-the-art large pre-trained multilingual translation models in Table 3.
Results and Analysis We first observe that models pre-trained on synthetic data obtain signifi1http://github.com/facebookresearch/flores/
tree/main/toxicity cantly higher BLEU scores than baselines trained from scratch using only the fine-tuning data. This confirms that our proposed synthetic tasks indeed capture useful knowledge that can be applied through transfer learning to low-resource NMT
tasks. When compared to the multilingual translation models FLORES-101 (615M parameters) and M2M-100 (1.2B parameters), we note that models pre-trained on synthetic data obtain comparable performance for languages my-en and even outperform multilingual models by a large margin on de-my, id-en, and my-tl, though with inferior translation quality on de-id. It should be noted that some of these language pairs represent zero-shot directions for M2M-100. We compare our synthetic methods with the standard NMT data augmentation technique of back-translation in Appendix A.3.
While these results are quite promising, we note that our goal in this paper is not to surpass the state-of-the-art in translation quality achieved by large-scale massively multilingual models on lowresource NMT. Instead, we seek to further understand which properties of pre-training based on synthetic tasks and data - along the structural and lexical knowledge axes of Figure 1 - enhance transfer learning performance, while minimizing toxicity and other data issues inherent in models that rely on large-scale pre-training using real data.
Analyzing toxicity, we observe the presence of catastrophic mistranslations in all models, but less frequently when training from scratch in most cases. This is because the low-resource fine-tuning data contains very little toxic content. On the other hand, as noted above, the BLEU scores when training models from scratch are very low. We see that the FLORES-101 and M2M-100 models both exhibit toxicity, since they were pre-trained on realworld corpora that can include toxic content. Our results show that synthetic pre-training can produce models with comparable BLEU scores while significantly reducing catastrophic mistranslations. We observe that parallel data generated from permuted binary trees has the lowest toxicity among the three synthetic pre-training methods, since it relies on purely synthetic data. This may indicate that patterns in the data can still trigger toxic terms, even after the words have been obfuscated or phrases have been shuffled. We include additional toxicity results and analysis in Appendix A.5.
| Model | de-id | de-my | id-en | my-en | my-tl | | | | | | |
|--------------------|------------|---------|----------|---------|----------|------|----------|------|----------|------|------|
| BLEU | Toxicity | BLEU | Toxicity | BLEU | Toxicity | BLEU | Toxicity | BLEU | Toxicity | | |
| Baseline | scratch | 6.6 | 0.68 | 15.2 | 0.01 | 18.2 | 0.05 | 4.1 | 0.02 | 16.4 | 0.04 |
| Large Pretrained | M2M-100 | 32.9 | 0.68 | 9.1 | 0.03 | 30.2 | 0.28 | 1.8 | 0.15 | 14.2 | 0.06 |
| Multilingual Model | FLORES-101 | 30.0 | 0.63 | 12.3 | 0.03 | 26.0 | 0.23 | 4.6 | 0.18 | 12.8 | 0.08 |
| obfuscation | 18.2 | 0.34 | 22.4 | 0.01 | 29.0 | 0.11 | 16.4 | 0.08 | 23.6 | 0.04 | |
| Synthetic | phrase-cat | 14.7 | 0.50 | 19.6 | 0.02 | 27.3 | 0.10 | 14.0 | 0.02 | 22.5 | 0.03 |
| Pre-training | pb-trees | 11.7 | 0.45 | 12.3 | 0.01 | 23.1 | 0.10 | 11.4 | 0.01 | 20.7 | 0.02 |
## 6 Conclusion
Our study of synthetic pre-training tasks for NMT
showed that pre-training benefits can still be achieved even when using synthetic or obfuscated data. Additionally, we have shown that synthetic data has the potential to reduce model toxicity compared to models trained on web-scale crawled corpora. Our research provides insights into what types of knowledge transfer make for a good pretrained model. We believe that synthetic data augmentation techniques based on synthetic tasks and procedurally generated data are a promising solution for addressing pre-training data concerns, and can lead to efficient, accurate, and trustworthy NMT. In future work, we plan to further investigate synthetic pre-training by exploring more advanced data generation models and directly optimizing the parameters for specific downstream fine-tuning tasks. Increasing the effectiveness of synthetic data at different data scales is also worthy of further exploration.
## 7 Limitations
Our work seeks to gain insight into what pretraining knowledge is transferred and useful for downstream fine-tuning in NMT using synthetic tasks and data. We note that changes in the data generation methods do require re-running the pretraining stage, which is computationally expensive compared to the fine-tuning stage.
Our current synthetic data generation methods are somewhat crude. Although they are designed to encode varying degrees of lexical and structural translation knowledge, they do so in a rather simplistic way. For example, sampling phrases from the normal distribution ignores distributional frequencies which represent information that is likely useful for the synthetic data generation task. In this paper we have presented some interesting initial findings regarding the suitability of synthetic pre-training for NMT. We plan to explore more sophisticated data generation models in future work.
We acknowledge that synthetic pre-training is unlikely to surpass the quality of real-world massively multilingual pre-trained models in performance, especially if synthetic data is the only data used for pre-training. However, good performance can probably be achieved by combining synthetic pretraining and real-data pre-training. Of course, this risks exposing the model to toxic and sensitive or private content. Therefore, concerns of both model quality and data quality should be considered when evaluating the impact and benefits of synthetic pretraining. We view synthetic pre-training as a complimentary approach to finding an optimal balance rather than as a replacement for previous state-ofthe-art NMT pre-training methods.
## 8 Ethics Statement
All of the training data used in our experiments are official releases of publicly available benchmarks.
In addition, the toxic word lists used to measure toxicity are obtained from the public FLORES repository which requires a password to access, thus reducing the risk of hacking by a malicious user or adversarial bot. In addition, as for the issue of hallucinated toxicity discussed previously, we note that our work also has the potential to address other problematic translation behaviors, such as hallucinated bias.
## 9 Acknowledgements
This material is based upon work supported by the Defense Advanced Research Projects Agency under Contract No. FA8750-19-C-1001. Disclaimer:
Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the Defense Advanced Research Projects Agency. Zexue He is supported by an IBM Ph.D.
Fellowship and is independent of the Defense Advanced Research Projects Agency.
cross-lingual representation learning at scale. *CoRR*,
abs/1911.02116.
## References
Željko Agic and Ivan Vuli ´ c. 2019. ´ JW300: A widecoverage parallel corpus for low-resource languages.
In *Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics*, pages 3204–
3210, Florence, Italy. Association for Computational Linguistics.
Roee Aharoni, Melvin Johnson, and Orhan Firat. 2019.
Massively multilingual neural machine translation. In *Proceedings of the 2019 Conference of the North* American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 3874–3884, Minneapolis, Minnesota. Association for Computational Linguistics.
Alham Fikri Aji, Nikolay Bogoychev, Kenneth Heafield, and Rico Sennrich. 2020. In neural machine translation, what does transfer learning transfer? In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*, pages 7701–
7710, Online. Association for Computational Linguistics.
Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei.
2020. Language models are few-shot learners. *CoRR*,
abs/2005.14165.
Cheng-Han Chiang and Hung-yi Lee. 2020. Pretraining a language model without human language.
arXiv preprint arXiv:2012.11995.
David Chiang. 2007. Hierarchical phrase-based translation. *computational linguistics*, 33(2):201–228.
David Cheng-Han Chiang and Hung-yi Lee. 2021.
On the transferability of pre-trained language models: A study from artificial datasets. *CoRR*,
abs/2109.03537.
Won Ik Cho, Jiwon Kim, Jaeyeong Yang, and Nam Soo Kim. 2021. Towards cross-lingual generalization of translation gender bias. In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, pages 449–457.
Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Unsupervised Marta R Costa-jussà, James Cross, Onur Çelebi, Maha Elbayad, Kenneth Heafield, Kevin Heffernan, Elahe Kalbassi, Janice Lam, Daniel Licht, Jean Maillard, et al. 2022. No language left behind: Scaling human-centered machine translation. *arXiv preprint* arXiv:2207.04672.
Marta R Costa-jussà, Carlos Escolano, Christine Basta, Javier Ferrando, Roser Batlle, and Ksenia Kharitonova. 2020. Gender bias in multilingual neural machine translation: The architecture matters.
arXiv preprint arXiv:2012.13176.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: pre-training of deep bidirectional transformers for language understanding. *CoRR*, abs/1810.04805.
Chenchen Ding, Masao Utiyama, and Eiichiro Sumita.
2018. Nova: A feasible and flexible annotation system for joint tokenization and part-of-speech tagging.
ACM Transactions on Asian and Low-Resource Language Information Processing (TALLIP), 18(2):1–18.
Angela Fan, Shruti Bhosale, Holger Schwenk, Zhiyi Ma, Ahmed El-Kishky, Siddharth Goyal, Mandeep Baines, Onur Celebi, Guillaume Wenzek, Vishrav Chaudhary, et al. 2021. Beyond english-centric multilingual machine translation. *J. Mach. Learn. Res.*,
22(107):1–48.
Naman Goyal, Cynthia Gao, Vishrav Chaudhary, PengJen Chen, Guillaume Wenzek, Da Ju, Sanjana Krishnan, Marc'Aurelio Ranzato, Francisco Guzman, and Angela Fan. 2022. The flores-101 evaluation benchmark for low-resource and multilingual machine translation. *Transactions of the Association for* Computational Linguistics, 10:522–538.
Zexue He, Bodhisattwa Prasad Majumder, and Julian McAuley. 2021. Detect and perturb: Neutral rewriting of biased and sensitive text via gradient-based decoding. In *Findings of the Association for Computational Linguistics: EMNLP 2021*, pages 4173–
4181, Punta Cana, Dominican Republic. Association for Computational Linguistics.
Zexue He, Yu Wang, Julian McAuley, and Bodhisattwa Prasad Majumder. 2022. Controlling bias exposure for fair interpretable predictions. In Findings of the Association for Computational Linguistics:
EMNLP 2022, pages 5854–5866, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
Armand Joulin, Edouard Grave, Piotr Bojanowski, Matthijs Douze, Hérve Jégou, and Tomas Mikolov.
2016. Fasttext.zip: Compressing text classification models. *arXiv preprint arXiv:1612.03651*.
Paweł Kamocki and Jim O'Regan. 2016. Privacy issues in online machine translation services-european perspective. In *Proceedings of the Tenth International*
Conference on Language Resources and Evaluation
(LREC'16), pages 4458–4462.
Diederik P Kingma and Jimmy Ba. 2014. Adam: A
method for stochastic optimization. *arXiv preprint* arXiv:1412.6980.
Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, et al. 2007. Moses: Open source toolkit for statistical machine translation. In *Proceedings of the 45th annual meeting of the association for computational linguistics companion volume* proceedings of the demo and poster sessions, pages 177–180.
Julia Kreutzer, Isaac Caswell, Lisa Wang, Ahsan Wahab, Daan van Esch, Nasanbayar Ulzii-Orshikh, Allahsera Tapo, Nishant Subramani, Artem Sokolov, Claytone Sikasote, Monang Setyawan, Supheakmungkol Sarin, Sokhar Samb, Benoît Sagot, Clara Rivera, Annette Rios, Isabel Papadimitriou, Salomey Osei, Pedro Ortiz Suarez, Iroro Orife, Kelechi Ogueji, Andre Niyongabo Rubungo, Toan Q. Nguyen, Mathias Müller, André Müller, Shamsuddeen Hassan Muhammad, Nanda Muhammad, Ayanda Mnyakeni, Jamshidbek Mirzakhalov, Tapiwanashe Matangira, Colin Leong, Nze Lawson, Sneha Kudugunta, Yacine Jernite, Mathias Jenny, Orhan Firat, Bonaventure F. P. Dossou, Sakhile Dlamini, Nisansa de Silva, Sakine Çabuk Ballı, Stella Biderman, Alessia Battisti, Ahmed Baruwa, Ankur Bapna, Pallavi Baljekar, Israel Abebe Azime, Ayodele Awokoya, Duygu Ataman, Orevaoghene Ahia, Oghenefego Ahia, Sweta Agrawal, and Mofetoluwa Adeyemi. 2022. Quality at a Glance: An Audit of Web-Crawled Multilingual Datasets. *Transactions of the Association for Computational Linguistics*, 10:50–72.
Kundan Krishna, Jeffrey Bigham, and Zachary C
Lipton. 2021. Does pretraining for summarization require knowledge transfer? *arXiv preprint* arXiv:2109.04953.
Septina Dian Larasati. 2012. Identic corpus: Morphologically enriched indonesian-english parallel corpus.
In *LREC*, pages 902–906.
Klas Leino, Emily Black, Matt Fredrikson, Shayak Sen, and Anupam Datta. 2018. Feature-wise bias amplification. In International Conference on Learning Representations.
Yinhan Liu, Jiatao Gu, Naman Goyal, Xian Li, Sergey Edunov, Marjan Ghazvininejad, Mike Lewis, and Luke Zettlemoyer. 2020. Multilingual denoising pre-training for neural machine translation. *CoRR*,
abs/2001.08210.
NLLB Team, Marta R. Costa-jussà, James Cross, Onur Çelebi, Maha Elbayad, Kenneth Heafield, Kevin Heffernan, Elahe Kalbassi, Janice Lam, Daniel Licht, Jean Maillard, Anna Sun, Skyler Wang, Guillaume Wenzek, Al Youngblood, Bapi Akula, Loic Barrault, Gabriel Mejia Gonzalez, Prangthip Hansanti,
John Hoffman, Semarley Jarrett, Kaushik Ram Sadagopan, Dirk Rowe, Shannon Spruit, Chau Tran, Pierre Andrews, Necip Fazil Ayan, Shruti Bhosale, Sergey Edunov, Angela Fan, Cynthia Gao, Vedanuj Goswami, Francisco Guzmán, Philipp Koehn, Alexandre Mourachko, Christophe Ropers, Safiyyah Saleem, Holger Schwenk, and Jeff Wang.
2022. No language left behind: Scaling humancentered machine translation.
Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, and Michael Auli. 2019. fairseq: A fast, extensible toolkit for sequence modeling. *CoRR*, abs/1904.01038.
Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 311–318, Philadelphia, Pennsylvania, USA. Association for Computational Linguistics.
Matt Post. 2018. A call for clarity in reporting BLEU
scores. In *Proceedings of the Third Conference on* Machine Translation: Research Papers, pages 186–
191, Brussels, Belgium. Association for Computational Linguistics.
Marcelo OR Prates, Pedro H Avelar, and Luís C Lamb.
2020. Assessing gender bias in machine translation: a case study with google translate. *Neural Computing* and Applications, 32(10):6363–6381.
Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2019. Exploring the limits of transfer learning with a unified text-to-text transformer. *CoRR*, abs/1910.10683.
Rico Sennrich, Barry Haddow, and Alexandra Birch.
2016. Improving neural machine translation models with monolingual data. In *Proceedings of the 54th* Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 86–96.
Koustuv Sinha, Robin Jia, Dieuwke Hupkes, Joelle Pineau, Adina Williams, and Douwe Kiela. 2021.
Masked language modeling and the distributional hypothesis: Order word matters pre-training for little.
arXiv preprint arXiv:2104.06644.
Jörg Tiedemann. 2012. Parallel data, tools and interfaces in opus. In *Lrec*, volume 2012, pages 2214–
2218. Citeseer.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. *CoRR*, abs/1706.03762.
Jianzong Wang, Zhangcheng Huang, Lingwei Kong, Denghao Li, and Jing Xiao. 2021. Modeling without sharing privacy: Federated neural machine translation. In *International Conference on Web Information Systems Engineering*, pages 216–223. Springer.
Guillaume Wenzek, Marie-Anne Lachaux, Alexis Conneau, Vishrav Chaudhary, Francisco Guzmán, Armand Joulin, and Edouard Grave. 2020. CCNet:
Extracting high quality monolingual datasets from web crawl data. In *Proceedings of the Twelfth Language Resources and Evaluation Conference*, pages 4003–4012, Marseille, France. European Language Resources Association.
Yuhuai Wu, Felix Li, and Percy Liang. 2022. Insights into pre-training via simpler synthetic tasks. *arXiv* preprint arXiv:2206.10139.
Canwen Xu, Zexue He, Zhankui He, and Julian McAuley. 2022. Leashing the inner demons: Selfdetoxification for language models. In *Proceedings* of the AAAI Conference on Artificial Intelligence, volume 36, pages 11530–11537.
Antonio Jimeno Yepes, Aurélie Névéol, Mariana Neves, Karin Verspoor, Ondˇrej Bojar, Arthur Boyer, Cristian Grozea, Barry Haddow, Madeleine Kittner, Yvonne Lichtblau, et al. 2017. Findings of the wmt 2017 biomedical translation shared task. In Proceedings of the Second Conference on Machine Translation, pages 234–247.
Barret Zoph, Deniz Yuret, Jonathan May, and Kevin Knight. 2016. Transfer learning for low-resource neural machine translation. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1568–1575, Austin, Texas.
Association for Computational Linguistics.
## A Supplementary Results A.1 **Scaling Effect Of Obfuscated Pre-Training**
We first evaluate the performance of regular pretraining and fine-tuning with various quantities of real-world German-to-English data. The results in Figure 4 show that the highest BLEU scores are obtained by using regular real-world parallel data (i.e.
0% obfuscation). We compare vs. models trained solely on the fine-tuning data ('Scratch'): the resulting BLEU scores are quite low when the training data size is small, confirming the importance and benefits of pre-training for NMT.
## A.2 Flores Obfuscated Pre-Training Results
We show additional decoding results for the matched (with source and target fine-tuning languages that are the same as the pre-training languages: de-en) vs. unmatched (with source or target fine-tuning languages that differ from the pre-training languages: de-id, de-my, id-en, my-en, my-tl) conditions of obfuscated pretraining on the FLORES devtest set in Figure 5.
## A.3 Back-Translation Comparison
Back-translation (Sennrich et al., 2016) is an effective technique for improving the quality of machine translation. It works by creating new parallel sentence pairs by translating target-side monolingual data into the source language using an inverse direction MT system. The new sentence pairs consist of a (possibly noisy) back-translated source sentence paired with a high-quality target sentence. We compare our synthetic training methods to an NMT
![12_image_0.png](12_image_0.png)
| my-en | en-my | | | |
|------------------|---------|--------|------|--------|
| Model | Test | Flores | Test | Flores |
| scratch | 4.1 | 1.8 | 16.2 | 6.3 |
| back-translation | 10.7 | 2.0 | 11.1 | 4.1 |
| phase-cat | 14.0 | 3.9 | 23.0 | 8.6 |
| pb-trees | 11.4 | 2.5 | 18.9 | 7.0 |
system that has been trained on an augmented data set that includes back-translated parallel data. We use our baseline models for my-en and en-my to produce the back-translated sentences. For each direction my-en and en-my, we generate an additional set of 2m back-translated sentences. The results are shown in Table 4. We note that backtranslation provides only limited improvements vs.
the baseline model trained from scratch for my-en and actually hurts for en-my. This is because backtranslation requires a good quality model in the target-to-source direction in order to produce accurate and relevant translations. The my-en baseline model is not of sufficiently high quality to produce useful back-translations. Our synthetic methods significantly outperform back-translation for both translation directions, confirming our expectation about the limitations of back-translation in low-resource conditions, and further illustrating the potential of our proposed synthetic approaches.
## A.4 Synthetic Pre-Training Data Scaling
Figure 6 shows the data scaling behavior of the pb-trees and phrase-cat synthetic pre-training methods. We pre-train each model with proper subsets of varying sizes sampled from the full 2m pairs used in the rest of our experiments. For pb-trees, the scaling is mostly flat. The BLEU scores, while consistently higher than the baseline (which uses no pre-training at all), increase only gradually with additional synthetic training data. The BLEU gains over the baseline are therefore a result of priming the model for the task of translation, rather than learning any useful real-world lexical relationships between the source and target languages. For phrase-cat, the data scaling curve is much more pronounced. For all three tasks, we observe steadily increasing BLEU scores with larger synthetic training set sizes, reaching a plateau at around 1m pairs.
The phrase-cat method benefits from additional samples and combinations of real phrase pairs since
![13_image_1.png](13_image_1.png)
![13_image_0.png](13_image_0.png)
the synthetic pairs provide additional coverage of possible word orders and translation relationships that can aid the subsequent fine-tuning and decoding of the testset.
## A.5 Further Analysis Of Toxicity
We further analyze the toxicity of our models by comparing the toxicity rate of source language sentences and their translations. Firstly, we test de-en translation systems with obfuscated pre-training on WMT test, as shown in Table 5. We observe that training with real-world data (i.e. obfuscation ratio R = 0%) generates translations that contain toxic terms more frequently than they occur in the source. This indicates a toxicity amplification effect, a problem highlighted previously for toxicity
(Costa-jussà et al., 2022) and bias (Leino et al.,
2018). Pre-training with obfuscated data, however, is a promising way of mitigating this phenomenon, as shown by the big reduction in toxicity rate as the obfuscation ratio is increased. We observe a similar pattern for CC-100 data as well. The sentences in the CC-100 corpus are more toxic than those in the WMT testset (0.57% > 0.43%).
## A.6 Word-Piece Overlap Statistics For Obfuscated Pre-Training
Similar to Section 5.1, we also report the token overlap between completely encrypted pre-training data (both source and target corpus) and real-world fine-tuning data, on de-en as shown in Table 5 and other language directions id-en, my-tn, and tr-en in Table 7. In de-en translation, we notice that the overlap is just 0.08% on the source language and 0.04% on the target language, with the largest size of the fine-tuning set (1M). On low-resource language pairs, we can see there is almost no overlap between pre-training and fine-tuning on both source and target sides, as shown in Table 7. This strong evidence supports the conclusion mentioned in Section 5.1 - most of the representations in the first layers are not touched during pre-training, and the benefits from pre-training may come from the inner layers which capture the transferable highlevel knowledge for downstream tasks.
## A.7 Synthetic Pre-Training: Additional Language Pairs
Table 8 shows translation decoding results (spBLEU) for additional non-English-centric language pairs. We compare synthetic pre-training on permuted binary trees vs. fine-tuning from a randomly initialized model as a function of the finetuning set size. Cells marked 'n/a' indicate there was insufficient parallel data to create a fine-tuning set of the specified size.
## B Implementation Details
This section describes implementation details for facilitating the reproduction of our work.
## B.1 Model Architectures
All translation models described in our experiments are based on the sequence-to-sequence transformer
'base' architecture (Vaswani et al., 2017) as implemented in fairseq (Ott et al., 2019). The models have six encoder layers, six decoder layers, and eight attention heads. The word embedding size is 512, and the feed-forward layers have 2048 dimensions. All BLEU scores are computed using SacreBLEU (Post, 2018) with sentencepiece tokenization (Goyal et al., 2022). Our SacreBLEU
| Obfuscation Ratio | | | | | | |
|----------------------|------|------|------|------|------|-------------------|
| Fine-Tuning Set Size | 0% | 25% | 50% | 75% | 100% | |
| 20k | 0.57 | 0.40 | 0.43 | 0.37 | 0.00 | |
| 50k | 0.43 | 0.53 | 0.47 | 0.40 | 0.03 | |
| 100k | 0.53 | 0.33 | 0.40 | 0.27 | 0.07 | |
| 500k | 0.50 | 0.50 | 0.33 | 0.33 | 0.40 | |
| 1M | 0.57 | 0.47 | 0.40 | 0.37 | 0.37 | Obfuscation Ratio |
| Fine-Tuning Set Size | 0% | 25% | 50% | 75% | 100% | |
| 20k | 0.37 | 0.33 | 0.33 | 0.21 | 0.01 | |
| 50k | 0.37 | 0.35 | 0.37 | 0.26 | 0.05 | |
| 100k | 0.43 | 0.32 | 0.30 | 0.23 | 0.17 | |
| 500k | 0.36 | 0.38 | 0.36 | 0.32 | 0.27 | |
| 1M | 0.38 | 0.45 | 0.36 | 0.35 | 0.33 | |
![14_image_0.png](14_image_0.png)
pre-training and fine-tuning stages. We choose the best model during training by maximizing the tokenized BLEU score on the validation set. For both pre-training and fine-tuning, we allow training to continue until the BLEU score has fully converged.
scoring signature2indicates that both source and reference are sentencepiece tokenized prior to scoring.
## B.2 Hyper-Parameters And Training Configuration
Table 9 shows the hyper-parameters and training settings used for our experiments. We found different warm-up schedules were appropriate for the 2BLEU+case.mixed+numrefs.1+smooth.exp
+tok.spm+version.1.5.1
| Model | FT size | PT/FT Language | |VP T | | |VF T | | Overlap |
|-------------------------|---------------------|---------------------|-----------|-----------|-----------|
| 20k | src: nonsense-de/de | 1,289,374 | 77,284 | 119 | |
| trg: nonsense-en/en | 680,221 | 56,339 | 15 | | |
| 50k | src: nonsense-de/de | 1,289,374 | 148,282 | 215 | |
| trg: nonsense-en/en | 680,221 | 102,900 | 33 | | |
| Obfuscated Pre-training | 100k | src: nonsense-de/de | 1,289,374 | 241,617 | 270 |
| trg: nonsense-en/en | 680,221 | 163,105 | 50 | | |
| 500k | src: nonsense-de/de | 1,289,374 | 729,937 | 651 | |
| trg: nonsense-en/en | 680,221 | 466,678 | 164 | | |
| 1m | src: nonsense-de/de | 1,289,374 | 1,170,435 | 950 | |
| trg: nonsense-en/en | 680,221 | 730,119 | 271 | | |
| 20k | src: de/de | 1,861,801 | 77,284 | 65,006 | |
| trg: en/en | 1137,015 | 56,339 | 49,295 | | |
| 50k | src: de/de | 1,861,801 | 148,282 | 117,827 | |
| trg: en/en | 1,137,015 | 102,900 | 85,111 | | |
| Regular | | | | | |
| Pre-training | 100k | src: de/de | 1,861,801 | 241,617 | 180,708 |
| trg: en/en | 1,137,015 | 163,105 | 126,278 | | |
| 500k | src: de/de | 1,861,801 | 729,937 | 435,333 | |
| trg: en/en | 1,137,015 | 466,678 | 291,138 | | |
| 1m | src: de/de | 1,861,801 | 1,170 | 600,922 | |
| trg: en/en | 1,137,015 | 730,119 | 394,598 | | |
Table 6: Tokenized pre-training (PT) and fine-tuning (FT) word piece counts and overlap statistics comparing obfuscated pre-training (upper part) vs. regular pre-training (lower-part) for German-to-English parallel data with various fine-tuning data set sizes.
Model Language Pair PT/FT Language |VP T | |VF T | **Overlap**
id-en src: nonsense-de/id 1,289,374 18,095 112
trg: nonsense-en/en 680,221 18,167 0
| Obfuscated Pre-training Regular Pre-training |
|------------------------------------------------|
my-ensrc: nonsense-de/my 1,289,374 1,598 1
trg: nonsense-en/en 680,221 18,514 0
tr-ensrc: nonsense-de/tr 1,289,374 24,616 270
trg: nonsense-en/en 680,221 26,236 0
id-en src: de/id 1,861,801 18,095 3,722
trg: en/en 1,137,015 26,236 6,483
my-ensrc: de/my 1,861,801 1,598 97
trg: en/en 1,137,015 18,514 4,407
tr-ensrc: de/tr 1,861,801 24,616 5,569
trg: en/en 1,137,015 26,236 6,483
Table 7: Tokenized pre-training (PT) and fine-tuning (FT) word piece counts and overlap statistics comparing obfuscated pre-training (upper part) vs. regular pre-training (lower-part) for additional language directions.
| OPUS-Test | FLORES-devtest | | | | | | | | |
|---------------|------------------|------|------|------|------|-----|------|------|------|
| Language Pair | Model | 10k | 25k | 50k | 100k | 10k | 25k | 50k | 100k |
| de-id | random-init | 5.6 | 6.6 | 10.1 | 16.0 | 1.8 | 4.2 | 7.1 | 12.5 |
| pb-trees | 6.4 | 11.7 | 16.0 | 19.8 | 4.1 | 8.7 | 12.4 | 16.3 | |
| de-my | random-init | 10.7 | 15.2 | 19.6 | 23.6 | 1.4 | 2.7 | 4.2 | 5.9 |
| pb-trees | 12.3 | 18.3 | 24.2 | 28.3 | 2.1 | 4.2 | 6.2 | 7.8 | |
| id-my | random-init | 11.8 | 16.3 | 18.9 | n/a | 1.5 | 2.5 | 3.4 | n/a |
| pb-trees | 11.8 | 17.0 | 20.2 | 1.6 | 3.4 | 5.0 | | | |
| id-tl | random-init | 15.2 | 17.6 | 20.9 | 23.5 | 0.2 | 0.3 | 0.4 | 0.6 |
| pb-trees | 16.7 | 18.5 | 21.8 | 24.8 | 0.5 | 0.9 | 1.5 | 2.9 | |
| id-tr | random-init | 4.1 | 6.2 | 8.0 | 11.5 | 0.9 | 1.7 | 3.0 | 5.7 |
| pb-trees | 4.5 | 8.1 | 12.3 | 16.3 | 1.1 | 3.5 | 6.8 | 10.5 | |
| my-tl | random-init | 11.9 | 16.4 | 21.6 | n/a | 2.0 | 2.8 | 3.7 | n/a |
| pb-trees | 12.8 | 19.6 | 27.0 | 2.4 | 4.3 | 5.8 | | | |
| my-tr | random-init | 5.1 | 6.5 | 8.0 | 7.7 | 0.2 | 0.4 | 0.3 | 0.3 |
| pb-trees | 5.7 | 8.1 | 11.4 | 14.7 | 0.2 | 0.5 | 1.2 | 1.8 | |
| tl-tr | random-init | 2.2 | 3.1 | 3.8 | 5.0 | 0.3 | 0.7 | 1.1 | 1.8 |
| pb-trees | 2.0 | 3.5 | 4.9 | 4.9 | 0.4 | 1.0 | 2.1 | 2.1 | |
| Training Settings | |
|-------------------------------|------------------------------|
| Optimizer | Adam |
| Learning Rate | 5e-4 |
| Weight Decay | 1e-4 |
| Criterion | label_smoothed_cross_entropy |
| Label Smoothing | 0.1 |
| Learning Rate Scheduler | Inverse sqrt |
| Warmup Updates (Pre-Training) | 4000 |
| Warmup-Updates (Fine-Tuning) | 100 |
| Max Token Number | 2048 |
| Decoding Strategy | Beam Search |
| Beam size | 5 |
| Max Length a | 1.2 |
| Max Length b | 10 |
Table 9: Summary of pre-training and fine-tuning parameters for our experiments.
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
✓ A1. Did you describe the limitations of your work?
Section 7
✓ A2. Did you discuss any potential risks of your work?
Section 8
✓ A3. Do the abstract and introduction summarize the paper's main claims?
Abstract and Section 1
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✗ **Did You Use Or Create Scientific Artifacts?**
Left blank.
B1. Did you cite the creators of artifacts you used?
No response.
B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
No response.
B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
No response.
B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
No response.
B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
No response.
B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
No response.
## C ✓ **Did You Run Computational Experiments?** Section 4
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
Appendix C1 The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance.
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Appendix C2 C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
No response.
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
Section 4 and Appendix C
D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
No response.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? No response.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
No response.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
No response.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
No response. |
xu-etal-2023-idol | {IDOL}: Indicator-oriented Logic Pre-training for Logical Reasoning | https://aclanthology.org/2023.findings-acl.513 | In the field of machine reading comprehension (MRC), existing systems have surpassed the average performance of human beings in many tasks like SQuAD. However, there is still a long way to go when it comes to logical reasoning. Although some methods for it have been put forward, they either are designed in a quite complicated way or rely too much on external structures. In this paper, we proposed IDOL (InDicator-Oriented Logic Pre-training), an easy-to-understand but highly effective further pre-training task which logically strengthens the pre-trained models with the help of 6 types of logical indicators and a logically rich dataset LoGic Pre-training (LGP). IDOL achieves state-of-the-art performance on ReClor and LogiQA, the two most representative benchmarks in logical reasoning MRC, and is proven to be capable of generalizing to different pre-trained models and other types of MRC benchmarks like RACE and SQuAD 2.0 while keeping competitive general language understanding ability through testing on tasks in GLUE. Besides, at the beginning of the era of large language models, we take several of them like ChatGPT into comparison and find that IDOL still shows its advantage. | # Idol: Indicator-Oriented Logic Pre-Training For Logical Reasoning
Zihang Xu†, Ziqing Yang†, Yiming Cui‡†**, Shijin Wang**†§
†State Key Laboratory of Cognitive Intelligence, iFLYTEK Research, China
‡Research Center for SCIR, Harbin Institute of Technology, Harbin, China
§iFLYTEK AI Research (Central China), Wuhan, China
†{zhxu13,zqyang5,ymcui,sjwang3}@iflytek.com
‡[email protected]
## Abstract
In the field of machine reading comprehension
(MRC), existing systems have surpassed the average performance of human beings in many tasks like SQuAD. However, there is still a long way to go when it comes to logical reasoning. Although some methods for it have been put forward, they either are designed in a quite complicated way or rely too much on external structures. In this paper, we proposed IDOL (InDicator-Oriented Logic Pre-training),
an easy-to-understand but highly effective further pre-training task which logically strengthens the pre-trained models with the help of 6 types of logical indicators and a logically rich dataset LoGic Pre-training (LGP). IDOL
achieves state-of-the-art performance on ReClor and LogiQA, the two most representative benchmarks in logical reasoning MRC, and is proven to be capable of generalizing to different pre-trained models and other types of MRC benchmarks like RACE and SQuAD 2.0 while keeping competitive general language understanding ability through testing on tasks in GLUE. Besides, at the beginning of the era of large language models, we take several of them like ChatGPT into comparison and find that IDOL still shows its advantage.1
## 1 Introduction
With the development of pre-trained language models, a large number of tasks in the field of natural language understanding have been dealt with quite well. However, those tasks emphasize more on assessing basic abilities like word-pattern recognition of the models while caring less about advanced abilities like reasoning over texts (Helwe et al., 2021).
In recent years, an increasing number of challenging tasks have been brought forward gradually.
At sentence-level reasoning, there is a great variety of benchmarks for natural language inference like 1Please refer to https://github.com/GeekDream-x/
IDOL for relevant resources including datasets, models, and codes.
QNLI (Demszky et al., 2018) and MNLI (Williams et al., 2018). Although the construction processes are different, nearly all these datasets evaluate models with binary or three-way classification tasks which need reasoning based on two sentences. At passage-level reasoning, the most difficult benchmarks are generally recognized as the ones related to logical reasoning MRC which requires questionanswering systems to fully understand the whole passage, extract information related to the question and reason among different text spans to generate new conclusions in the logical aspect. In this area, the most representative benchmarks are some machine reading comprehension datasets like ReClor
(Yu et al., 2020) and LogiQA (Liu et al., 2020).
Considering that there are quite few optimization strategies for the pre-training stage and that it is difficult for other researchers to follow and extend the existing methods which are designed in rather complex ways, we propose an easy-to-understand but highly effective pre-training task named IDOL
which helps to strengthen the pre-trained models in terms of logical reasoning. We apply it with our customized dataset LGP which is full of logical information. Moreover, we experimented with various pre-trained models and plenty of different downstream tasks and proved that IDOL is competitive while keeping models and tasks agnostic.
Recently, ChatGPT attracts a lot of attention all over the world due to its amazing performance in question answering. Thus, we also arranged an experiment to let IDOL compete with a series of LLMs (large language models) including it.
The contributions of this paper are summarized as follows:
- Put forward the definitions of 5 different types of logical indicators. Based on these we construct the dataset LGP for logical pre-training and we probe the impact of different types of logical indicators through a series of ablation experiments.
![1_image_0.png](1_image_0.png)
- Design an indicator-oriented further pretraining method named IDOL, which aims to enhance the logical reasoning ability of pretrained models. It achieves state-of-the-art performance in logical reasoning MRC and shows progress in general MRC and general understanding ability evaluation.
- The first to provide a pilot test about the comparison between fine-tuning traditional pretrained models and prompting LLMs in the field of logical reasoning MRC.
## 2 Related Work 2.1 Logical Reasoning
In order to help reasoning systems perform better on reading comprehension tasks focusing on logical reasoning, there have been a great many methods put forward by research institutions from all over the world. Unsurprisingly, the majority of the optimization approaches put forward revolve around the fine-tuning phase while there are far fewer methods designed for further pre-training.
In the aspect of pre-training, to the best of our knowledge, there are only two approaches presented in published papers called MERIt and LogiGAN. MERIt team generated a dataset from the one provided by Qin et al. (2021) which contains passages from Wikipedia with annotations about entities and relations. And then optimize the model on that with the help of contrastive learning (Jiao et al., 2022). The researchers behind LogiGAN
use a task about statement recovery to enhance the logic understanding ability of generative pretrained language models like T5 (Pi et al., 2022).
For optimizing models at the fine-tuning phase, there are dozens of methods proposed as far as we know. For example, LReasoner put forward a context extension framework with the help of logical equivalence laws including contraposition and transitive laws (Wang et al., 2022a). Another example is Logiformer which introduced a twostream architecture containing a syntax branch and a logical branch to better model the relationships among distant logical units (Xu et al., 2022).
| Type | Library | Example |
|--------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-----------|
| PMI | given that, seeing that, for the reason that, owing to, as indicated by, on the grounds that, on account of, considering, because of, due to, now that, may be inferred from, by virtue of, in view of, for the sake of, thanks to, as long as, based on that, as a result of, considering that, inasmuch as, if and only if, according to, in that, only if, because, depend on, rely on | |
| CLI | conclude that, entail that, infer that, that is why, therefore, | |
CLI conclude that, entail that, infer that, that is why, therefore, thereby, wherefore, accordingly, hence, thus, consequently, whence, so that, it follows that, imply that, as a result, suggest that, prove that, as a conclusion, conclusively, for this reason, as a consequence, on that account, in conclusion, to that end, because of this, that being so, ergo, in this way, in this manner, by such means, as it turns out, result in, in order that, show that, eventually NTI not, neither, none of, unable, few, little, hardly, merely, seldom, without, never, nobody, nothing, nowhere, rarely, scarcely, barely, no longer, isn't, aren't, wasn't, weren't, can't, cannot, couldn't, won't, wouldn't, don't, doesn't, didn't, haven't, hasn't In the United States, each bushel of corn produced might result in the loss of as much as two bushels of topsoil. Moreover, in the last 100 years, the topsoil in many states, which once was about fourteen inches thick, has been eroded to only six or eight inches.
This advantage accruing to the sentinel does not
mean that its watchful behavior is entirely selfinterested. On the contrary , the sentinel's behavior is an example of animal behavior motivated at least in part by altruism.
CNI and, or, nor, also, moreover, in addition, on the other hand,
meanwhile, further, afterward, next, besides, additionally, meantime, furthermore, as well, simultaneously, either, both, similarly,
likewise
| A high degree of creativity and a high level of artistic skill are seldom combined in the creation of a work of art. This advantage accruing to the sentinel does not mean that its watchful behavior is entirely selfinterested. On the contrary , the sentinel's behavior is an example of animal behavior motivated at least in part by altruism. A graduate degree in policymaking is necessary to serve in the presidential cabinet. In addition , everyone in the cabinet must pass a security clearance. |
|---|
| The real world contains no political entity exercising literally total control over even one such aspect. This is because any system of control is inefficient, and, therefore, its degree of control is partial. |
|---|
ATI although, though, but, nevertheless, however, instead of, nonetheless, yet, rather, whereas, otherwise, conversely, on the contrary, even, nevertheless, despite, in spite of, in contrast, even if, even though, unless, regardless of, reckless of
Table 1: Libraries and examples of all types of logical indicators.
## 2.2 Pre-Training Tasks
As NLP enters the era of pre-training, more and more researchers are diving into the design of pre-training tasks, especially about different masking strategies. For instance, in Cui et al. (2020),
the authors apply Whole Word Masking (WWM)
on Chinese BERT and achieved great progress.
WWM changes the masking strategy in the original masked language modeling (MLM) into masking all the tokens which constitute a word with complete meaning instead of just one single token. In addition, Lample and Conneau (2019) extends MLM to parallel data as Translation Language Modeling (TLM) which randomly masks tokens in both source and target sentences in different languages simultaneously. The results show that TLM is beneficial to improve the alignment among different languages.
## 3 Preliminary 3.1 Text Logical Unit
It is admitted that a single word is the most basic unit of a piece of text but its meaning varies with different contexts. In Xu et al. (2022), the authors refer logical units to the split sentence spans that contain independent and complete semantics.
In this paper, since much more abundant logical indicators with different types that link not only clauses but also more fine-grained text spans are introduced, we extend this definition to those shorter text pieces like entities.
## 3.2 Logical Indicators
By analyzing the passages in logical reasoning MRC and reasoning-related materials like debate scripts, we found that the relations between logic units (like entities or events) can be summarized into 5 main categories as follows and all these relations are usually expressed via a series of logical indicators. After consulting some previous work like Pi et al. (2022) and Penn Discourse TreeBank 2.0 (PDTB 2.0) (Prasad et al., 2008), we managed to construct an indicator library for each category.
As for the examples of indicators we used in detail, please refer to Table 1.
- **Premise/Conclusion Indicator (PMI/CLI)**
The first two types of logical indicators pertain to premises and conclusions. These indicators
signal the logical relationship between statements. For instance, premise expressions such
![3_image_0.png](3_image_0.png)
as "due to" indicate that the logic unit following the keyword serves as the reason or explanation for the unit preceding it. Conversely, conclusion phrases like "result in" suggest an inverse relationship, implying that the logic unit after the keyword is a consequence or outcome of the preceding unit.
- **Negative Indicator (NTI)** Negative indicators, such as "no longer", play a crucial role in text logic by negating affirmative logic units.
They have the power to significantly alter the meaning of a statement. For example, consider the sentences "Tom likes hamburgers."
and "Tom no longer likes hamburgers." These two sentences have nearly opposite meanings, solely due to the presence of the indicator "no longer".
- **Adversative Indicator (ATI)** Certain expressions, such as "however", are commonly employed between sentences to signify a shift or change in the narrative. They serve as valuable tools for indicating the alteration or consequence of a preceding event, which helps to cover this frequent kind of relation among logic units.
- **Coordinating Indicator (CNI)** The coordinating relation is undoubtedly the most prevalent type of relationship between any two logic units. Coordinating indicators are used to convey that the units surrounding them possess the same logical status or hold equal importance. These indicators effectively demonstrate the coordination or parallelism between the connected logic units.
## 4 Methodology 4.1 Lgp Dataset Construction
For the sake of further pre-training models with IDOL, we constructed the dataset LGP (LoGic Pretraining) based on the most popular unannotated corpus English Wikipedia.2 We first split the articles into paragraphs and abandoned those whose lengths (after tokenization) were no longer than 5.
In order to provide as much logical information as possible, we used the logical indicators listed in Table 1 to filter the Wiki paragraphs. During 2https://dumps.wikimedia.org/
this procedure, we temporarily removed those indicators with extremely high frequency like "and",
otherwise, there would be too many paragraphs whose logical density was unacceptably low. Then, we iterated every logical keyword and replaced it with our customized special token [LGMASK] under the probability of 70%.
For the purpose of modeling the ability to distinguish whether a certain masked place is logicrelated or not, we introduced the sixth logical indicator type - Logic Unrelated Indicator (LUI). Based on this, we then randomly replaced 0.6% tokens other than logical indicators with [LGMASK]. Afterward, the labels for the logical category prediction
(LCP) task were generated based on the corresponding logic types of all the [LGMASK]s. In the end, take RoBERTa (Liu et al., 2019) for example, our logic dataset LGP contains over 6.1 million samples and as for the quantities of logical indicators in each type please refer to Figure 2.
## 4.2 Idol Pre-Training 4.2.1 Logical Category Prediction
As introduced in section 3.2 and section 4.1, we defined a logic-related special mask token [LGMASK]
and it will take the place of 6 types of logical indicators - PMI, CLI, NTI, ATI, CNI, and LUI. During the forward process of fine-tuning the pre-trained models, the corresponding logical categories need to be predicted by them like what will be done in the token classification task of the standard Masked Language Modeling (MLM) (Devlin et al., 2019).
When the models are trying to predict the correct logical type of a certain [LGMASK], they will learn to analyze the relationship among the logical units around the current special token and whether there is some kind of logical relations with the help of the whole context. Therefore, the pre-trained models will be equipped with a stronger ability of reasoning over texts gradually.
Moreover, we use Cross-Entropy Loss (CELoss)
to evaluate the performance of predicting the logical categories. The loss function for LCP is as described in Equation (1) where n is the number of samples, m is the number of [LGMASK] in the ith sample, yi,j indicates the model prediction result for the jth [LGMASK] in the ith sample and yˆi,j denote the corresponding ground truth value.
$${\mathcal{L}}_{\mathrm{LCP}}=\sum_{i=1}^{n}{\frac{1}{m}}\sum_{j=1}^{m}\mathrm{CELoss}(y_{i,j},{\hat{y}}_{i,j})\quad\quad(1)$$
## 4.2.2 Idol
To avoid catastrophic forgetting, we combine the classic MLM task with the LCP introduced above to become IDOL, a multi-task learning pre-training method for enhancing the logical reasoning ability of pre-trained models. For the purpose of balancing the effects of the two pre-training tasks, we introduced a hyper-parameter λ as the weight of the loss of LCP (the proper λ depends on the pre-trained language model used and the empirical range is between 0.7 and 0.9). Thus, for the IDOL pre-training loss function, please refer to Equation (2). Figure 3 presented an example of IDOL pre-training where predicting tokens and the classes of logical indicators simultaneously.
$${\mathcal{L}}_{\mathrm{{IDOL}}}=\lambda\cdot{\mathcal{L}}_{\mathrm{{LCP}}}+(1-\lambda)\cdot{\mathcal{L}}_{\mathrm{{MLM}}}$$
## 5 Experiments 5.1 Baselines
With the rapid development of pre-training technology these years, we have various choices for backbone models. In this paper, we decide to apply IDOL on BERT-large (Devlin et al.,
2019), RoBERTa-large (Liu et al., 2019), ALBERTxxlarge (Lan et al., 2020) and DeBERTa-v2-xxlarge
(He et al., 2021) and will evaluate the models in Figure 3: An example of pre-training with IDOL. The
![4_image_0.png](4_image_0.png)
![4_image_1.png](4_image_1.png)
model needs to recover the tokens replaced by [MASK]
(MLM) and predict the category of each logical indicator masked by [LGMASK] (LCP) in the meantime.
the following three different aspects in section 5.4 to better verify the performance of IDOL.3 In terms of logical reasoning MRC, we will compare IDOL with several previous but still competitive methods for logical reasoning MRC including DAGN (Huang et al., 2021), AdaLoGN (Li et al.,
2022), LReasoner (Wang et al., 2022b), Logiformer
(Xu et al., 2022) and MERIt (Jiao et al., 2022).
Much more interesting, we let IDOL compete with ChatGPT in a small setting.
## 5.2 Datasets
$$(2)$$
First and foremost, the aim of IDOL is to improve the logical reasoning ability of pre-trained models, thus, the two most representative benchmarks -
ReClor and LogiQA will act as the primary examiners.
Following this, RACE (Lai et al., 2017) and SQuAD 2.0 (Rajpurkar et al., 2018), two classic machine reading comprehension datasets that are not targeted at assessing reasoning ability, will come on stage, which will be beneficial to conclude whether IDOL helps with other types of reading comprehension abilities.
Last but not least, we also tested the models pre-trained with IDOL on MNLI (Williams et al.,
3In the following sections, we refer these baseline models to BERT, RoBERTa, ALBERT and DeBERTa respectively for simplicity.
2018) and STS-B (Cer et al., 2017), two tasks of GLUE (Wang et al., 2018), to make sure that the general language understanding abilities are retained to a great extent during the process of logical enhancement. The evaluation metrics on STS-B
are the Pearson correlation coefficient (Pear.) and Spearman's rank correlation coefficient (Spear.) on the development set. And we use the accuracy of MNLI-m and MNLI-mm development sets for evaluation on MNLI.
ReClor The problems in this dataset are collected from two American standardized tests -
LSAT and GMAT, which guarantee the difficulty of answering the questions. Moreover, ReClor covers 17 classes of logical reasoning including main idea inference, reasoning flaws detection, sufficient but unnecessary conditions, and so forth. Each problem consists of a passage, a question, and four answer candidates, like the one shown in the green section of Figure 1. There are 4638, 500, and 1000 data points in the training set, development set, and test set respectively. The accuracy is used to evaluate the system's performance.
LogiQA The main difference compared with ReClor is that the problems in LogiQA are generated based on the National Civil Servants Examination of China. Besides, it incorporates 5 main reasoning types such as categorical reasoning and disjunctive reasoning. And 7376, 651, and 651 samples are gathered for the training set, development set, and test set individually.
## 5.3 Implementation Detail 5.3.1 Idol
During the process of pre-training with IDOL, we implemented the experiments on 8 Nvidia A100 GPUs. Since IDOL was applied on multiple different pre-trained models, we provide a range for some main hyperparameters. The whole training process consists of 10k~20k steps while the warmup rate keeps 0.1. The learning rate is warmed up to a peak value between 5e-6~3e-5 for different models, and then linearly decayed. As for batch size, we found that 1024 or 2048 is more appropriate for most models. Additionally, we use AdamW
(Loshchilov and Hutter, 2017) as our optimizer with a weight decay of around 1e-3. For the software packages we used in detail, please see Appendix.
With respect to the hyperparameters for finetuning models on downstream tasks, we follow the
Models **ReClor LogiQA**
Dev Test Dev Test
BERT 53.8 49.8 35.3♠ 33.0♠
IDOL **56.8 53.3 36.9 34.3** RoBERTa 62.6 55.6 37.0♠ 36.6♠
DAGN 65.2 58.2 35.5 38.7
AdaLoGN 65.2 60.2 39.9 40.7 LReasoner 66.2 62.4 38.1 40.6 MERIt 67.8 60.7 42.4 41.5
Logiformer 68.4 63.5 42.2 **42.6**
IDOL **70.2 63.9 42.5** 41.8
ALBERT 70.4 67.3 41.2♠ 41.3♠
LReasoner 73.2 70.7 41.6 41.2
MERIT 73.2 **71.1** 43.9 **45.3**
IDOL **74.6** 70.9 **44.7** 43.8
configurations provided in the original paper of either the corresponding model or the dataset. 5.3.2 LLM
For the purpose of comparing IDOL with LLMs, we randomly sampled 30 pieces of data in the development sets of ReClor and LogiQA separately
(named Dev-30). As for models, we choose GPT3.54, ChatGPT5and GLM-130B (Zeng et al., 2022)
for this pilot test.
To better evaluate the performance of LLMs, we tested them in the following three settings: zeroshot prompting, few-shot prompting, and chain-ofthought prompting. For zero-shot prompting, we designed the following template to wrap up the MRC problem.
The passage is [PASSAGE]*. The question is*
[QUESTION]. Here are 4 choices for it and they are [CHOICES]*. Which one should I choose?*
Thanks.
As for few-shot prompting, we insert 3 examples in the same template but with correct answers ahead of the target question. When testing with chain-of-thought prompting, the template is similar to the one presented above. But there is only one example ahead and sentences describing the 4The exact version is text-davinci-003.
5Tested on February 13th, 2023.
| Models | ReClor | | |
|------------------|----------|--------|------|
| Test | Test-E | Test-H | |
| DeBERTa♡ | 75.3 | 84.0 | 68.4 |
| LReasoner♣ | 76.1 | 87.1 | 67.5 |
| Knowledge Model♣ | 79.2 | 91.8 | 69.3 |
| MERIt♣ | 79.3 | 85.2 | 74.6 |
| AMR-LE♣ | 80.0 | 87.7 | 73.9 |
| IDOL | 80.6 | 87.7 | 75.0 |
![6_image_0.png](6_image_0.png)
process of the way how humans reason to solve the problem are provided before giving the right answer to the example. For more details about the templates and the test example, please refer to Table 6 and Figure 4.
## 5.4 Main Results 5.4.1 Logical Reasoning Mrc
Fine-tuning To evaluate the model performance on logical reasoning MRC, we experimented with the baseline models mentioned above on ReClor and LogiQA, the two most representative benchmarks in this field. The majority of previous researchers focus on applying their method to RoBERTa, IDOL meets the most competitors in this setting as shown in Table 2. In spite of this, IDOL surpassed all the existing strong systems by an obvious margin in nearly every evaluation metric except the accuracy on the LogiQA test set. Apparently, from the results on BERT and ALBERT in Table 2 and results on DeBERTa in Table 3, we can see that IDOL has significant advantages over other opponents as well.
In summary, IDOL is highly effective in logical reasoning MRC with state-of-the-art performance and this benefit can be generalized to different pretrained models even to the recent large-scale and strong ones.
Prompting Although the scale of Dev-30 for the pilot test on LLM is small, the results displayed in Table 5 inspired us to some extent. Generally, IDOL is still competitive in the era of LLM. On ReClor, it achieved an accuracy of 80% while the best result from LLMs is 70% (ChatGPT with Chain-ofThought prompting). Even though GLM-130B realizes an accuracy of 50% on LogiQA in Zero-Shot setting surprisingly (slightly higher than 43.3% by IDOL), IDOL has an obvious advantage compared with other settings and other LLMs. Additionally, there is an interesting phenomenon that chain-ofthought prompting brings negative effects on LLMs except for ChatGPT on ReClor, which is not consistent with the findings in Wei et al. (2022).
## 5.4.2 Other Mrc Datasets
For testing whether IDOL could also benefit on types of MRC tasks or maintain the original abilities, we conducted a series of experiments based on RoBERTa as the backbone model. The results are displayed in the middle part of Table 4 where we compare the original model, the model further pre-trained with only MLM on LGP and the model further pre-trained with IDOL. We evaluate the models in each setting with 4 different seeds and report the average value. It is apparent that IDOL performs better on both RACE and SQuAD 2.0 in each evaluation metric (although the effects are not as big as those on ReClor or LogiQA), which implies that IDOL indeed helps on general MRC
tasks while achieving significant improvement in logical reasoning ability.
## 5.4.3 General Understanding Ability
Following the experiment configuration in section 5.4.2, we planned to find out what kind of effect would IDOL have on other types of natural lan-
| Models | ReClor | LogiQA | RACE | SQuAD 2.0 | STS-B | MNLI | | | | | | |
|-------------|----------|----------|--------|-------------|---------|--------|------|-------|--------|------|------|------|
| Dev | Test | Dev | Test | Dev | Test | F1 | EM | Pear. | Spear. | m | mm | |
| RoBERTa | 62.7 | 55.2 | 36.2 | 37.1 | 85.2 | 84.4 | 89.0 | 86.1 | 92.6 | 92.5 | 89.5 | 89.3 |
| +MLM | 65.0 | 58.4 | 37.9 | 36.6 | 85.4 | 84.5 | 89.0 | 86.1 | 92.2 | 92.1 | 89.5 | 89.5 |
| +LCP (IDOL) | 66.8 | 60.6 | 39.4 | 38.8 | 85.6 | 84.8 | 89.2 | 86.2 | 92.3 | 92.2 | 89.7 | 89.5 |
Table 4: Results of RoBERTa with different pre-training tasks on logical reasoning MRC, other types of MRC and other types of NLU tasks.
| Setting | | | | |
|----------------|------|------|------|------|
| Models | ZS | FS | CoT | FT |
| ReClor GPT-3.5 | 56.7 | 50.0 | 46.7 | - |
| ChatGPT | 63.3 | 63.3 | 70.0 | - |
| GLM-130B | 46.7 | 40.0 | 23.3 | - |
| IDOL | - | - | - | 80.0 |
| LogiQA GPT-3.5 | 30.0 | 10.0 | 13.3 | - |
| ChatGPT | 33.3 | 36.7 | 23.3 | - |
| GLM-130B | 50.0 | 36.7 | 26.6 | - |
| IDOL | - | - | - | 43.3 |
guage understanding tasks which help to reflect the general understanding ability of pre-trained language models. We evaluate the models in each setting with 4 different seeds and report the average value. From the results presented in the right part of Table 4, we can easily find that although IDOL falls behind on MNLI and exceeds the other two competitors on STS-B, the differences in all the evaluation metrics are quite small. Therefore, we could conclude that IDOL retains the general language understanding ability from the original pre-trained model successfully during the process of becoming stronger in logical reasoning.
## 6 Ablation Study
In this section, we conducted a series of ablation experiments about the multiple logical indicators we used in both fine-tuning and pre-training phases.
We evaluate the models based on RoBERTa with 4 different seeds and report the average value.
## 6.1 Indicators In Fine-Tuning
As introduced in section 3.2, we defined 5 classes of logical indicators that reflect various logical relations among text logical units and we make use of all of them in IDOL. To figure out whether the 5 types are of equal importance in logical reasoning MRC, we conducted a set of controlled experiments where certain types of indicators are removed from the ReClor train set as the fine-tuning train dataset in each setting.
From the results displayed in Table 7, it is obvious from the last column that logical indicators indeed play an important role in logical reasoningrelated text understanding since the loss of all indicators decreases accuracy by 4 to 7 points. In detail, we can conclude that the negative and adversative indicators influence the most by comparing the gaps between pre-training on the original LGP
and the dataset without individual types of indicators.
## 6.2 Indicators In Pre-Training
Now that logical indicators have been proven to be effective in fine-tuning stage, we believe they also help with the pre-training stage. Therefore, we arranged a series of experiments on gradually incorporating more logical indicators from not leveraging any indicators (MLM), only making use of PMI and CLI (LCP-2), adding LUI to LCP-2 (LCP3), to taking advantage of all 6 types of logical indicators (LCP).
From the lines displayed in Figure 5, it is clear that models perform better while leveraging a greater variety of logical indicators since the red line (IDOL) is positioned significantly higher than green and yellow lines representing pre-training tasks that utilize fewer types of logical indicators.
According to the results in Table 7, PMI and CLI
brought the least difference in the model performance on ReClor. The LCP-2 and LCP-3 mainly rely on the two types, and introducing a new special
| Setting | Template |
|-----------|--------------------------------------------------------------------------------------------------------------------------------------------|
| Zero-Shot | The passage is [PASSAGE]. The question is [QUESTION]. Here are 4 choices for it and they are [CHOICES]. Which one should I choose? Thanks. |
| Few-Shot | [Example A] [Example B] [Example C] The passage is [PASSAGE]. The question is [QUESTION]. Here are 4 choices for it and they are [CHOICES]. Which one should I choose? Thanks. The passage is [PASSAGE]. The question is [QUESTION]. Here are 4 choices for it and they are [CHOICES]. You can analyze like this, [Thought Process]. So the answer is [Answer]. The passage is [PASSAGE]. The question is [QUESTION]. Here are 4 choices for it and they are [CHOICES]. Which one should I choose? Thanks. |
| Models | ReClor Train Set | | | | | |
|----------|--------------------|------|------|------|------|------|
| - | PMI&CLI | NTI | ATI | CNI | ALL | |
| RoBERTa | 62.7 | 64.0 | 59.7 | 61.7 | 63.7 | 59.1 |
| + MLM | 65.0 | 64.9 | 61.8 | 61.5 | 64.5 | 59.9 |
| + LCP | 66.8 | 63.8 | 62.7 | 63.4 | 64.2 | 60.5 |
token [LGMASK] inevitably brings noise during model training and further widens the gap between pre-training and down-stream tasks, so that they perform even not better than the original MLM.
Additionally, in the aspect of overall trends, the model pre-trained with IDOL is becoming stronger gradually during the process of pre-training, which certifies the effectiveness of our designed task targeted at logical indicators.
## 7 Conclusion And Future Work
In this paper, we proposed an easy-to-understand further pre-training method IDOL which fully exploits the logical information provided by 6 types of logical indicators and is proven effective on different pre-trained language models while keeping them competitive on many other kinds of downstream tasks. Particularly, IDOL achieves state-ofthe-art performance on logical reasoning machine reading comprehension tasks.
With respect to future work, we plan to leverage the sentence-level or passage-level logical features in the meantime and integrate it with IDOL to generate a stronger multi-task further pre-training method for improving the logical reasoning ability of pre-trained language models. Moreover, we de-
Table 6: Templates and examples for LLM prompting in different settings.
![8_image_0.png](8_image_0.png)
cide to redesign the IDOL task and find out whether logical indicators also play an important role in those generative pre-trained models as well. Furthermore, we will explore the way of combining IDOL with prompting to find a better method to elicit the reasoning abilities of LLMs.
## 8 Limitations
First of all, IDOL relies on a customized dataset that is filtered out from Wikipedia pages with the help of many pre-defined logical indicators. Inevitably, this will introduce a certain amount of artificial bias. If an automatic method for logical indicator extraction based on something like hidden representations from neural network models is put forward, it would be beneficial to narrow the gap between the dataset preparation and logical pre-training.
In addition, in the field of pre-training task design, there have been a lot of different but effective approaches proposed. For example, in Cui et al.
(2022), the authors presented a pre-training task named PERT which requires the models to recover the original token sequences under the background of that different token permutation within a certain range would not affect Chinese text understanding. This method only depends on the original texts, but IDOL introduces one more special token, which widens the gap between pre-training and fine-tuning to some extent.
## References
Daniel Cer, Mona Diab, Eneko Agirre, Iñigo LopezGazpio, and Lucia Specia. 2017. SemEval-2017 task 1: Semantic textual similarity multilingual and crosslingual focused evaluation. In *Proceedings* of the 11th International Workshop on Semantic Evaluation (SemEval-2017), pages 1–14, Vancouver, Canada. Association for Computational Linguistics.
Yiming Cui, Wanxiang Che, Ting Liu, Bing Qin, Shijin Wang, and Guoping Hu. 2020. Revisiting pre-trained models for Chinese natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: Findings, pages 657–668, Online. Association for Computational Linguistics.
Yiming Cui, Ziqing Yang, and Ting Liu. 2022. Pert:
Pre-training bert with permuted language model.
Dorottya Demszky, Kelvin Guu, and Percy Liang. 2018.
Transforming question answering datasets into natural language inference datasets.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In *Proceedings of the 2019 Conference of* the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics.
Pengcheng He, Xiaodong Liu, Jianfeng Gao, and Weizhu Chen. 2021. Deberta: Decoding-enhanced bert with disentangled attention. In International Conference on Learning Representations.
Chadi Helwe, Chloé Clavel, and Fabian M. Suchanek.
2021. Reasoning with transformer-based models:
Deep learning, but shallow reasoning. In *3rd Conference on Automated Knowledge Base Construction*.
Yinya Huang, Meng Fang, Yu Cao, Liwei Wang, and Xiaodan Liang. 2021. DAGN: Discourse-aware graph network for logical reasoning. In *Proceedings of* the 2021 Conference of the North American Chapter of the Association for Computational Linguistics:
Human Language Technologies, pages 5848–5855, Online. Association for Computational Linguistics.
Fangkai Jiao, Yangyang Guo, Xuemeng Song, and Liqiang Nie. 2022. MERIt: Meta-Path Guided Contrastive Learning for Logical Reasoning. In *Findings of the Association for Computational Linguistics: ACL 2022*, pages 3496–3509, Dublin, Ireland.
Association for Computational Linguistics.
Guokun Lai, Qizhe Xie, Hanxiao Liu, Yiming Yang, and Eduard Hovy. 2017. RACE: Large-scale ReAding comprehension dataset from examinations. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 785–
794, Copenhagen, Denmark. Association for Computational Linguistics.
Guillaume Lample and Alexis Conneau. 2019. Crosslingual language model pretraining. *Advances in* Neural Information Processing Systems (NeurIPS).
Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut.
2020. Albert: A lite bert for self-supervised learning of language representations. In *International Conference on Learning Representations*.
Xiao Li, Gong Cheng, Ziheng Chen, Yawei Sun, and Yuzhong Qu. 2022. AdaLoGN: Adaptive logic graph network for reasoning-based machine reading comprehension. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics
(Volume 1: Long Papers), pages 7147–7161, Dublin, Ireland. Association for Computational Linguistics.
Jian Liu, Leyang Cui, Hanmeng Liu, Dandan Huang, Yile Wang, and Yue Zhang. 2020. Logiqa: A challenge dataset for machine reading comprehension with logical reasoning. In Proceedings of the TwentyNinth International Joint Conference on Artificial Intelligence, IJCAI-20, pages 3622–3628. International Joint Conferences on Artificial Intelligence Organization. Main track.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized BERT pretraining approach. *CoRR*, abs/1907.11692.
Ilya Loshchilov and Frank Hutter. 2017. Decoupled weight decay regularization. In *International Conference on Learning Representations*.
Xinyu Pi, Wanjun Zhong, Yan Gao, Nan Duan, and Jian-Guang Lou. 2022. Logigan: Learning logical reasoning via adversarial pre-training. *ArXiv*,
abs/2205.08794.
Rashmi Prasad, Nikhil Dinesh, Alan Lee, Eleni Miltsakaki, Livio Robaldo, Aravind Joshi, and Bonnie Webber. 2008. The Penn Discourse TreeBank 2.0.
In *Proceedings of the Sixth International Conference* on Language Resources and Evaluation (LREC'08),
Marrakech, Morocco. European Language Resources Association (ELRA).
Yujia Qin, Yankai Lin, Ryuichi Takanobu, Zhiyuan Liu, Peng Li, Heng Ji, Minlie Huang, Maosong Sun, and Jie Zhou. 2021. ERICA: Improving entity and relation understanding for pre-trained language models via contrastive learning. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1:
Long Papers), pages 3350–3363, Online. Association for Computational Linguistics.
Pranav Rajpurkar, Robin Jia, and Percy Liang. 2018.
Know what you don't know: Unanswerable questions for SQuAD. In *Proceedings of the 56th Annual* Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 784–789, Melbourne, Australia. Association for Computational Linguistics.
Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel Bowman. 2018. GLUE:
A multi-task benchmark and analysis platform for natural language understanding. In *Proceedings of the* 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 353–355, Brussels, Belgium. Association for Computational Linguistics.
Siyuan Wang, Wanjun Zhong, Duyu Tang, Zhongyu Wei, Zhihao Fan, Daxin Jiang, Ming Zhou, and Nan Duan. 2022a. Logic-driven context extension and data augmentation for logical reasoning of text. In Findings of the Association for Computational Linguistics: ACL 2022, pages 1619–1629, Dublin, Ireland. Association for Computational Linguistics.
Siyuan Wang, Wanjun Zhong, Duyu Tang, Zhongyu Wei, Zhihao Fan, Daxin Jiang, Ming Zhou, and Nan Duan. 2022b. Logic-driven context extension and data augmentation for logical reasoning of text. In Findings of the Association for Computational Linguistics: ACL 2022, pages 1619–1629, Dublin, Ireland. Association for Computational Linguistics.
Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, brian ichter, Fei Xia, Ed H. Chi, Quoc V Le, and Denny Zhou. 2022. Chain of thought prompting elicits reasoning in large language models. In Advances in Neural Information Processing Systems.
Adina Williams, Nikita Nangia, and Samuel Bowman.
2018. A broad-coverage challenge corpus for sentence understanding through inference. In *Proceedings of the 2018 Conference of the North American* Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1112–1122, New Orleans, Louisiana. Association for Computational Linguistics.
Fangzhi Xu, Jun Liu, Qika Lin, Yudai Pan, and Lingling Zhang. 2022. Logiformer: A two-branch graph transformer network for interpretable logical reasoning. In *Proceedings of the 45th International ACM*
SIGIR Conference on Research and Development in
Information Retrieval, SIGIR '22, page 1055–1065, New York, NY, USA. Association for Computing Machinery.
Weihao Yu, Zihang Jiang, Yanfei Dong, and Jiashi Feng.
2020. Reclor: A reading comprehension dataset requiring logical reasoning. In *International Conference on Learning Representations (ICLR)*.
Aohan Zeng, Xiao Liu, Zhengxiao Du, Zihan Wang, Hanyu Lai, Ming Ding, Zhuoyi Yang, Yifan Xu, Wendi Zheng, Xiao Xia, et al. 2022. Glm-130b:
An open bilingual pre-trained model. *arXiv preprint* arXiv:2210.02414.
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
✓ A1. Did you describe the limitations of your work?
8
✓ A2. Did you discuss any potential risks of your work?
8
✓ A3. Do the abstract and introduction summarize the paper's main claims?
1
✗ A4. Have you used AI writing assistants when working on this paper?
We just use Grammarly to do spell checks.
## B ✓ **Did You Use Or Create Scientific Artifacts?** 4.1
B1. Did you cite the creators of artifacts you used?
Not applicable. Left blank.
✗ B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
We will put forward the terms for using the artifact we created when we publish it, only for research purposes.
✗ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
We will put forward the terms for using the artifact we created when we publish it, only for research purposes.
✗ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
As far as we know, something like names is safe in the Wikipedia corpus and there is nearly no offensive content in it, so we didn't plan to filter out those texts like names or offensive content.
✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
4.1
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
4.1 The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance.
## C ✓ **Did You Run Computational Experiments?** 5
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
5.3
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
5.3
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
5.4 and 6
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
5.3
## D ✗ **Did You Use Human Annotators (E.G., Crowdworkers) Or Research With Human Participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
No response.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? No response.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
No response.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
No response.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
No response. |
bhat-etal-2023-adversarial | Adversarial Training for Low-Resource Disfluency Correction | https://aclanthology.org/2023.findings-acl.514 | Disfluencies commonly occur in conversational speech. Speech with disfluencies can result in noisy Automatic Speech Recognition (ASR) transcripts, which affects downstream tasks like machine translation. In this paper, we propose an adversarially-trained sequence-tagging model for Disfluency Correction (DC) that utilizes a small amount of labeled real disfluent data in conjunction with a large amount of unlabeled data. We show the benefit of our proposed technique, which crucially depends on synthetically generated disfluent data, by evaluating it for DC in three Indian languages- Bengali, Hindi, and Marathi (all from the Indo-Aryan family). Our technique also performs well in removing stuttering disfluencies in ASR transcripts introduced by speech impairments. We achieve an average 6.15 points improvement in F1-score over competitive baselines across all three languages mentioned. To the best of our knowledge, we are the first to utilize adversarial training for DC and use it to correct stuttering disfluencies in English, establishing a new benchmark for this task. | # Adversarial Training For Low-Resource Disfluency Correction
## Vineet Bhat, Preethi Jyothi, Pushpak Bhattacharyya
Indian Institute of Technology Bombay, India [email protected], [email protected], [email protected]
## Abstract
Disfluencies commonly occur in conversational speech. Speech with disfluencies can result in noisy Automatic Speech Recognition
(ASR) transcripts, which affects downstream tasks like machine translation. In this paper, we propose an adversarially-trained sequencetagging model for Disfluency Correction (DC)
that utilizes a small amount of labeled real disfluent data in conjunction with a large amount of unlabeled data. We show the benefit of our proposed technique, which crucially depends on synthetically generated disfluent data, by evaluating it for DC in three Indian languages-*Bengali, Hindi*, and *Marathi* (all from the IndoAryan family). Our technique also performs well in removing *stuttering disfluencies* in ASR transcripts introduced by speech impairments. We achieve an average 6.15 points improvement in F1-score over competitive baselines across all three languages mentioned. To the best of our knowledge, we are the first to utilize adversarial training for DC and use it to correct stuttering disfluencies in English, establishing a new benchmark for this task.
## 1 Introduction
Disfluencies are words that are part of spoken utterances but do not add meaning to the sentence. Disfluency Correction (DC) is an essential preprocessing step to clean disfluent sentences before passing the text through downstream tasks like machine translation (Rao et al., 2007; Wang et al.,
2010). Disfluencies can be introduced in utterances due to two main reasons: the conversational nature of speech and/or speech impairments such as stuttering. In real-life conversations, humans frequently deviate from their speech plan, which can introduce disfluencies in a sentence (Dell et al., 1997). Stuttering speech consists of involuntary repetitions or prolongations of syllables which disturbs the fluency of speech.
Conversational disfluencies occur once every 17 words (Bortfeld et al., 2001) whereas a 2017 US
study1shows that roughly 1% of the population stutters and predominantly consists of children.
One out of every four children continues to suffer from this disorder lifelong. When such speech passes through an ASR system, readability of the generated transcript deteriorates due to the presence of disfluencies in speech (Jones et al., 2003).
Shriberg (1994) defines the surface structure of disfluent utterances as a combination of reparandum, interregnum and repair. The reparandum consists of the words incorrectly uttered by the speaker that needs correction or complete removal.
The interregnum acknowledges that the previous utterance may not be correct, while repair contains the words spoken to correct earlier errors.
| Type | Example |
|----------------|--------------------------------------|
| Conversational | Well, you know, this is a good plan. |
| Stuttering | Um it was quite fu funny |
Table 1: Examples and surface structure of disfluent utterances in conversational speech and stuttering. Red
- Reparandum, Blue - Interregnum, Orange - Repair Data in DC is limited because of the time and resources needed to annotate data for training
(**Appendix** A). Through this work2, we provide a method to create high-quality DC systems in low resource settings. Our main contributions are:
1. Improving the state-of-the-art in DC in Indian languages like Bengali, Hindi and Marathi by 9.19, 5.85 and 3.40 points in F1 scores, respectively, using a deep learning framework with adversarial training on real, synthetic and unlabeled data.
2. Creating an open-source stuttering English DC corpus comprising 250 parallel sentences 3. Demonstrating that our adversarial DC model can be used for textual stuttering correction 1https://www.nidcd.nih.gov/health/stuttering 2https://github.com/vineet2104/
AdversarialTrainingForDisfluencyCorrection
## 2 Related Work
Approaches in DC can be categorized into noisy channel-based, parsing-based, and sequence tagging-based approaches. Noisy channel-based approaches rely on the following principle: a disfluent sentence Y can be obtained from a fluent sentence X by adding some noise. These models try to predict the fluent sentence X given the disfluent sentence Y (Honal and Schultz, 2004; Jamshid Lou and Johnson, 2017; Johnson and Charniak, 2004). Parsing-based approaches jointly predict the syntactic structure of the disfluent sentence along with its disfluent elements
(Honnibal and Johnson, 2014; Jamshid Lou and Johnson, 2020; Rasooli and Tetreault, 2013; Wu et al., 2015; Yoshikawa et al., 2016). Sequence tagging-based approaches work on the following hypothesis: every word in a disfluent sentence can be marked as fluent/disfluent. These methods work best for shorter utterances and perform optimally for real-life conversational DC (Hough and Schlangen, 2015; Ostendorf and Hahn, 2013; Zayats et al., 2016). Moreover, sequence-tagging based methods require far less labeled data to perform well, compared to the other two methods.
Our approach to DC focuses on treating it as a sequence tagging problem rather than a machine translation task. The objective is to accurately classify each word as either disfluent or fluent, and create fluent sentences by retaining only the fluent words. The lack of labeled data for DC in low-resource languages has prompted the use of semi-supervised methods and self-supervised techniques (Wang et al., 2018; Wang et al., 2021). DC
has also been studied as a component in speech translation systems, and thus its effect has been analyzed in improving the accuracies of machine translation models (Rao et al., 2007; Wang et al.,
2010). Synthetic data generation for DC has also received attention recently. These methods infuse disfluent elements in fluent sentences to create parallel data for training (Passali et al., 2022; Saini et al., 2020). Our work is an extension of Kundu et al. (2022), which creates the first dataset for DC in Bengali, Hindi and Marathi. We use this dataset to train our adversarial model to improve over the state-of-the-art in these languages. To the best of our knowledge, we are the first to model DC to correct stuttering ASR transcripts.
## 3 Types Of Disfluencies
There are six broad types of disfluencies encountered in real life - Filled Pause, Interjection, Discourse Marker, Repetition or Correction, False Start and Edit. Although these are common in conversational speech, stuttering speech consists mainly of Filled Pauses and Repetitions. This section describes each type of disfluency and gives some examples in English.
1. **Filled Pauses** consist of utterances that have no semantic meaning.
Example - What about the uh event?
2. **Interjections** are similar to filled pauses, but their inclusion in sentences indicates affirmation or negation.
Example - Ugh, what a day it has been!
3. **Discourse Markers** help the speaker begin a conversation or keep turn while speaking.
These words do not add semantic meaning to the sentence.
Example - **Well**, we are going to the event.
4. **Repetition or Correction** covers the repetition of certain words in the sentence and correcting words that were incorrectly uttered.
Example - If I **can't** don't go to the event today, it is not going to look good.
5. **False Start** occurs when previous chain of thought is abandoned, and new idea is begun.
Example - **Mondays dont work for me**, how about Tuesday?
6. **Edit** refers to the set of words that are uttered to correct previous statements.
Example - We need **three tickets, I'm sorry**,
four tickets for the flight to California.
## 4 Architecture
The lack of labeled data for DC is a significant hurdle to developing state-of-the-art DC systems for low-resource languages. Passali et al. (2022),
Saini et al. (2020) and Kundu et al. (2022) introduced data augmentation by synthesizing disfluencies in fluent sentences to generate parallel data.
In this work, we propose a deep learning architecture that uses adversarial training to improve a BERT-based model's token classification accuracy of whether a token is disfluent or not. Our proposed architecture uses real, synthetic and unlabeled data to improve classification performance.
Our model, Seq-GAN-BERT, is inspired by Croce et al. (2020), who first used a similar model for sentence classification. It consists of three
![2_image_0.png](2_image_0.png)
modules: a BERT-based encoder (Devlin et al.,
2019), discriminator and generator. The encoder converts the input sequence X = (X1, X2*, ...X*n)
into encoded vector representations (Hreal). Simultaneously, the generator creates fake representations (Hfake) from Gaussian random noise (Z),
mimicking the real data that passes through the encoder. The discriminator aims to solve a twopronged objective: i) predicting every word in the sentence to be disfluent or fluent and ii) determining whether the input from the generator comes from real or fake data.
## 4.1 Adversarial Training
The discriminator loss comprises two loss terms.
The first loss is supervised by the token classification task, while the second loss is defined by the real/fake data identification task. Such adversarial training also allows the model to use unlabeled data during training. For unlabeled samples, only the real/fake data identification task is executed. The generator continuously improves during training and produces fake representations that resemble actual data. The competing tasks of the generator (to create better representations to fool the discriminator) and the discriminator (to perform token classification for labeled sentences and real/fake identification) compels the MuRIL
encoder to generate better representations of input sentences. The resulting high-quality representations allow the discriminator to identify disfluent words with a high accuracy.
## 5 Task 1: Few Shot Dc In Indian Languages
To test our proposed architecture, we train the model on the few-shot DC task for Indian languages. The current state-of-the-art performance in Bengali, Hindi and Marathi DC is obtained by training a large multilingual transformer model using synthetic data created by injecting disfluencies in fluent sentences using rules (Kundu et al., 2022).
We train our Seq-GAN-BERT model using the authors' multilingual real and synthetic data.
## 5.1 Dataset
Our dataset consists of parallel disfluent-fluent sentences in three Indian languages. We use 300, 150 and 250 real disfluent sentences in Bengali, Hindi and Marathi, respectively and generate 1000 synthetic disfluent sentences in Bengali and 500 synthetic disfluent sentences each in Hindi and Marathi each by infusing disfluent elements in fluent transcriptions using a rule-based approach
(Kundu et al., 2022). The synthetic data was created such that the percentage of disfluent words across 3 languages remains constant.
## 5.2 Text Processing And Training Details
Text pre-processing is performed by removing punctuations, lower-casing and creating wordlevel tokens for parallel sentences. The SeqGAN-BERT model uses a combination of labeled and unlabeled data comprising real and synthetically generated disfluent sentences in different languages. We try different combinations of monolingual and multilingual data. Our experiments show that the best model for Bengali uses real and synthetic Bengali sentences as labeled data and disfluent Hindi sentences as unlabeled data.
The best model for Hindi uses real and synthetic Hindi sentences as labeled data and disfluent Bengali sentences as unlabeled data. The best model for Marathi uses real and synthetic Marathi sentences as labeled data and disfluent Bengali sentences as unlabeled data. The BERT-based transformer that we use as an encoder is the MuRIL
model pretrained on English and many Indian lan-
| Lang | Input | Transliteration | Gloss | Translation | ZS Output | FS Output | | | | | | |
|--------------------------------------------------------------|-------------------------------------|-------------------|-------------|---------------|-------------|-------------------|----|------|----|----|-------------|-------------|
| Bn | িবষয় সয্ার িবষয়টা সয্ার সয্ার আিম একটু ভুল বললাম | biShaya | syaara | | | | | | | | | |
| biShayaTaa syaara syaara aami ekaTu bhula balalaama | subject | sir | | | | | | | | | | |
| the_matter | sir | | | | | | | | | | | |
| sir I_am a_little wrong I_said | Subject | Sir | | | | | | | | | | |
| Subject Sir Sir I said a little wrong | িবষয়টা | আিম | িবষয়টা সয্ার আিম | | | | | | | | | |
| একটু ভুল বললাম | একটু ভুল বললাম | | | | | | | | | | | |
| Hi | तो यह है अ स्कु ल | to | yaha | hai | a | so it is a school | so | this | is | uh | यह है अ स्कु ल | तो यह है स्कु ल |
| skula | school | | | | | | | | | | | |
| Mr | देशातील | प्रत्येक | | | | | | | | | | |
| शहरात | प्रत्येक | | | | | | | | | | | |
| गावात ही स्वǵता मोहीम सुरू आहे | deshaatiila pratyeka | sha | | | | | | | | | | |
| haraata pratyeka gaavaata hii svachChataa mohiima suruu aahe | in_the_country each in_the_city each in_the_village this cleanliness campaign continue is | This | cleanli | | | | | | | | | |
| ness | drive | is | | | | | | | | | | |
| going | on | in | | | | | | | | | | |
| every | city | in | | | | | | | | | | |
| every village of the country | देशातील | प्रत्येक | | | | | | | | | | |
| गावात ही स्वǵता मोहीम सुरू आहे | देशातील | प्रत्येक | | | | | | | | | | |
| गावात ही स्वǵता मोहीम सुरू आहे | | | | | | | | | | | | |
guages (Khanuja et al., 2021). MuRIL representations for Indian languages are of superior quality compared to other multilingual Transformer-based models like mBERT (Devlin et al., 2019).
## 5.3 Evaluation
To evaluate our model, we train baselines for DC
in zero-shot and few-shot settings. *ZeroShot* is based on Kundu et al. (2022). *FewShot* is based on training MuRIL on all real and synthetic data available in the chosen language, along with labeled data in a related Indian language (for Bengali, either Hindi or Marathi can act as a related Indian language). *FewShotAdv* is the Seq-GAN-BERT
model without any unlabeled data. Although models like BiLSTM-CRF have been as alternatives to transformers for sequence tagging, direct finetuning often performs better (Ghosh et al., 2022). Performance of DC systems is usually measured with F1 scores (Ferguson et al., 2015; Honnibal and Johnson, 2014; Jamshid Lou and Johnson, 2017).
Table 3 shows the comparison of various baselines against our model.
Our model, Seq-GAN-BERT with unlabeled sentences, performs better than the other baselines and establishes a new state-of-the-art for DC in Bengali, Hindi and Marathi. Our model benefits from adversarial training using both unlabeled data and multilingual training. Comparison of our model's output with respect to the ZeroShot baseline is discussed in Table 2 (for more examples, refer to Appendix B). The observed precision and recall scores of these models during testing show that without adversarial training, the model performs with high precision but low recall. However, with adversarial training, the model improves its recall without compromising much on precision.
Lang Model P R F1
Bn ZeroShot 93.06 62.18 74.55
FewShot 66.37 68.20 67.27
FewShotAdv 84.00 78.93 81.39
Our model 87.57 80.23 **83.74**
Hi ZeroShot 85.38 79.41 82.29
FewShot 82.99 81.33 82.15
FewShotAdv 88.15 83.14 85.57
Our model 89.83 86.51 **88.14**
Mr ZeroShot 87.39 61.26 72.03
FewShot 82.00 60.00 69.30
FewShotAdv 84.21 64.21 72.86
Our model 85.34 67.58 **75.43**
The zero-shot model (without adversarial training)
classifies less words as disfluent but at a high accuracy, whereas the few-shot model (with adversarial training) correctly classifies more words as disfluent.
## 6 Task 2: Stuttering Dc In English
We have already shown how our proposed architecture learns better semantic representations for DC using small amounts of manually annotated labeled data. In this section, we present a similar experiment in Stuttering DC (SDC). We define SDC as the task of removing disfluent elements in spoken utterances that are caused by stuttering speech impairment. Since this is the first attempt to model stuttering correction as disfluency removal, we make our version of the existing dataset for stuttering publicly available for research purposes and provide various baseline comparisons. We show that our model generalizes well for this task and is able to remove disfluent elements in stuttering speech.
## 6.1 Dataset
The UCLASS dataset is created by transcribing audio interviews of 14 anonymous teenagers who stutter and consists of two released versions (Howell et al., 2004). Both versions of this corpus are available for free download and research. We create 250 disfluent-fluent parallel sentences from the available transcripts of such utterances. The dataset is released here3.
## 6.2 Processing & Training
We follow the same steps as before (section 5.2).
Stuttered syllables are represented in the text, separated by a space delimiter and treated as a disfluent term. This gold-standard dataset is split into 150 sentences for training and 100 sentences for testing. The training sentences are used as labeled data for the model and unlabeled data from Switchboard (Godfrey et al., 1992) or Kundu et al.
(2022) is used to facilitate multilingual training.
Our model performs best when we use synthetic Bengali disfluent sentences as unlabeled data.
## 6.3 Evaluation
We use five baselines to evaluate our model's performance. *SupervisedGold* uses the gold standard data and trains the MuRIL model for token classification. *SupervisedGoldSWBD and SupervisedGoldLARD* uses a combination of the gold standard dataset along with 1000 disfluent sentences from the Switchboard corpus and LARD dataset
(Passali et al., 2021). *AdversarialSWBD and AdversarialLARD* uses the Seq-GAN-BERT to train on a combination of labeled sentences from gold standard corpus and unlabeled sentences from the Switchboard corpus and LARD dataset. Table 4 displays our results averaged over multiple seeds.
Our model outperforms all baselines. Improvement over *AdversarialLARD* shows the benefit of multilingual training. We also used synthetic Hindi or Marathi data while training, but achieved lower scores than the *AdversarialLARD* baseline.
| Model | P | R | F1 |
|--------------------|-------|-------|-------|
| SupervisedGold | 89.11 | 78.08 | 83.23 |
| SupervisedGoldSWBD | 87.34 | 86.50 | 86.92 |
| SupervisedGoldLARD | 74.58 | 86.33 | 80.02 |
| AdversarialSWBD | 85.76 | 84.17 | 84.96 |
| AdversarialLARD | 86.21 | 84.82 | 85.51 |
| Our model | 87.26 | 88.10 | 87.68 |
Summary of results: In this paper, we evaluate our proposed architecture for low-resource DC using two tasks: 1) DC in Indian languages and 2)
Stuttering DC in English. Our model outperforms competitive baselines across both these tasks establishing a new state-of-the-art for Indian languages DC. The adversarial training in our model improves the representations of a BERT-based encoder for disfluent/fluent classification. We show that multilingual training benefits such tasks as the generator is trained to create better representations of fake data to fool the discriminator.
## 7 Conclusion
Adversarial training using unlabeled data can benefit disfluency correction when we have limited amounts of labeled data. Our proposed model can also be used to correct stuttering in ASR transcripts with high accuracy.
Future work lies in integrating speech recognition models like Whisper4 or wav2vec 2.0
(Baevski et al., 2020) to create end-to-end speechdriven DC models. It will also be insightful to see how this model transfers to other low-resource languages with different linguistic properties.
## 8 Acknowledgements
We would like to thank the anonymous reviewers and area chairs for their suggestions to strengthen the paper. This work was done as part of the Bahubhashak Pilot Project on Speech to Speech Machine Translation under the umbrella of National Language Technology Mission of Ministry of Electronics and IT, Govt. of India. We would 4https://cdn.openai.com/papers/whisper.pdf also like to thank Nikhil Saini for valuable discussions during the course of this project.
## 9 Limitations
There are two main limitations of our work.
Firstly, since there are no known baselines for Indian language DC except Kundu et al. (2022),
other architectures might perform better than our model. Our claim that Seq-GAN-BERT tries to maximize the information gained from unlabeled sentences is supported by superior performance over baselines defined in this work and other related models. Secondly, due to the lack of good quality labeled datasets, our test sets contained only 100 sentences. However, we believe that the consistency of our high-performing models across languages and multiple seeded experiments presents a positive sign for DC in low-resource settings.
## 10 Ethics Statement
The aim of our work was to design an adversarial training-enabled token classification system that is able to correctly remove disfluencies in text. The datasets used in this work are publicly available and we have cited the sources of all the datasets that we have used.
## References
Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, and Michael Auli. 2020. Wav2vec 2.0: A framework for self-supervised learning of speech representations. In *Proceedings of the 34th International* Conference on Neural Information Processing Systems, NIPS'20, Red Hook, NY, USA. Curran Associates Inc.
Heather Bortfeld, Silvia D. Leon, Jonathan E. Bloom, Michael F. Schober, and Susan E. Brennan. 2001.
Disfluency Rates in Conversation: Effects of Age, Relationship, Topic, Role, and Gender. Language and Speech, 44(2):123–147.
Danilo Croce, Giuseppe Castellucci, and Roberto Basili. 2020. GAN-BERT: Generative adversarial learning for robust text classification with a bunch of labeled examples. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 2114–2119, Online. Association for Computational Linguistics.
Gary S. Dell, Lisa K. Burger, and William R. Svec.
1997. Language production and serial order: A functional analysis and a model. *Psychological Review*,
104(1):123–147.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers),
pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics.
James Ferguson, Greg Durrett, and Dan Klein. 2015.
Disfluency detection with a semi-Markov model and prosodic features. In *Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies*, pages 257–262, Denver, Colorado. Association for Computational Linguistics.
Sreyan Ghosh, Sonal Kumar, Yaman Kumar Singla, Rajiv Ratn Shah, and Sharma Umesh. 2022. Span classification with structured information for disfluency detection in spoken utterances. In *Interspeech*.
John J. Godfrey, Edward Holliman, and J. McDaniel.
1992. Switchboard: telephone speech corpus for research and development. [Proceedings] ICASSP-92:
1992 IEEE International Conference on Acoustics, Speech, and Signal Processing, 1:517–520 vol.1.
Matthias Honal and Tanja Schultz. 2004. Correction of disfluencies in spontaneous speech using a noisychannel approach.
Matthew Honnibal and Mark Johnson. 2014. Joint incremental disfluency detection and dependency parsing. *Transactions of the Association for Computational Linguistics*, 2:131–142.
Julian Hough and David Schlangen. 2015. Recurrent neural networks for incremental disfluency detection.
Peter Howell, Stephen Davis, Jon Bartrip, and Laura Wormald. 2004. Effectiveness of frequency shifted feedback at reducing disfluency for linguistically easy, and difficult, sections of speech (original audio recordings included). Stammering research : an on-line journal published by the British Stammering Association, 1(3):309–315.
Paria Jamshid Lou and Mark Johnson. 2017. Disfluency detection using a noisy channel model and a deep neural language model. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 547–553, Vancouver, Canada. Association for Computational Linguistics.
Paria Jamshid Lou and Mark Johnson. 2020. Improving disfluency detection by self-training a selfattentive model. In *Proceedings of the 58th Annual* Meeting of the Association for Computational Linguistics, pages 3754–3763, Online. Association for Computational Linguistics.
Mark Johnson and Eugene Charniak. 2004. A tagbased noisy channel model of speech repairs. In *Proceedings of the 42nd Annual Meeting on Association* for Computational Linguistics, ACL '04, page 33es, USA. Association for Computational Linguistics.
Douglas Jones, Florian Wolf, Edward Gibson, Elliott Williams, Evelina Fedorenko, Douglas Reynolds, and Marc Zissman. 2003. Measuring the readability of automatic speech-to-text transcripts.
Simran Khanuja, Diksha Bansal, Sarvesh Mehtani, Savya Khosla, Atreyee Dey, Balaji Gopalan, Dilip Margam, Pooja Aggarwal, Rajiv Teja Nagipogu, Shachi Dave, Shruti Gupta, Subhash Gali, Vish Subramanian, and Partha Talukdar. 2021. Muril: Multilingual representations for indian languages.
Rohit Kundu, Preethi Jyothi, and Pushpak Bhattacharyya. 2022. Zero-shot disfluency detection for Indian languages. In *Proceedings of the 29th International Conference on Computational Linguistics*, pages 4442–4454, Gyeongju, Republic of Korea. International Committee on Computational Linguistics.
M. Ostendorf and S. Hahn. 2013. A sequential repetition model for improved disfluency detection. *Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH*, pages 2624–2628.
Tatiana Passali, Alexios Gidiotis, Efstathios Chatzikyriakidis, and Grigorios Tsoumakas. 2021. Towards human-centered summarization: A case study on financial news. In Proceedings of the First Workshop on Bridging Human–Computer Interaction and Natural Language Processing, pages 21–27, Online. Association for Computational Linguistics.
Tatiana Passali, Thanassis Mavropoulos, Grigorios Tsoumakas, Georgios Meditskos, and Stefanos Vrochidis. 2022. LARD: Large-scale artificial disfluency generation. In *Proceedings of the Thirteenth Language Resources and Evaluation Conference*, pages 2327–2336, Marseille, France. European Language Resources Association.
Sharath Rao, Ian Lane, and Tanja Schultz. 2007. Improving spoken language translation by automatic disfluency removal: evidence from conversational speech transcripts. In *Proceedings of Machine* Translation Summit XI: Papers, Copenhagen, Denmark.
Mohammad Sadegh Rasooli and Joel Tetreault. 2013.
Joint parsing and disfluency detection in linear time.
In *Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing*, pages 124–129, Seattle, Washington, USA. Association for Computational Linguistics.
Nikhil Saini, Jyotsana Khatri, Preethi Jyothi, and Pushpak Bhattacharyya. 2020. Generating fluent translations from disfluent text without access to fluent
references: IIT Bombay@IWSLT2020. In Proceedings of the 17th International Conference on Spoken Language Translation, pages 178–186, Online. Association for Computational Linguistics.
Elizabeth Shriberg. 1994. Preliminaries to a theory of speech disfluencies.
Feng Wang, Wei Chen, Zhen Yang, Qianqian Dong, Shuang Xu, and Bo Xu. 2018. Semi-supervised disfluency detection. In *Proceedings of the 27th International Conference on Computational Linguistics*,
pages 3529–3538, Santa Fe, New Mexico, USA. Association for Computational Linguistics.
Shaolei Wang, Zhongyuan Wang, Wanxiang Che, Sendong Zhao, and Ting Liu. 2021. Combining self-supervised learning and active learning for disfluency detection. ACM Trans. Asian Low-Resour.
Lang. Inf. Process., 21(3).
Wen Wang, Gokhan Tur, Jing Zheng, and Necip Fazil Ayan. 2010. Automatic disfluency removal for improving spoken language translation. In *2010 IEEE*
International Conference on Acoustics, Speech and Signal Processing, pages 5214–5217.
Shuangzhi Wu, Dongdong Zhang, Ming Zhou, and Tiejun Zhao. 2015. Efficient disfluency detection with transition-based parsing. In *Proceedings of* the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing
(Volume 1: Long Papers), pages 495–503, Beijing, China. Association for Computational Linguistics.
Masashi Yoshikawa, Hiroyuki Shindo, and Yuji Matsumoto. 2016. Joint transition-based dependency parsing and disfluency detection for automatic speech recognition texts. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1036–1041, Austin, Texas.
Association for Computational Linguistics.
Vicky Zayats, Mari Ostendorf, and Hannaneh Hajishirzi. 2016. Disfluency detection using a bidirectional lstm. pages 2523–2527.
## A Challenges In Creating Data For Dc
There are three steps involved in creating data for DC - i) Transcribing the speech utterance, ii) Identifying disfluent elements in the transcript and iii) Creating the fluent sentence after removing disfluent utterances. Identifying disfluencies is not a straightforward task. Our observations show that, on average, it takes 2 minutes to create a pair of disfluent-fluent sentences for an average 15second speech utterance.
Collecting data for SDC comes with its challenges. Currently available datasets only focus on speaker details and record stuttered speech for analysis. Since SDC requires speech to be transcribed and annotated, creating parallel sentences for training is difficult. We derive our dataset from open-source resources. However, to create manual data at a large scale, an appropriate recording environment must be designed where speakers who stutter can interact with others over various topics with skilled annotators listening and transcribing the audio. Thus, creating data for DC is a challenging task (Section 1) and we hope that our contributed dataset can facilitate further research in stuttering correction.
## B Case Study: Analysing Differences In The Zero Shot And Few Shot Settings
In Indian languages DC, the *ZeroShot* baseline corresponds to a zero-shot method for DC, whereas our model is an adversarially trained few-shot method for DC. We perform qualitative comparisons across both these models to understand the difference through case studies from the test set.
Table 5 shows our results. Our few-shot model qualitatively performs better than the zero-shot baseline in most cases and thus strengthens the results mentioned in Section 5.3.
| Lang | Input | Transliteration | Gloss | Translation | ZS Output | FS Output |
|----------------------------------------------------------------------------------------|----------|-------------------|---------|---------------|-------------|-------------|
| Hi | बहत | तेज | | | | |
| चलाते थे और मैं अ क्या कहते है ह एɟनमलस ɟगनता था रास्ते मैं | bahata | teja | | | | |
| chalaate | the | | | | | |
| aura | mai.m | | | | | |
| a | kyaa | ka | | | | |
| hate | hai | ha | | | | |
| enimalasa ginataa thaa raaste mai.m | a_lot | quick | | | | |
| drive | were | | | | | |
| and I a what say is h animals count was way I | Used | to | | | | |
| drive | very | | | | | |
| fast | and | | | | | |
| I | used | to | | | | |
| count | the | | | | | |
| animals | on | | | | | |
| the way | बहत | तेज | | | | |
| चलाते थे और मैं अ क्या कहते है ह एɟनमलस ɟगनता था रास्ते मैं | बहत | तेज | | | | |
| चलाते थे और मैं ह एɟनमलस ɟगनता था रास्ते मैं | | | | | | |
| Mr | मी | आज | अं | | | |
| फु लांचे | जे | | | | | |
| प्रदशर्न | पाɟहले | | | | | |
| त्यात व्हटʓकल गाडर्नची संकल्पना पाहायला ɠमळाली | mii | aaja | | | | |
| a.m | phu | | | | | |
| laa.mche | je | | | | | |
| pradarshana paahile tyaata vharTiikala gaarDanachii sa.mkalpanaa paahaayalaa mildaalii | I | today | uh | | | |
| of_flowers j exhibition saw in_it vertical of_the_garden concept to_see received | The | concept | | | | |
| of | vertical | | | | | |
| garden | was | | | | | |
| seen | in | the | | | | |
| exhibition | I | | | | | |
| saw today | आज | अं | | | | |
| फु लांचे | जे | | | | | |
| प्रदशर्न | पाɟहले | | | | | |
| त्यात व्हटʓकल गाडर्नची संकल्पना पाहायला ɠमळाली | मी | आज | | | | |
| फु लांचे | जे | | | | | |
| प्रदशर्न | पाɟहले | | | | | |
| त्यात व्हटʓकल गाडर्नची संकल्पना पाहायला ɠमळाली | | | | | | |
Table 5: Some more examples of comparison between performance of Zero Shot DC Few Shot DC models, in addition to examples mentioned in Table 2.
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
✓ A1. Did you describe the limitations of your work?
Limitations have been mentioned as section 8 of the paper submitted
A2. Did you discuss any potential risks of your work?
Not applicable. Since our paper is about disfluency correction through text, we do not anticipate any risks of our work or its potential use in other tasks.
✓ A3. Do the abstract and introduction summarize the paper's main claims?
Yes, abstract and introduction summarize the paper's main claims
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✓ **Did You Use Or Create Scientific Artifacts?** Section 5.1 Describes The Data We Create
✓ B1. Did you cite the creators of artifacts you used?
The authors of the dataset we use have been cited in Section 4.1 and Section 5.1
✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
It has been mentioned that the data we use is open source in sections 4.1 and 5.1
✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
It has been mentioned that the data we use is open source and consistent with its intended use in sections 4.1 and 5.1
✓ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
It has been mentioned in section 5.1 that the data we use and create is anonymous B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
Not applicable. The data we create is derived from an existing dataset that is open source and provides relevant documentation. We have cited the original dataset in section 4.1 and 5.1.
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
Relevant statistics have been mentioned in sections 4.2 and 5.2 The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance.
## C ✓ **Did You Run Computational Experiments?** Section 4.2 And Section 5.2
C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used? Not applicable. Our model architecture does not compulsorily require any GPU support and thus is usable on many established frameworks
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Relevant details have been included in section 4.1, 4.2, 5.1 and 5.2
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
Relevant details have been included in sections 4.3 and 5.3 and Appendix C
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
Relevant details have been included in sections 4 and 5
## D ✗ **Did You Use Human Annotators (E.G., Crowdworkers) Or Research With Human Participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
Not applicable. Left blank.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)?
Not applicable. Left blank.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
Not applicable. Left blank.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
Not applicable. Left blank.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
Not applicable. Left blank. |
cercas-curry-cercas-curry-2023-computer | Computer says {``}No{''}: The Case Against Empathetic Conversational {AI} | https://aclanthology.org/2023.findings-acl.515 | Emotions are an integral part of human cognition and they guide not only our understanding of the world but also our actions within it. As such, whether we soothe or flame an emotion is not inconsequential. Recent work in conversational AI has focused on responding empathetically to users, validating and soothing their emotions without a real basis. This AI-aided emotional regulation can have negative consequences for users and society, tending towards a one-noted happiness defined as only the absence of {``}negative{''} emotions. We argue that we must carefully consider whether and how to respond to users{'} emotions. | # Computer Says "No": The Case Against Empathetic Conversational Ai
Alba Curry School of Philosophy, Religion and History of Science University of Leeds [email protected]
## Abstract
Emotions are an integral part of human cognition and they guide not only our understanding of the world but also our actions within it. As such, whether we soothe or flame an emotion is not inconsequential. Recent work in conversational AI has focused on responding empathetically to users, validating and soothing their emotions without a real basis. This AI-aided emotional regulation can have negative consequences for users and society, tending towards a one-noted happiness defined as only the absence of "negative" emotions. We argue that we must carefully consider whether and how to respond to users' emotions.
## 1 Introduction
Recent work in conversational AI has focused on generating empathetic responses to users' emotional states (e.g., Ide and Kawahara, 2022; Svikhnushina et al., 2022; Zhu et al., 2022) as a way to increase or maintain engagement and rapport with the user and to simulate intelligence. However, these empathetic responses are problematic.
First, while a system might never claim to be human, responses simulating humanness prompt users to further behave as though the systems were
(Reeves and Nass, 1996). Empathy, like all emotions, is likely a uniquely human trait and systems that feign it are in effect feigning humanity. The ethical issues surrounding anthropomorphism have been discussed at length and are beyond the scope of this paper (Salles et al., 2020; Bryson, 2010).
Second, empathy requires an ability to both understand and share another's emotions. As such, responding empathetically assumes that the system is able to correctly *identify* the emotion, and that it is able to *feel* the emotion itself.1 Neither one of 1Correctly identifying an emotion is problematic for animals including human beings. However, reasons differ between conversation AI and human beings: Human beings vary in their capacity to identify emotions in part because we strug-
Amanda Cercas Curry MilaNLP
Department of Computing Sciences Bocconi University [email protected] these holds true for conversational AI (or in fact for any AI system).2 Third, even if conversational AI were to correctly identify the user's emotions, and perform empathy, we should ethically question the motives and outcomes behind such an enterprise. Svikhnushina et al. (2022) put forward a taxonomy of empathetic questions in social dialogues, paying special attention to the role questions play in regulating the interlocutor's emotions. They argue for the crucial role effective question asking plays in successful chatbots due to the fact that often questions are used to express "empathy" and attentiveness by the speaker. Here we highlight the ethical concerns that arise from questions that are characterised by their emotion-regulation functions targeted at the user's emotional state. It is important to note that our argument applies to any use of empathetic AI
(see also for example (Morris et al., 2018; De Carolis et al., 2017)). What happens if the chatbot gets it right? There may be instances where a chatbot correctly identifies that a given situation is worthy of praise and amplifies the pride of the user and the result is morally unproblematic. For example, when (Svikhnushina et al., 2022) use the example of amplifying pride in the context of fishing. What happens if it gets it wrong? It depends on the type of mistake: a) The chatbot fails to put into effect a question's intent, it would be ethically inconsequential; 3b) It amplifies or minimises an inappropriate emotion.This is the problem we will focus on, arguing that emotional regulation has no place in conversational AI and as such empathetic responses are deeply morally problematic. While humans will gle at times to identify our own or extend empathy to certain members of society, but we have the capability of identifying emotions. Furthermore, our ability to identify the emotions of others builds, at least in part, from our own emotions.
2Moreover, Barrett (2017) have already problematised the identification of human emotions using language or facial expressions in general.
3In fact, if the chatbot failed to be empathetic then it would simply not engage us in the intended ways.
necessarily show empathy for one another, conversational AI cannot understand the emotion and so cannot make an accurate judgement as to its appropriateness. This lack of understanding is key as we cannot predict the consequences of moderating an emotion we don't understand, and a dialogue system cannot be held accountable for them.
## 2 The Crucial Roles Of Emotions
What emotions are is still up for debate (Barrett, 2017; Scarantino and de Sousa, 2018). However, their significance for the individual and for society has received renewed interest (Greenspan, 1995; Bell, 2013; Cherry, 2021). Regardless of the emotion model one picks, emotions play important roles, both epistemic and *conative* ones (Curry, 2022). They perform at least three epistemic roles:
(1) They signal to the individual experiencing the emotion what she herself values and how she sees the world (e.g., if you envy your colleague's publications this tells you you value publications and deem yourself similar enough to your colleague that you can compare yourself (Protasi, 2021)); (2)
they signal to others how we see the world; and
(3) emotional interactions are invaluable sources of information for third-party observers since they tell us what the members of the interaction value. For example, (1) when you grieve, you signal to yourself and anyone observing that you deem to have lost something of value. It is conceivable that you were unaware up to that point that you valued what you lost—this is captured by the saying "you don't know what you have till it's gone." Furthermore,
(2) your friends and family may learn something about you by observing your grief. They too may not have known how much something meant for you. Finally, (3) an observer may also learn about the dynamics of grief (whether it is appropriate to express it for example) by observing whether or not your family validates your grief.
Furthermore, emotions play *conative* roles, meaning that they are involved in important ways with our motivation and desire to act in certain ways. In other words, not only do some emotions compel and motivate you to act, but also how you act is coloured by the emotion you are experiencing.
For example, your anger signals that you perceive that an injustice has occurred. If your boss fails to promote the person who deserves it because of their gender, your anger would motivate you to write a letter of complaint or speak to HR about it.
Importantly, all emotions, including the socalled "negative" emotions (e.g., anger, contempt, hatred, shame, envy, guilt, etc.) also share these functions. These emotions are not negative in the sense of being "bad", they are called negative because they are painful, and therefore they are emotions that we would tend to avoid for ourselves. A
world without injustice would certainly be ideal but we would not want a world of injustice where we were unequipped to notice or become motivated to fight it. Hence why it is imperative that we ask ourselves under which circumstances we ought to enhance or soothe emotions.
## 3 The Problem With Empathy
Literature discussing the value and power of empathy for conversational AI understands empathy as a tool to establish a common ground for meaningful communication and to appear more likeable to users. The authors of these studies understand empathy broadly as "the feeling by which one understands and shares another person's experiences and emotions" (De Carolis et al., 2017). Empathy facilitates engagement through the development of social relationships, affection, and familiarity. Furthermore, for Svikhnushina et al. (2022), empathy is required in order to enable chatbots to ask questions with emotion regulation intents. For example, questions may be used to amplify the user's pride or de-escalate the user's anger, or frustration.
Empathy, although a common phenomenon, is not a simple one. It enjoys a long history in various scholarly disciplines. Indeed, a lot of ink has been spilled (and still is), for example, over how to make sense of character engagement. How do we, human beings, care for fictional characters? How are we intrigued and moved by their adventures and respond to the emotions and affects expressed in their voices, bodies, and faces as well as imagine the situation they are in and wish them success, closure, or punishment? Empathy is taken to be a key element and yet the exact nature of how human beings are able to experience empathy for fictional characters is currently being debated (Tobón, 2019).
The reason for highlighting this diversity is that conversational AI would do well to engage seriously with the rich intellectual history of empathy.
The definition it tends to engage with lacks the level of complexity required to understand this complex phenomenon. Moreover, it tends to obfuscate the darker sides of empathy. Leaving aside the fact that defining empathy as the "reactions of one individual to the observed experiences of another"
(De Carolis et al., 2017) tells us very little about the process by which a human beings, let alone conversational AI, may do this, what we take issue with is what chatbots hope to do with that empathy.
In other words, if for the sake or argument, we presume that conversational AI is able to accurately identify our emotions, the issue of how we deploy empathy is of huge ethical relevance.
Here we offer a brief summary of three important views against empathy: Prinz (2011) argues against the common intuition that empathy is by and large a good thing and thus desirable. He raises several issues such as empathy being easily manipulated (such as during a trial), and empathy being partial (we are more empathetic towards people we perceive to be of our own race, for example). Both claims have been empirically verified. Thinking about how this might affect empathetic conversational AI for example in the case of using them for social assistive robots, we might worry if based on its empathetic reactions it chose to help certain people over others.
Taking the argument further, Bloom (2017) argues against empathy and for what he calls rational compassion. He contends that empathy is one of the leading motivators of inequality and immorality in society. Thus, far from helping us to improve the lives of others, empathy is a capricious and irrational emotion that appeals to our narrow prejudices; it muddles our judgement and, ironically, often leads to cruelty. Instead, we ought to draw upon a more distanced compassion.4 There are three lessons we can take from this:
(1) Given empathy's prejudices, we would need to think deeper about how to mitigate them in conversational AI; (2) Given that empathy is used not just know what brings people pleasure, but also what brings pain, we might want to question the general future uses of empathy in conversational AI; (3) if we buy Bloom's argument, then conversational AI
should consider not imitating human beings, but becoming agents of rational compassion.
Breithaupt (2019) also takes issue with empathy, arguing that we commit atrocities not out of a failure of empathy, but rather as a direct consequence of successful, even overly successful, empathy. He starts the book by reminding us that "[e]xtreme acts 4Assessing Bloom's argument with regards to rational compassion and whether it would be feasible for conversational AI
is beyond the scope of this paper although worthy of pursuit.
of cruelty require a high level of empathy."
The further lesson we can take from this is that while people generally assume that empathy leads to morally correct behaviour, and certainly there are many positive sides of empathy, we should not rely on an overly simple or glorified image of empathy.
However, our problem is not necessarily with empathy per se, but rather with the explicit functions conversational AI hopes to achieve with it, namely to enhance engagement, to inflate emotions deemed positive, and to soothe emotions deemed negative (e.g., Svikhnushina et al., 2022). Our claim is that we ought to think carefully about the consequences of soothing negative emotions only because they we have a bias against them. Not only is this approach based on a naive understanding of emotions, it fails to recognise the importance of human beings being allowed to experience and express the full spectrum of emotions. One ought to not experience negative emotions because there is nothing to be upset about, not because we have devised an emotional pacifier. In other words, the issue is that conversational AI lacks a sound value system for deciding why certain emotions are validated and others soothed. Furthermore, this AIaided emotional regulation can have negative consequences for users and society, tending towards a one-noted notion of happiness defined as only the absence of "negative" emotions.
## 4 When Emotions Get Things Wrong
There are two illustrative problems with the kinds of decisions behind amplifying and de-escalating emotions. One is the problem of what the ideal character might be. When you talk to a friend they will decide whether to soothe or amplify your emotions based not just on the situation but also on who they deem you to be. If they think you are someone who has a hard time standing up for yourself they will amplify your anger to encourage you to fight for yourself, but if they think you are someone who leans too much on arrogance, they will de-escalate your sense of pride—even if, all things being equal, your pride on that occasion was warranted. Hence, not only would a conversational AI require prior knowledge of the interlocutor in terms of her character, but furthermore it would have to decide what are desirable character traits.
The second question regards what an ideal emotion in a particular situation might be. We may all find it easy to say that negative emotions such as anger often get things wrong and lead to undesirable outcomes. However, positive emotions such as joy, hope, or pride which we may intuitively wish to amplify can also get things wrong.
We assess and criticise emotions along a number of distinct dimensions: Firstly, emotions may be criticised when they do not fit their targets. You may, for example, be open to criticism for feeling fear in the absence of danger. Unfitting emotions fail to correctly present the world. In the case of pride, would we want to amplify someone's pride if they either did not in fact achieve anything, or if their achievement was not merited? For example, if their nephew did very well in maths when in fact we know their nephew cheated? Second, an emotion may be open to criticism when it is not based on good evidence or is unreasonable.
Consider the person who suffers from hydrophobia: Given that in the vast majority of situations water is not dangerous, this person's fear is both unreasonable and unfitting. But even fitting emotions may be unreasonable. One may, for example, be terrified of tsunamis because one believes that they cause genetic mutations. In this case, one's fear is fitting—tsunamis are very dangerous—yet the fear is unreasonable since it is not based on good reasons. Third, an emotion may be criticised because it isn't prudent to feel. We might warn someone not to show anger when interacting with a person with a gun since they might get themselves killed; anger in this case may be reasonable and fitting given the gunman's actions and yet imprudent.
Finally, we may condemn emotions as morally nonvaluable because of the unacceptable way in which they present their targets, e.g., one may, argue that schadenfreude is morally objectionable because it presents the pain of another person as laughable.
Positive emotions may be unfitting, unreasonable, and imprudent, as well as morally condemnable just as negative emotions may well be fitting, reasonable, and prudent, as well as morally laudable. In other words, even if one is equipped with empathy there are crucial normative decisions involved in question intents aimed at emotional regulation.5 Amplifying and de-escalating emotion inappropriately can have devastating moral outcomes.
## 5 Empathy And Responsibility
Human beings, all things being equal, will inevitably experience empathy. A reasonable human being experiencing empathy for another is proof of the importance of someone else's emotional statefor better or for worse. This supports the idea that our emotions are important, as opposed to the notion that they hinder rationality and ought to be regulated. They tell us many things about our world.
Similarly to many NLP systems' understanding of language, the empathetic responses of conversational AI are only performative (Bender and Koller, 2020). Thus, they provide a false sense of validity or importance. What if someone is experiencing an unfitting, unreasonable, or morally reprehensible emotion? Should a chatbot still showcase empathy? We hope to have shown that such decisions are deeply morally problematic and complex.
Hence, another key problem is responsibility. A
human agent may choose to express their empathy
(even if they cannot choose feeling it) and they may choose to attempt to regulate someone else's emotions based on their knowledge of the situation and the speaker's character. If a human being wrongly regulates someone else's emotions, they will be morally responsible for the consequences. Who is morally responsible in the case of conversational AI agents? Who are they benefiting when they are not actually benefiting the human agent? This issue is further elaborated on by Véliz (2021).
## 6 Related Work
Our article sits at the intersection of emotion detection, response generation, and safety in conversational AI. We keep this section brief as we cite relevant work throughout the article. Several works have already focused on the issue of giving AI systems sentience, such as Bryson (2010). While this could make the systems truly empathetic, we agree that we have a duty not to create sentient machines.
Lahnala et al. (2022) problematise NLP's conceptualisation of empathy which, they argue, is poorly defined, leading to issues of data validity and missed opportunities for research. Instead, we argue that even a more specific definition of empathy presents ethical issues that cannot be overlooked or ignored and must carefully evaluated.
Dinan et al. (2022) provide a framework to classify and detect safety issues in end-to-end conversational systems. In particular, they point out systems that respond inappropriately to offensive content and safety-critical issues such as medical and emergency situations. We could apply their framework to empathetic responses where the system takes the role of an "impostor": empathetic responses require a system to pretend to understand the emotion. However, the extent to which emotions play a role in human cognition and what the consequences of regulating these emotions for the users are has not been discussed in the literature to the best of our knowledge.
## 7 Conclusion
In this position paper, we argued that emotional regulation has no place in conversational AI and as such empathetic responses are deeply morally problematic. While humans will necessarily show empathy for one another, conversational AI cannot understand the emotion and so cannot make an accurate judgement as to its reasonableness. This lack of understanding is key because we cannot predict the consequences of assuaging or aggravating an emotion, and a dialogue system cannot be held accountable for them. We hope to encourage reflection from future researchers and to initiate a discussion of the issue, not only in this particular case but also more reflection when it comes to pursuing seemingly positive goals such as bringing disagreeing parties towards agreement. Like with other ethically sensible topics, the community should come together to agree on a strategy that minimises harm.
## Limitations
While we strongly argue against empathetic conversational systems, there may be use cases - such as psychotherapy or educational chatbots - where validating a user's emotions is, if not required, helpful in terms of their goal. In addition, while a lot of the work on empathetic responses we have discussed is intentional, generative models like ChatGPT produce relatively uncontrolled responses that may well be unintentionally empathetic. As with toxic outputs, care should be taken to prevent these models from validating users' emotions that cannot be understood.
## Acknowledgements
We thank the anonymous reviewers and Gavin Abercrombie for their thorough and helpful comments. This project has partially received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation program (grant agreement No. 949944, INTEGRATOR). Amanda Cercas Curry is a member of the MilaNLP group and the Data and Marketing Insights Unit of the Bocconi Institute for Data Science and Analysis.
## References
Lisa Feldman Barrett. 2017. How emotions are made:
The secret life of the brain. Pan Macmillan.
Macalester Bell. 2013. *Hard feelings: The moral psychology of contempt*. Oxford University Press.
Emily M Bender and Alexander Koller. 2020. Climbing towards nlu: On meaning, form, and understanding in the age of data. In *Proceedings of the 58th annual meeting of the association for computational* linguistics, pages 5185–5198.
Paul Bloom. 2017. *Against empathy: The case for* rational compassion. Random House.
Fritz Breithaupt. 2019. *The dark sides of empathy*. Cornell University Press.
Joanna J Bryson. 2010. Robots should be slaves. Close Engagements with Artificial Companions: Key social, psychological, ethical and design issues, 8:63–74.
Myisha Cherry. 2021. *The case for rage: Why anger is* essential to anti-racist struggle. Oxford University Press.
Alba Curry. 2022. *An Apologia for Anger With Reference to Early China and Ancient Greece*. Ph.D.
thesis, UC Riverside.
Berardina De Carolis, Stefano Ferilli, and Giuseppe Palestra. 2017. Simulating empathic behavior in a social assistive robot. *Multimedia Tools and Applications*, 76(4):5073–5094.
Emily Dinan, Gavin Abercrombie, A. Bergman, Shannon Spruit, Dirk Hovy, Y-Lan Boureau, and Verena Rieser. 2022. SafetyKit: First aid for measuring safety in open-domain conversational systems. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1:
Long Papers), pages 4113–4133, Dublin, Ireland. Association for Computational Linguistics.
Patricia S Greenspan. 1995. Practical guilt: Moral dilemmas, emotions, and social norms. Oxford University Press on Demand.
Tatsuya Ide and Daisuke Kawahara. 2022. Building a dialogue corpus annotated with expressed and experienced emotions. In *Proceedings of the 60th Annual* Meeting of the Association for Computational Linguistics: Student Research Workshop, pages 21–30, Dublin, Ireland. Association for Computational Linguistics.
Allison Lahnala, Charles Welch, David Jurgens, and Lucie Flek. 2022. A critical reflection and forward perspective on empathy and natural language processing. *arXiv preprint arXiv:2210.16604*.
Robert R Morris, Kareem Kouddous, Rohan Kshirsagar, and Stephen M Schueller. 2018. Towards an artificially empathic conversational agent for mental health applications: system design and user perceptions. *Journal of medical Internet research*,
20(6):e10148.
Jesse Prinz. 2011. Against empathy. *The Southern* Journal of Philosophy, 49:214–233.
Sara Protasi. 2021. *The philosophy of envy*. Cambridge University Press.
Byron Reeves and Clifford Nass. 1996. The media equation: How people treat computers, television, and new media like real people. *Cambridge, UK*,
10:236605.
Arleen Salles, Kathinka Evers, and Michele Farisco.
2020. Anthropomorphism in ai. *AJOB neuroscience*,
11(2):88–95.
A Scarantino and R de Sousa. 2018. Emotion, in "the stanford encyclopedia of philosophy"(winter 2018 edition). EN ZALTA (a cura di), URL: https://plato.
stanford. edu/archives/win2018/entries/emotion.
Laura Silva. 2021. The epistemic role of outlaw emotions. *Ergo*, 8(23).
Ekaterina Svikhnushina, Iuliana Voinea, Anuradha Welivita, and Pearl Pu. 2022. A taxonomy of empathetic questions in social dialogs. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2952–2973, Dublin, Ireland. Association for Computational Linguistics.
Daniel Jerónimo Tobón. 2019. Empathy and sympathy:
two contemporary models of character engagement.
In *The Palgrave handbook of the philosophy of film* and motion pictures, pages 865–891. Springer.
Carissa Véliz. 2021. Moral zombies: why algorithms are not moral agents. *AI & SOCIETY*, 36(2):487–
497.
Ling.Yu Zhu, Zhengkun Zhang, Jun Wang, Hongbin Wang, Haiying Wu, and Zhenglu Yang. 2022. Multiparty empathetic dialogue generation: A new task for dialog systems. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 298–307, Dublin, Ireland. Association for Computational Linguistics.
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
✓ A1. Did you describe the limitations of your work?
Left blank.
✗ A2. Did you discuss any potential risks of your work?
The paper is a position paper warning against a particular application of NLP.
✓ A3. Do the abstract and introduction summarize the paper's main claims?
Left blank.
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B **Did You Use Or Create Scientific Artifacts?**
Not applicable. Left blank.
B1. Did you cite the creators of artifacts you used?
Not applicable. Left blank.
B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
Not applicable. Left blank.
B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
Not applicable. Left blank.
B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
Not applicable. Left blank.
B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
Not applicable. Left blank.
B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
Not applicable. Left blank.
## C ✗ **Did You Run Computational Experiments?**
Left blank.
C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used? Not applicable. Left blank.
The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance.
C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Not applicable. Left blank.
C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
Not applicable. Left blank.
C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
Not applicable. Left blank.
D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
Not applicable. Left blank.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? Not applicable. Left blank.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
Not applicable. Left blank.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
Not applicable. Left blank.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
Not applicable. Left blank. |
serrano-etal-2023-stubborn | Stubborn Lexical Bias in Data and Models | https://aclanthology.org/2023.findings-acl.516 | In NLP, recent work has seen increased focus on spurious correlations between various features and labels in training data, and how these influence model behavior. However, the presence and effect of such correlations are typically examined feature by feature. We investigate the cumulative impact on a model of many such intersecting features. Using a new statistical method, we examine whether such spurious patterns in data appear in models trained on the data. We select two tasks{---} natural language inference and duplicate-question detection{---} for which any unigram feature on its own should ideally be uninformative, which gives us a large pool of automatically extracted features with which to experiment. The large size of this pool allows us to investigate the intersection of features spuriously associated with (potentially different) labels. We then apply an optimization approach to *reweight* the training data, reducing thousands of spurious correlations, and examine how doing so affects models trained on the reweighted data. Surprisingly, though this method can successfully reduce lexical biases in the training data, we still find strong evidence of corresponding bias in the trained models, including worsened bias for slightly more complex features (bigrams). We close with discussion about the implications of our results on what it means to {``}debias{''} training data, and how issues of data quality can affect model bias. | # Stubborn Lexical Bias In Data And Models
Sofia Serrano1 Jesse Dodge2 **Noah A. Smith**12 1Paul G. Allen School of Computer Science & Engineering, University of Washington 2Allen Institute for Artificial Intelligence [email protected], [email protected], [email protected]
## Abstract
In NLP, recent work has seen increased focus on spurious correlations between various features and labels in training data, and how these influence model behavior. However, the presence and effect of such correlations are typically examined feature by feature. We investigate the cumulative impact on a model of many such intersecting features. Using a new statistical method, we examine whether such spurious patterns in data appear in models trained on the data. We select two tasks—natural language inference and duplicate-question detection—for which any unigram feature on its own should ideally be uninformative, which gives us a large pool of automatically extracted features with which to experiment. The large size of this pool allows us to investigate the intersection of features spuriously associated with (potentially different) labels. We then apply an optimization approach to *reweight* the training data, reducing thousands of spurious correlations, and examine how doing so affects models trained on the reweighted data. Surprisingly, though this method can successfully reduce lexical biases in the training data, we still find strong evidence of corresponding bias in the trained models, including worsened bias for slightly more complex features (bigrams). We close with discussion about the implications of our results on what it means to "debias" training data, and how issues of data quality can affect model bias.
## 1 Introduction
Machine learning research today, including within NLP, is dominated by large datasets and expressive models that are able to take advantage of them.
At the same time, as the scale of training data has grown, this explosion of data has come at the expense of data *curation*; for many of the datasets currently in use today, human oversight of the full breadth of their contents has become unrealistic.
This makes it more likely that training datasets contain undesirable associations or shortcuts to learning intended tasks. Many cases are attested (e.g.,
Tsuchiya, 2018; Gururangan et al., 2018; Poliak et al., 2018; McCoy et al., 2019; Rudinger et al.,
2018; Stanovsky et al., 2019; Davidson et al., 2019; Sap et al., 2019), and we suspect a vast number of these so-called "spurious correlations" remain undetected.
One question is whether these unintended biases in the training data propagate to models trained on that data. Recent work has found mixed results on this point (Steed et al., 2022; Joshi and He, 2022).
We begin by introducing an approach to testing for undesirable model biases that can operate using existing held-out data, even though that data might itself have spurious correlations. In particular, we repurpose the classic permutation test to examine whether observed differences in model performance between instances exhibiting more common feature-label pairings and those exhibiting less common feature-label pairings are statistically significant.
For our experiments, we focus on the simplest kind of feature-label association: correlations between lexical features and task labels. We select two tasks (natural language inference and duplicatequestion detection) for which any such lexical feature should be uninformative on its own. Finding strong evidence that models finetuned on three different datasets have at least some of the same lexical biases that exist in their training data, we then examine the extent to which those biases are mitigated by lessening biases in the training data. To do this, we apply an optimization-based approach to reweighting the training instances. The approach brings uneven label distributions closer to uniform for thousands of different intersecting lexical features, many more than we use for our model bias evaluation, and still manages to have a strong effect on the most initially biased features despite our reweighting approach not focusing on those 8131 in particular. We then finetune new models on those (reweighted) datasets. We find that although model bias lessens somewhat when we do this, we still find strong evidence of bias. Surprisingly, this holds even when we consider models that make use of no pretraining data.
We close with a discussion of possible factors contributing to these results. We first note that perhaps the continued relative lack of variety of minority-class examples containing certain features hinders the reweighted models' ability to generalize their recognition of those less-common featureclass pairs, even though the combined weight given to those few instances in the loss function is increased. However, when we examine the effect of our reweighting on higher-order features
(namely, bigrams), we see another problem: the same reweighting that mitigates associations between unigrams and any particular label actually strengthens associations between bigrams and certain labels in data. Based on this observation, we arrive at two conclusions: (1) simultaneously reducing bias across features of different levels of granularity for natural-language data is likely not feasible, and (2) even if we aim to mitigate model bias *only* with respect to simple features, if we do so by reweighting the data, the high-capacity models used in modern NLP are still capable of learning the spurious correlations of the original unweighted data through associations that remain encoded in more complex features even after reweighting. We conclude that bias reduction in NLP cannot be cast purely as a "data problem," and solutions may need to focus elsewhere (e.g., on models).
## 2 What Do We Mean By Bias?
The term "bias" is polysemous, having been adopted by different communities to mean different things, from historically rooted social inequity to skewed model evaluations (Mehrabi et al., 2021) to techniques that help with supervised class imbalance in labels (Chen et al., 2018). In our work, we use "bias" to mean correlations between individual input features and task labels. This framework is fairly general, but our focus in this work is natural language data. Therefore, as an example to illustrate our definition of bias, we will refer to correlations between the presence of individual word types in the input (unigrams) and a given label in a classification task.
More formally, consider a task of mapping inputs in X to labels in Y. We assume a training dataset D = ⟨(xi, yi)⟩
n i=1, each xi ∈ X and yi ∈ Y. We are particularly interested in a designated collection of d binary features on X , the jth of which is denoted fj : X → {0, 1}. For example, fj might be the presence of the word "nobody" in an instance.
Let fj,i be shorthand for fj (xi) (e.g., whether instance xi contains the word "nobody" (fj (xi) = 1)
or not (fj (xi) = 0)).
Introducing random variable notation, we can characterize D by its empirical conditional distribution over labels given each feature, such that for all y ∈ Y,
$${\hat{p}}(Y=y\mid F_{j}=1)={\frac{\sum_{i}\mathbf{1}\{f_{j,i}=1\wedge y_{i}=y\}}{\sum_{i}\mathbf{1}\{f_{j,i}=1\}}}.$$
If the conditional distribution of output labels given the presence of a particular lexical feature is very different from the overall label distribution in the data, we consider that feature to be biased in the training data.
## 3 Measuring Bias In Model Performance And Data
Recall that when pˆ(Y = y | Fj = 1) is close to 1, it means feature j is correlated with label y in a given dataset. Let us denote the set of examples that contain feature j and have the label most strongly associated with feature j in D by Uj , which we call the "usual-labels" set. Then, denote the examples that contain j but have a *different* label by Nj ,
which we call the "unusual-labels" set.
To build intuition, the accuracy of the model on instances which contain feature j is the accuracy over the union Uj ∪ Nj . However, to measure if the model is picking up bias from the data, we will measure accuracy over Uj and Nj separately. To maximize accuracy on Uj ∪ Nj the model would be justified in disproportionately labeling instances containing fj with y, so we can't use accuracy by itself to measure model bias. Instead, the key idea here will be to look for differences in error rates between instances whose labels align with features' training biases (the "usual-labels" set),
and instances whose labels do not.
If the model has learned a biased representation of the data, we expect it to have higher accuracy on the "usual-labels" set, Uj . On the other hand, if the model hasn't learned that bias, we would expect the correct predictions to be uniformly distributed between Uj and Nj . We use this as the basis for 8132 a hypothesis test: the null hypothesis H0 is that the accuracy of model is the same on both sets ACC(Uj ) = ACC(Nj ), and the alternative hypothesis H1 is that ACC(Uj ) > ACC(Nj ). That is, if the errors are distributed uniformly at random, how likely is it that Uj would have *at least* its observed number of correct instances?
## 3.1 Permutation Test
Given a model's accuracy on Uj and Nj , and the size of the two sets, we can calculate the p-value for this hypothesis test exactly using the permutation test (Phipson and Smyth, 2010). Our null hypothesis is that the errors are uniformly distributed between Uj and Nj , so the permutation test calls for randomly shuffling whether a given instance is correctly labeled, while not changing the number of instances in each category or the model's overall accuracy on the set union, both of which change the shape of the distribution of correct instances that we'd expect to see, but neither of which is the property for which we're testing. As there are finitely many ways to shuffle whether a given instance is correctly labeled, this test also has the benefit of having a closed form, giving us an exact p-value.1
## 3.2 Calculating Bias Over Multiple Features
In the previous section we described how we could use a permutation test for a single feature fj . Here we describe how to apply this to the full dataset. We define U as ∪jUj and N as ∪jNj for 50 features fj per distinct label (namely, those that demonstrate the highest association with that label in the training data), so 100 or roughly 150 features fj total depending on whether the dataset is 2- or 3class ("roughly" because some features are among the most associated for two classes in 3-way classification). Given that each example xiincludes multiple features (e.g., fj,i = 1∧fk,i = 1) it's possible for example xito have label y, which is the
"usual-labels" for fj but an "unusual-labels" for fk.
When this happens, we add it to both sets U and N ,
meaning that their intersection is not necessarily empty. Pooling examples in this way allows us to run a single hypothesis test for whether or not the model learns bias from the dataset, avoiding the multiple-comparisons issue of running one hypothesis test for each feature. This procedure is described in Figure 1.
## 4 Applying The Test
Here we shift our focus to particular tasks and datasets, in order to apply our test in practice.
## 4.1 Determining Biased Features (And Tasks)
For our experiments, we want a large volume of features that should ideally exhibit no correlation with labels. In order to get a large number of features, we'd like them to be simple and easy to automatically detect, so unigram features again come to mind, guiding our selection of tasks and datasets for experiments.
When is the association of unigram features with a particular label a problem? While previous work has argued that the presence of an individual word type in a given instance, by itself, does not provide enough information to predict the label for any ideal task that requires an understanding of natural language (Gardner et al., 2021), in this work we consider this argument only as it relates to two tasks where such a position is relatively uncontroversial:
natural language inference, and duplicate-question detection.
Consider the task of natural language inference (NLI), where the input consists of two sentences (premise and hypothesis), and the correct label is a human annotation indicating whether the premise entails the hypothesis, contradicts it, or neither. Continuing our example from section 2, if fj,i = 1, then the word "nobody" appears somewhere in example xi (premise, hypothesis, or both).
Given these definitions of the task and the features, fj,i = 1 by itself is uninformative for predicting yi
(intuitively, we don't learn any information about whether or not the premise entails the hypothesis by knowing that the word "nobody" appears somewhere in the input). However, it has been shown that in the SNLI dataset (Bowman et al., 2015)
fj = 1 almost perfectly predicts the label, in both the training and test sets (for example, in the training set, 2368 instances with fj = 1 have a label of
"contradiction" and only 13 don't). Thus, this is an example of a "spurious correlation" (or, bias in the data).
![3_image_0.png](3_image_0.png)
## Applying The Test To Models 4.2
We now apply the described permutation test to finetuned models. For each of SNLI (Bowman et al., 2015 ), QNLI (Wang et al., 2018 ), and QQP, 21 we finetune three pretrained RoBERTa-large models (Liu et al., 2019 ) with different random seeds on their training sets. We use a learning rate of 2 × 10 − 6 and finetune for 15 epochs using a single GPU with 12GB memory.
Following the argument by Gardner et al. (2021)
that unigram features for these kinds of theoretically complex tasks should ideally be uninformative in isolation, we use lexical types as our bias evaluation features. For the purpose of this calculation, each label will contribute the 50 features that have the strongest correlation with it (as calculated by z -score, again following Gardner et al.,
2021 ) in the lowercased training data, excluding stop words, since they tend to receive high z -scores due to appearing in such an overwhelming number of instances. 3 We then select all test instances with one or more of those types present as our evaluation set for our permutation test. For models finetuned on SNLI and QQP, we find p -values of at most 2.3 × 10− 17 (see "Trained on uniform" rows of Table 2 ), indicating very strong evidence thatas expected—these models reflect the bias associated with types with high z -scores in the training set. For QNLI, we see mixed results depending on our random seed, with p -values of 0.0057, 0.024, and 0.053 for our three finetuned models. (Worth noting is the fact that, as we will see later in Section 5.1 , QNLI has the lowest overall feature-label bias of any of these three datasets.) Still, we see enough of these models demonstrating bias to merit investigating why this occurs.
## 5 Where Does That Bias Come From?
Having established that there is often similar bias in the finetuning data and models trained on that data, we consider that the finetuning data is not necessarily the source of the bias in the model. For example, the bias could come from the pretraining data as well. With that in mind, how might we check the impact of the finetuning data specifically?
## 5.1 Intervening On The Data By Balancing It
Our strategy is to intervene on the data to lessen lexical bias.4 While modifying the data is only one family of approaches towards reducing eventual bias of a learned model (see, for example modelbased strategies such as those proposed by Clark et al., 2019, or Karimi Mahabadi et al., 2020), recall that our goal here is to investigate the effect of the finetuning data on the rest of the training setup, so for our purposes we keep the rest of the training procedure the same.
Prior work has explored different ways of intervening on data, such as manual data augmentation (Zhao et al., 2018; Zhang and Sang, 2020; Gowda et al., 2021; Lee et al., 2021), or occluding bias in the original data (Feldman et al., 2015), but along very few different axes of bias. Other work augments minority-class data for the purpose of addressing class imbalance (Chawla et al., 2002).
Yet others have taken the approach of generating new data to augment the existing data in ways that counteract certain biases (Wu et al., 2022). However, this last work relies on model-generated text, which, as Wu et al. (2022) themselves acknowledge, could differ from human-generated text in ways that aren't immediately obvious (Zellers et al.,
2019).
In order to avoid potential new artifacts introduced by using machine-generated training data, and to improve the label balance in aggregate for a large volume of features simultaneously, we reweight existing training data such that in expectation, the disproportionate association of lexical features with certain labels is decreased. Reweighting data to remove bias is not a new idea—Kamiran and Calders(2012) do this through downsamplingbut typically such approaches have considered at most a handful of different axes of bias. Some existing work, namely Byrd and Lipton (2018) and 4Note, we do not describe our approach as "removing bias,"
as natural language data in general is biased to some extent; see the argument made by Schwartz and Stanovsky (2022).
Zhai et al. (2023), has pointed out the limitations of approaches based on reweighting data, but again based on reweighting along comparatively few axes
(in the case of the former) or on simpler model architectures than we consider here (in the case of the latter), so in the absence of a viable alternative meeting our requirements, we proceed with reweighting as our form of intervention for our experiments.
Typically, training datasets like D are treated as i.i.d., representative samples from a larger population. Formally, we instead propose to *weight* the instances in D, assigning probability qito instance i, such that, ∀j, ∀y ∈ Y,
$${\frac{\sum_{i}q_{i}\cdot\mathbf{1}\{f_{j,i}=1\wedge y_{i}=y\}}{\sum_{i}q_{i}\cdot\mathbf{1}\{f_{j,i}=1\}}}={\frac{1}{|{\mathcal{Y}}|}}\qquad{\mathrm{(1)}}$$
From here on, we denote the lefthand side of Equation 1 as q(y | Fj = 1). Note that, for simplicity, we assume a uniform distribution over labels as the target, though our methods can be straightforwardly adapted to alternative targets.
Given an algorithm that produces a weighting q1*, . . . , q*n for dataset D, we quantify its absolute error with respect to Equation 1 as
$$\begin{array}{r l}{\operatorname{Err}(q)\ ={\frac{1}{(\mathrm{number~of~features})\cdot|{\mathcal{Y}}|}}\ .}\\ {\sum_{j}\sum_{y\in{\mathcal{Y}}}\left|q(y\mid F_{j}=1)-{\frac{1}{|{\mathcal{Y}}|}}\right|}\end{array}$$
How do we choose these qi values? We can state the general problem as a constrained optimization problem.5 We seek values q1*, . . . , q*n such that:
$$\sum_{i=1}^{n}q_{i}=1$$ $$q_{i}\geq0,\;\forall i$$ $$q(y\mid F_{j}=1)-{\frac{1}{|{\mathcal{Y}}|}}=0,\;\forall j,\forall y\in{\mathcal{Y}}$$
(The constraints in the last line are derived from Equation 1; strictly speaking one label's constraints are redundant and could be removed given the sumto-one constraints.)
Using this setup, we seek a vector q that satisfies the constraints. We do this by minimizing the 5The slightly simplified formulation we present here for ease of reading only takes into account cases where feature j appears somewhere in our data, but Equation 4 can be straightforwardly modified by multiplying it by the denominator of q(y | Fj = 1) to account for this.
8135 sum of squares of the left side of Equation 4; the approach is simplified by a reparameterization:
$$q_{i}={\frac{\exp z_{i}}{\sum_{i}\exp z_{i}}}$$
This is equivalent to optimizing with respect to unnormalized weights (zi) that are passed through a "softmax" operator, eliminating the need for the constraints in Equations 2 and 3. Once we have q, we multiply each xi's contribution to the loss during training by qi· |D|.
We apply this algorithm to reweight the following training datasets: SNLI (Bowman et al., 2015),
MNLI (Williams et al., 2018), QNLI (Wang et al.,
2018), and QQP. In contrast to the <200 features per dataset that we use for evaluation of bias in models, when reweighting data, we used all types that appeared at least 100 times in their corresponding training data as features, and we denoted an
"instance" as the concatenation of a paired premise and hypothesis (or, for QQP, the concatenation of the two questions). We removed features from consideration if they did not have at least one document in the dataset for each of their labels.6 We see in Table 1 that by solving for distributions q over the different datasets as described, we successfully reduce Err(q) compared to the initial uniform weighting for all datasets except MNLI.7 This leaves us with three successfully reweighted datasets with lessened unigram bias overall, and we can use these to investigate possible reduction of lexical bias compared to their original, uniformlyweighted counterparts. We confirm that for the high-z-score features used for model bias evaluation for each of these three, their label balance in the data either improves (often dramatically) or stays comparable as a result of our reweighting q. (Here and elsewhere, we use "label balance" of a feature to refer to the average absolute difference between its empirical label distribution in the training data and the overall label distribution of the training data, averaging elementwise over each possible label.) For example, see Figure 2 for the change that our reweighted q makes in improving the label distributions of our original high-z-score features from SNLI that we use for evaluation.
6This was not the case for any features in MNLI or QNLI,
but applied to the word "recess" for SNLI, and the words
"gobi" and "weakest" for QQP.
7MNLI is unusual among the datasets we studied in its remarkably low degree of lexical-feature bias to begin with, so it is perhaps not surprising that further lowering that bias across thousands of features proves difficult.
![5_image_0.png](5_image_0.png)
## 5.2 Impact When Finetuning On Reweighted Data
We now consider what happens when we finetune models on that data. We finetune RoBERTa-large models using new random seeds and all the same hyperparameters as before, only this time on training data reweighted using the new q distributions.
We see similar validation accuracies (a point or so of difference), indicating that this reweighting has a small effect on overall performance, even though the validation sets may contain similar biases to their corresponding training sets and therefore benefit models that leverage those biases.
The results of rerunning our model bias evaluation are listed in the top half of Table 2. While we do see an increase in p-values, indicating weaker evidence of bias than for models trained on the uniformly-weighted training data, for both SNLI
and QQP, we are still left with very strong evidence of bias (p-values of at most 1.2 × 10−5). A natural question that we might ask is whether we can attribute this remaining bias to the pretraining data.
To test whether we see the same patterns in the absence of any other training data, we also train two bidirectional three-layer LSTMs per dataset from scratch (i.e., no pretraining and no pretraining data), one using uniform weighting and the other using q-reweighted.8 As we can see in Table 2, 8To ensure no leaked signal from any other data, we initialized the word embeddings of the LSTMs to continuous
| |D| | # Features | |Y| | Err(Uniform) | (↓) | Err(Adjusted q) | (↓) |
|-------|--------------|-------|----------------|-------|-------------------|-------|
| SNLI | 549,367 | 3866 | 3 | 0.057 | 0.040 | |
| MNLI | 392,376 | 6854 | 3 | 0.022 | 0.084 | |
| QNLI | 104,743 | 3770 | 2 | 0.042 | 0.012 | |
| QQP | 363,831 | 4386 | 2 | 0.154 | 0.047 | |
Table 1: The average absolute difference between the empirical fraction of label y in instances with any particular unigram feature j and the total weight given to label y in the full training data, computed over all features and all their label values. Lower is better.
| p-value(s) for permutation test | | | |
|-----------------------------------|---------------------------------------|-----------------------------------------|-------------|
| SNLI | Trained on uniform | 1.9 × 10−35 , {1.1, 2.2} × 10−23 | |
| Trained on adjusted q | {1.2, 1.7, 3.2} × 10−14 | | |
| QNLI | Trained on uniform | 5.7 × 10−3 , {2.4, 5.3} × 10−2 | |
| Trained on adjusted q | {3.7, 7.6, 2.6} × 10−1 | | |
| QQP | Trained on uniform | 2.4 × 10−26 , 2.6 × 10−20 , 2.3 × 10−17 | |
| Trained on adjusted q | 7.6 × 10−20 , 5.9 × 10−7 , 1.2 × 10−5 | | |
| Finetuned transformers | SNLI | Trained on uniform | 5.9 × 10−83 |
| Trained on adjusted q | 2.0 × 10−75 | | |
| QNLI | Trained on uniform | 3.1 × 10−61 | |
| Trained on adjusted q | 1.6 × 10−10 | | |
| QQP | Trained on uniform | Approx. 10−638 | |
| Trained on adjusted q | Approx. 10−762 | | |
| From-scratch LSTM | | | |
Table 2: Exact p-values for permutation tests conducted on different models, which check the probability that the usual-gold-label subset of the test data would have at least its observed accuracy if the instances guessed correctly by the model were distributed uniformly at random across the usual and unusual gold-label test subsets. The pretrained model used to initialize each finetuned transformer was RoBERTa-large, and for each pairing of a dataset and a uniform or adjusted weighting of its data in finetuning a transformer, we ran three separate random seeds to observe variance. For each dataset-weighting pairing in training LSTMs from scratch, we used a single random seed.
while there continues to be a rise in p-value with the switch to the reweighted q, the higher p-value is still vanishingly small. **All the models trained**
from scratch are biased.
Of particular interest is the fact that the LSTMs trained on QNLI display strong evidence of bias, while the pretrained transformers that were finetuned on either version of QNLI (reweighted or not)
were the only models that did not display strong evidence of bias. This indicates that at least in QNLI's case, bias has entirely separate causes than training data; for QNLI, it's only the models trained from scratch that display significant evidence of bias. This, along with the tiny p-values for the other LSTMs, indicates that there are still factors even in the reweighted data that contribute to bias.
| Err(Uniform)(↓) | Err(Adjusted q)(↓) | |
|-------------------|----------------------|-------|
| SNLI | 0.059 | 0.122 |
| QNLI | 0.134 | 0.173 |
| QQP | 0.215 | 0.224 |
Table 3: The average absolute difference between the empirical distribution of label y (in the data) for instances with a **bigram** feature j and the overall distribution of label y given the full data (we perform this difference elementwise). The calculations over any row in this table are performed over 200 randomly selected bigrams j from that dataset, which are kept consistent across columns. Lower is better.
At first, this is surprising. Given that the LSTMs trained with the reweighted q distributions over data were exposed to no other data, why do they still exhibit bias? One possibility is issues of quality inherent to some unusual-label data. For example, consider the word "favorite" in SNLI, which has one of the highest z-scores for the "neutral" label. Even though nothing about the task of determining whether one sentence entails another inherently suggests an association between "favorite" and a particular label, since SNLI was constructed based on photographs (without any additional data about their subjects' mental states) as the underlying source of data for written premises, we expect the term "favorite" to occur mostly in hypotheses that are neither entailed nor contradicted by this data. Even though the reweighted q gives more weight to unusual examples, those examples could sometimes be of lower quality due to details of how the data was collected.
Furthermore, even though the total contribution to the loss function during training is approximately the same across labels using the reweighted q, the model still sees a wider variety of instances for types' "usual" labels, which perhaps allows it to generalize better in that regard. In other words, the characteristics of less common (fj , y) pairings aren't inherently easier for a model to learn than the characteristics of more common pairings, so models' generalization to new examples with the less common (fj , y) pairing would still be hurt by seeing a smaller variety of examples representing those kinds of instances, even if that smaller variety received greater total weight in the loss function.
## 6 **Effects Of Rebalancing On Higher-Order** Features
We have found that rebalancing labeled data doesn't remove bias in a downstream model. Another possible explanation is that rebalancing also affects higher-order features' effective correlations with labels, and such bias may carry over into models (whether it was originally present or not). We consider bigrams, as they represent only a slight additional level of complication.
To get a sense of how bigrams overall are affected, we randomly sample 200 bigrams for each of the three successfully rebalanced datasets, selecting uniformly at random among the set of bigrams that appear in at least one instance of each label.
We then examine the effect of our (unigram-based)
rebalancing of data from table 1 on associations in the data between bigram features and labels. Table 3 shows that in all cases, the average gap between the overall label distribution in the data and the empirical distribution of labels given a bigram worsens, despite unigrams' label distributions better reflection of the data's overall label distribution
(Table 1) that results from the same reweighted q.
This analysis provides a possible explanation for how rebalancing the data with respect to biased unigram features fails to prevent models from learning bias: the rebalancing didn't correct for biased bigram features, which mislead the model, effectively
"bringing the unigram features" along with them so that unigram-bias gets learned anyway. This is a troubling sign for approaches to bias reduction that focus on data alone, pointing to the need for methods that focus on other aspects of model learning as well.
## 7 Methods From Related Work
Considerable research has posed similar questions of undesirable associations in data manifesting in models, whether through spurious correlations between lexical features and labels (Tsuchiya, 2018; Gururangan et al., 2018; Poliak et al., 2018; McCoy et al., 2019) or through gender or racial bias
(Waseem and Hovy, 2016; Rudinger et al., 2018; Stanovsky et al., 2019; Davidson et al., 2019; Sap et al., 2019). Out of this large body of work, a few prevailing evaluation methods have emerged.
Foremost among these is assembling a single test set in which a particular bias of interest is lessened and evaluating models' aggregate performance on that test set, such as by excluding instances for which a model that should be too simple to perform the task is correct (Gururangan et al., 2018)
or by constructing such a dataset from scratch (McCoy et al., 2019). Similarly, Gardner et al. (2020)
assemble what is essentially a new, miniature test set (a "contrast set") for each human-identified possible category of mistake that a model might make.
We now consider what existing work finds regarding bias in models using these different methods. Overall, we see mixed results. Caliskan et al.
(2017) determine that trained word vectors do pick up societal biases from their training corpora. Likewise, Rudinger et al. (2018) find evidence of gender bias in coreference resolution systems, Stanovsky et al. (2019) find gender bias in machine translation systems, and Sap et al. (2019) find racial bias in hate speech detection models. However, whether multiple attributes' biases in data transfer to models is less clear. For example, Steed et al. (2022)
find that both pretraining data and finetuning data have an effect on biases having to do with gendered pronouns and identity terms that are learned by occupation and toxicity classifiers, but that certain forms of bias reduction in either pretraining or finetuning data don't necessarily overcome bias that the model might pick up from the other. This is possibly explained by the results of Zhou and Srikumar (2022), who find that data used for finetuning largely distances clusters of textual representations by label without significantly changing other properties of the underlying distribution of data. In a similar vein, Joshi and He (2022) find that counterfactually augmented training data can actually exacerbate other spurious correlations in models.
For all the different results reported in this body of literature, there are some typical characteristics of the bias evaluation methodology they apply. As referenced earlier, it is common for this work to test for a *single* undesirable form of behavior (e.g.,
biased use of gendered pronouns). For example, Belinkov et al. (2019) focus on whether NLI models ignore input instances' premise, an important problem, but this also simplifies their evaluation, as they doesn't need to consider the potentially disparate impact of their adjusted model on intersecting biases. Another common characteristic is the creation of new and separate test data (McCoy et al.,
2019; Zhang et al., 2019), on which decreased performance is taken to indicate bias (Tu et al., 2020; Wu et al., 2022). A concern regarding this strategy, though, is that such test sets very likely still contain
(undetected) biases of their own. Due to the complicated nature of natural language and the highly intertwined features that occur together in text, it is very likely that this will be true regardless of the test set created.
Results using our permutation testing framework indicate the difficulty of removing or mitigating bias from data in a way that corresponds to the mechanisms by which models absorb that bias in practice. This is reminiscent of results from, for example, Gonen and Goldberg (2019) or Elazar and Goldberg (2018), who note that certain ways of seemingly covering up bias still leave traces of that bias in models, and is in line with arguments made by, for example, Eisenstein (2022) and Schwartz and Stanovsky (2022). Further development and testing of hypotheses about how models acquire bias will be important to ensuring that they truly perform the tasks that we intend, and not versions that rely on biased shortcuts in the data.
## 8 Conclusion
We explored how lexical bias in labeled data affects bias in models trained on that data. Our methodological contribution is a procedure, based on the permutation test, for analyzing biased associations between given features and model predictions, in test data that might itself contain biases. Our empirical finding is that, in cases where a dataset can be rebalanced to remove most lexical bias, the resulting models remain biased. This may be related to our observation that the correlations of higher-order
(bigram) features with labels actually get *worse* after rebalancing. We conclude that reducing bias in NLP models may not be achievable by altering existing training data distributions.
## Limitations
One of the limitations of this work is that we restrict ourselves to examining datasets for supervised learning that contain relatively short instances of text. This likely facilitated the reweighting of data that we wished to perform as an intervention to produce the reweighted data that we study, as the short length of each text effectively capped the number of different lexical features that could cooccur in the same instance. The results we present here might not be representative of lexical feature bias in data with much longer units of text. Also, the fact that the datasets that we used are all in English means that our lexical features were premised on simple whitespace tokenization with punctuation removal; for other languages with a larger variety of reasonable tokenization schemes at varying levels of granularity, the distribution of lexical features, and the resulting conclusions, might look very different.
In addition, apart from the issues we have raised in transferring reduced bias in data to models, we note that an exhaustive list of all features that are present in particular data is extremely impractical
(and in some cases impossible); any set of features will inevitably leave out some trait of the data, making the reweighting procedure we follow in this work inherently incomprehensive. For those features not included in the problem setup, the measured quality of a returned q distribution will not reflect any changes relevant to those features, although the balance of those features has likely also changed. Even among the features included in the problem input, shifting q's probability mass to improve the balance for one set of features' labels may simultaneously hurt the balance for another.
## Ethics Statement
This work addresses one piece of the much broader set of questions surrounding how biases—from low-level word associations to high-level social biases—manifest in natural language, and the effects that they have on the models that we train and develop as researchers and practitioners. Parsing out how such biases transfer to models, and when they are harmful, has been and will continue to be key to making progress towards understanding the technologies we create and the scope of what they can or should do.
## Acknowledgments
The authors appreciate helpful feedback from the anonymous reviewers and members of Noah's ARK at UW and the AllenNLP group at AI2, as well as from Terra Blevins, Yulia Tsvetkov, Lucy Lu Wang, Sheng Wang, and Tim Althoff.
## References
Yonatan Belinkov, Adam Poliak, Stuart Shieber, Benjamin Van Durme, and Alexander Rush. 2019. Don't take the premise for granted: Mitigating artifacts in natural language inference. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 877–891, Florence, Italy.
Association for Computational Linguistics.
Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. 2015. A large annotated corpus for learning natural language inference.
In *Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing*, pages 632–642, Lisbon, Portugal. Association for Computational Linguistics.
Jonathon Byrd and Zachary Chase Lipton. 2018. What is the Effect of Importance Weighting in Deep Learning? In *International Conference on Machine Learning*.
Aylin Caliskan, Joanna J. Bryson, and Arvind Narayanan. 2017. Semantics derived automatically from language corpora contain human-like biases.
Science, 356:183 - 186.
Nitesh V. Chawla, Kevin W. Bowyer, Lawrence O. Hall, and W. Philip Kegelmeyer. 2002. Smote: Synthetic minority over-sampling technique. *J. Artif. Int. Res.*,
16(1):321–357.
Yubo Chen, Hang Yang, Kang Liu, Jun Zhao, and Yantao Jia. 2018. Collective event detection via a hierarchical and bias tagging networks with gated multilevel attention mechanisms. In *Proceedings of the* 2018 Conference on Empirical Methods in Natural
Language Processing, pages 1267–1276, Brussels, Belgium. Association for Computational Linguistics.
Christopher Clark, Mark Yatskar, and Luke Zettlemoyer.
2019. Don't Take the Easy Way Out: Ensemble Based Methods for Avoiding Known Dataset Biases.
In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 4069–
4082, Hong Kong, China. Association for Computational Linguistics.
Thomas Davidson, Debasmita Bhattacharya, and Ingmar Weber. 2019. Racial bias in hate speech and abusive language detection datasets. In Proceedings of the Third Workshop on Abusive Language Online, pages 25–35, Florence, Italy. Association for Computational Linguistics.
Jacob Eisenstein. 2022. Informativeness and invariance:
Two perspectives on spurious correlations in natural language. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4326–4331, Seattle, United States.
Association for Computational Linguistics.
Yanai Elazar and Yoav Goldberg. 2018. Adversarial removal of demographic attributes from text data.
In *Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing*, pages 11–21, Brussels, Belgium. Association for Computational Linguistics.
Michael Feldman, Sorelle A. Friedler, John Moeller, Carlos Scheidegger, and Suresh Venkatasubramanian. 2015. Certifying and removing disparate impact. In Proceedings of the 21th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD '15, page 259–268, New York, NY, USA.
Association for Computing Machinery.
Matt Gardner, Yoav Artzi, Victoria Basmov, Jonathan Berant, Ben Bogin, Sihao Chen, Pradeep Dasigi, Dheeru Dua, Yanai Elazar, Ananth Gottumukkala, Nitish Gupta, Hannaneh Hajishirzi, Gabriel Ilharco, Daniel Khashabi, Kevin Lin, Jiangming Liu, Nelson F. Liu, Phoebe Mulcaire, Qiang Ning, Sameer Singh, Noah A. Smith, Sanjay Subramanian, Reut Tsarfaty, Eric Wallace, Ally Zhang, and Ben Zhou.
2020. Evaluating models' local decision boundaries via contrast sets. In *Findings of the Association* for Computational Linguistics: EMNLP 2020, pages 1307–1323, Online. Association for Computational Linguistics.
Matt Gardner, William Merrill, Jesse Dodge, Matthew Peters, Alexis Ross, Sameer Singh, and Noah A.
Smith. 2021. Competency problems: On finding and removing artifacts in language data. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 1801–1813, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Hila Gonen and Yoav Goldberg. 2019. Lipstick on a pig: Debiasing methods cover up systematic gender biases in word embeddings but do not remove them.
In *Proceedings of the 2019 Conference of the North* American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 609–614, Minneapolis, Minnesota. Association for Computational Linguistics.
Sindhu C. M. Gowda, Shalmali Joshi, Haoran Zhang, and Marzyeh Ghassemi. 2021. Pulling up by the causal bootstraps: Causal data augmentation for pretraining debiasing. *Proceedings of the 30th ACM*
International Conference on Information & Knowledge Management.
Suchin Gururangan, Swabha Swayamdipta, Omer Levy, Roy Schwartz, Samuel Bowman, and Noah A. Smith.
2018. Annotation artifacts in natural language inference data. In *Proceedings of the 2018 Conference of* the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 107–112, New Orleans, Louisiana. Association for Computational Linguistics.
Nitish Joshi and He He. 2022. An investigation of the
(in)effectiveness of counterfactually augmented data.
In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 3668–3681, Dublin, Ireland.
Association for Computational Linguistics.
Faisal Kamiran and Toon Calders. 2012. Data preprocessing techniques for classification without discrimination. *Knowledge and information systems*, 33(1):1–
33.
Rabeeh Karimi Mahabadi, Yonatan Belinkov, and James Henderson. 2020. End-to-End Bias Mitigation by Modelling Biases in Corpora. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8706–8716, Online. Association for Computational Linguistics.
Minwoo Lee, Seungpil Won, Juae Kim, Hwanhee Lee, Cheoneum Park, and Kyomin Jung. 2021. Crossaug:
A contrastive data augmentation method for debiasing fact verification models. Proceedings of the 30th ACM International Conference on Information
& Knowledge Management.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019.
RoBERTa: A Robustly Optimized BERT Pretraining Approach.
Tom McCoy, Ellie Pavlick, and Tal Linzen. 2019. Right for the wrong reasons: Diagnosing syntactic heuristics in natural language inference. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3428–3448, Florence, Italy. Association for Computational Linguistics.
Ninareh Mehrabi, Fred Morstatter, Nripsuta Saxena, Kristina Lerman, and Aram Galstyan. 2021. A survey on bias and fairness in machine learning. ACM
Computing Surveys (CSUR), 54(6):1–35.
Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient Estimation of Word Representations in Vector Space.
Belinda Phipson and Gordon K Smyth. 2010. Permutation p-values should never be zero: Calculating exact p-values when permutations are randomly drawn.
Statistical Applications in Genetics and Molecular Biology, 9(1).
Adam Poliak, Aparajita Haldar, Rachel Rudinger, J. Edward Hu, Ellie Pavlick, Aaron Steven White, and Benjamin Van Durme. 2018. Collecting diverse natural language inference problems for sentence representation evaluation. In *Conference on Empirical* Methods in Natural Language Processing.
Rachel Rudinger, Jason Naradowsky, Brian Leonard, and Benjamin Van Durme. 2018. Gender bias in coreference resolution. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers),
pages 8–14, New Orleans, Louisiana. Association for Computational Linguistics.
Maarten Sap, Dallas Card, Saadia Gabriel, Yejin Choi, and Noah A. Smith. 2019. The risk of racial bias in hate speech detection. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1668–1678, Florence, Italy. Association for Computational Linguistics.
Roy Schwartz and Gabriel Stanovsky. 2022. On the limitations of dataset balancing: The lost battle against spurious correlations. In *Findings of the Association* for Computational Linguistics: NAACL 2022, pages 2182–2194, Seattle, United States. Association for Computational Linguistics.
Gabriel Stanovsky, Noah A. Smith, and Luke Zettlemoyer. 2019. Evaluating gender bias in machine translation. In *Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics*,
pages 1679–1684, Florence, Italy. Association for Computational Linguistics.
Ryan Steed, Swetasudha Panda, Ari Kobren, and Michael Wick. 2022. Upstream Mitigation Is Not All You Need: Testing the Bias Transfer Hypothesis in Pre-Trained Language Models. In *Proceedings* of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers),
pages 3524–3542, Dublin, Ireland. Association for Computational Linguistics.
Masatoshi Tsuchiya. 2018. Performance impact caused by hidden bias of training data for recognizing textual entailment. *ArXiv*, abs/1804.08117.
Lifu Tu, Garima Lalwani, Spandana Gella, and He He.
2020. An Empirical Study on Robustness to Spurious Correlations using Pre-trained Language Models.
Transactions of the Association for Computational Linguistics, 8:621–633.
Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel Bowman. 2018. GLUE:
A multi-task benchmark and analysis platform for natural language understanding. In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 353–355, Brussels, Belgium. Association for Computational Linguistics.
Zeerak Waseem and Dirk Hovy. 2016. Hateful symbols or hateful people? predictive features for hate speech detection on Twitter. In Proceedings of the NAACL
Student Research Workshop, pages 88–93, San Diego, California. Association for Computational Linguistics.
Adina Williams, Nikita Nangia, and Samuel Bowman.
2018. A broad-coverage challenge corpus for sentence understanding through inference. In *Proceedings of the 2018 Conference of the North American* Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1112–1122, New Orleans, Louisiana. Association for Computational Linguistics.
Yuxiang Wu, Matt Gardner, Pontus Stenetorp, and Pradeep Dasigi. 2022. Generating data to mitigate spurious correlations in natural language inference datasets. In *Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics*
(Volume 1: Long Papers), pages 2660–2676, Dublin, Ireland. Association for Computational Linguistics.
Rowan Zellers, Ari Holtzman, Hannah Rashkin, Yonatan Bisk, Ali Farhadi, Franziska Roesner, and Yejin Choi. 2019. Defending against neural fake news. In Advances in Neural Information Processing Systems, volume 32. Curran Associates, Inc.
Runtian Zhai, Chen Dan, J. Zico Kolter, and Pradeep Ravikumar. 2023. Understanding Why Generalized Reweighting Does Not Improve Over ERM. In *Proceedings of the International Conference on Learning* Representations.
Yi Zhang and Jitao Sang. 2020. Towards accuracyfairness paradox: Adversarial example-based data augmentation for visual debiasing. Proceedings of the 28th ACM International Conference on Multimedia.
Yuan Zhang, Jason Baldridge, and Luheng He. 2019.
PAWS: Paraphrase adversaries from word scrambling.
In *Proceedings of the 2019 Conference of the North* American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 1298–1308, Minneapolis, Minnesota. Association for Computational Linguistics.
Jieyu Zhao, Tianlu Wang, Mark Yatskar, Vicente Ordonez, and Kai-Wei Chang. 2018. Gender bias in coreference resolution: Evaluation and debiasing methods. In *Proceedings of the 2018 Conference* of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 15–20, New Orleans, Louisiana. Association for Computational Linguistics.
Yichu Zhou and Vivek Srikumar. 2022. A closer look at how fine-tuning changes BERT. In *Proceedings* of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers),
pages 1046–1061, Dublin, Ireland. Association for Computational Linguistics.
| A | Appendix | kid holding affection holds close instruments sitted |
|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|------------------------------------------------------------------|--------------------------------------------------------|
| A.1 | List of non-stop-word types most associated with each SNLI label | |
| A.1.1 | Entailment | |
| These were the 50 word types (after stop words were filtered out) that had the highest z-scores for the "entailment" label in SNLI: outside outdoors person near people animal human humans least someone moving instrument something animals sport together wet touching vehicle things theres clothes multiple picture proximity interacting physical using activity canine music active musical object wears motion consuming clothed clothing mammals working objects present | A.1.2 | Contradiction |
| These were the 50 word types (after stop words were filtered out) that had the highest z-scores for the "contradiction" label in SNLI: sleeping nobody cat eating sitting tv alone swimming asleep inside bed couch cats naked driving home empty eats car nothing running watching woman movie basketball nap television pool sleep anything moon beach man quietly laying room frowning sleeps riding flying | | |
| 8143 | | |
| sits napping crying house desert dancing bench theater indoors pizza | best money day married son competing way wants professional trip likes show got |
|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-----------------------------------------------------------------------------------|
| A.1.3 | Neutral |
| These were the 50 word types (after stop words were filtered out) that had the highest z-scores for the "neutral" label in SNLI: friends tall trying waiting new sad owner first competition going favorite | |
A.1.3 Neutral
These were the 50 word types (after stop words
were filtered out) that had the highest z-scores for
the "neutral" label in SNLI:
friends
tall
trying
waiting
new sad
owner
first
competition
going
favorite friend
winning
vacation
get
date birthday wife work
brothers
ready party mother
family
sisters
championship
win husband time fun
siblings
getting
fetch parents tired school
father
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
✓ A1. Did you describe the limitations of your work?
Limitations (after Conclusion, not numbered)
✓ A2. Did you discuss any potential risks of your work?
Limitations (after Conclusion, not numbered)
✓ A3. Do the abstract and introduction summarize the paper's main claims?
Abstract and section 1
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✓ **Did You Use Or Create Scientific Artifacts?** Yes: Sections 3.2, 4.1, 4.2, And 5
✓ B1. Did you cite the creators of artifacts you used?
Sections 3.2 and 4.1
✗ B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
All four of these datasets are commonly used in NLP papers without discussion of their licenses; all were developed for the purposes of furthering NLP research.
✗ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
All four of the datasets used are very commonly used in NLP papers and were released by members of the NLP community with the understanding that they were to be used for NLP research purposes, which they are in this paper.
B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
Not applicable. All of these datasets are publicly available.
✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
We discuss some of the limitations of using these particular datasets in the Limitations section of the paper.
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
Table 1 The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance.
## C ✓ **Did You Run Computational Experiments?** Sections 3.2, 4.1, 4.2, And 5
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
Sections 3.2 and 4.2
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Section 3.2
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
We include all unaveraged p-values from our experiments in table 2.
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
We reported which huggingface version of RoBERTa we used.
## D ✗ **Did You Use Human Annotators (E.G., Crowdworkers) Or Research With Human Participants?** Left Blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
No response.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? No response.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
No response.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
No response.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
No response. |
ansell-etal-2023-distilling | Distilling Efficient Language-Specific Models for Cross-Lingual Transfer | https://aclanthology.org/2023.findings-acl.517 | Massively multilingual Transformers (MMTs), such as mBERT and XLM-R, are widely used for cross-lingual transfer learning. While these are pretrained to represent hundreds of languages, end users of NLP systems are often interested only in individual languages. For such purposes, the MMTs{'} language coverage makes them unnecessarily expensive to deploy in terms of model size, inference time, energy, and hardware cost. We thus propose to extract compressed, language-specific models from MMTs which retain the capacity of the original MMTs for cross-lingual transfer. This is achieved by distilling the MMT *bilingually*, i.e., using data from only the source and target language of interest. Specifically, we use a two-phase distillation approach, termed BiStil: (i) the first phase distils a general bilingual model from the MMT, while (ii) the second, task-specific phase sparsely fine-tunes the bilingual {``}student{''} model using a task-tuned variant of the original MMT as its {``}teacher{''}. We evaluate this distillation technique in zero-shot cross-lingual transfer across a number of standard cross-lingual benchmarks. The key results indicate that the distilled models exhibit minimal degradation in target language performance relative to the base MMT despite being significantly smaller and faster. Furthermore, we find that they outperform multilingually distilled models such as DistilmBERT and MiniLMv2 while having a very modest training budget in comparison, even on a per-language basis. We also show that bilingual models distilled from MMTs greatly outperform bilingual models trained from scratch. | # Distilling Efficient Language-Specific Models For Cross-Lingual Transfer
Alan Ansell1 Edoardo Maria Ponti2,1 Anna Korhonen1**Ivan Vulic´**
1 1Language Technology Lab, University of Cambridge 2University of Edinburgh [email protected]
## Abstract
Massively multilingual Transformers (MMTs),
such as mBERT and XLM-R, are widely used for cross-lingual transfer learning. While these are pretrained to represent hundreds of languages, end users of NLP systems are often interested only in individual languages. For such purposes, the MMTs' language coverage makes them unnecessarily expensive to deploy in terms of model size, inference time, energy, and hardware cost. We thus propose to extract compressed, language-specific models from MMTs which retain the capacity of the original MMTs for cross-lingual transfer. This is achieved by distilling the MMT *bilingually*,
i.e., using data from only the source and target language of interest. Specifically, we use a two-phase distillation approach, termed BIS-TILLATION: (i) the first phase distils a general bilingual model from the MMT, while (ii) the second, task-specific phase sparsely fine-tunes the bilingual 'student' model using a task-tuned variant of the original MMT as its 'teacher'.
We evaluate this distillation technique in zeroshot cross-lingual transfer across a number of standard cross-lingual benchmarks. The key results indicate that the distilled models exhibit minimal degradation in target language performance relative to the base MMT despite being significantly smaller and faster. Furthermore, we find that they outperform multilingually distilled models such as DistilmBERT
and MiniLMv2 while having a very modest training budget in comparison, even on a perlanguage basis. We also show that bilingual models distilled from MMTs greatly outperform bilingual models trained from scratch.
## 1 Introduction
Massively multilingual Transformers (MMTs), pretrained on unlabelled data from hundreds of languages, are a highly effective tool for cross-lingual transfer (Devlin et al., 2019; Conneau et al., 2020; Chung et al., 2020; He et al., 2021). However, they suffer from several limitations as a result of
![0_image_0.png](0_image_0.png)
their ample language coverage. Firstly, aiming to represent many languages within their parameter budget and dealing with the training signals from different languages might result in negative interference. This is known as the "curse of multilinguality" (Conneau et al., 2020), which impairs the MMT's transfer capabilities (Pfeiffer et al., 2022).
Secondly, in practice people are often interested in using or researching NLP systems in just a *single* language. This makes the MMTs unnecessarily costly in terms of storage, memory, and compute and thus hard to deploy. This especially impacts communities which speak low-resource languages, which are more likely to have limited access to computational resources (Alabi et al., 2022).
8147 In this work, we address the question: can we increase the time-efficiency and space-efficiency of MMTs while retaining their performance in crosslingual transfer? Knowledge distillation (Hinton et al., 2015) is a family of general methods to achieve the first goal by producing smaller, faster models (Sanh et al., 2019; Jiao et al., 2020, *inter* alia) and has also been applied to MMTs specifically. However, when the distilled MMT is required to cover the same number of languages as the original model, whose capacity is already thinly stretched over hundreds of languages, the "curse of multilinguality" asserts itself, resulting in a significant loss in performance (Sanh et al., 2019).
As a consequence, to achieve the best possible performance with reduced capacity, we depart from the practice of retaining all the languages from the original MMT in the distilled model. Instead, we argue, we should cover only two languages, namely the source language and the target language of interest. In fact, distilling just one language would fall short of the second goal stated above, namely facilitating cross-lingual transfer, as a monolingually distilled model would be unable to learn from a distinct source language during task-specific finetuning. Maintaining cross-lingual transfer capabilities, however, is crucial due to the paucity of labelled task data in many of the world's languages in most tasks (Ponti et al., 2019; Joshi et al., 2020).
In particular, we propose a method for *bilingual* distillation of MMTs, termed BISTILLATION, inspired by the two-phase recipe of Jiao et al. (2020).
We start from a *"student"* model, initialized by discarding a subset of layers of the original *"teacher"*
MMT, as well as the irrelevant part of its vocabulary. In the first, *"general"* phase of distillation, unlabelled data is used to align the the hidden representations and attention distributions of the student with those of the teacher. In the second, *taskspecific* phase, the student is fine-tuned for the task of interest through guidance from a task-adapted variant of the teacher. Rather than fully fine-tuning the student during this second phase, we instead use the parameter-efficient Lottery-Ticket Sparse FineTuning (LT-SFT) method of Ansell et al. (2022).
Parameter-efficient task fine-tuning enables a system to support multiple tasks with the same distilled compact model, without unnecessarily creating full model copies per each task.
We evaluate our efficient *"bistilled"* models on a range of downstream tasks from several benchmarks for multilingual NLP, including dependency parsing from Universal Dependencies (UD; Zeman et al., 2020), named entity recognition from MasakhaNER (Adelani et al., 2021), natural language inference from AmericasNLI (Ebrahimi et al., 2022), and QA from XQuAD (Artetxe et al.,
2020). We evaluate the model performance as well as its space efficiency (measured in terms of parameter count) and time efficiency (measured in terms of FLOPs and inference time). We compare it against highly relevant baselines: bilingual models pretrained from scratch and two existing multilingual distilled models, DistilmBERT (Sanh et al., 2019) and MiniLMv2 (Wang et al., 2021a).
We find that while our bilingually distilled models are twice or thrice smaller and faster than the original MMT, their performance is only slightly degraded, as illustrated in Figure 1. Our method outperforms the baselines by sizable margins, showing the advantages of (i) bilingual as opposed to multilingual distillation, and (ii) distilling models from MMTs rather than training them from scratch. We hope that our endeavour will benefit end-users of multilingual models, and potential users under-served by currently available technologies, by making NLP systems more accessible. The code and models are publicly available at https://github.com/AlanAnsell/bistil.
## 2 Background 2.1 Cross-Lingual Transfer With Mmts
Prominent examples of MMTs include mBERT
(Devlin et al., 2019), XLM-R (Conneau et al., 2020)
and mDeBERTa (He et al., 2021), among others.
Pires et al. (2019) and Wu and Dredze (2019)
showed that mBERT is surprisingly effective at zero-shot cross-lingual transfer. Zero-shot crosslingual transfer is a useful paradigm when there is little or no training data available for the task of interest in the target language, but there is training data available in some other *source* language. In the simplest form of zero-shot cross-lingual transfer, the model is trained on source language data and is then used without modification for inference on target language data. While this generally works quite well for high-resource languages, transfer performance degrades for low-resource languages, especially those under-represented or fully unseen by the MMT during its pretraining (Lauscher et al.,
2020; Pfeiffer et al., 2020; Ansell et al., 2021; Adelani et al., 2021; Ebrahimi et al., 2022).
## 2.2 Modular Adaptation Of Mmts
Because MMTs divide their capacity among many languages, they may often perform sub-optimally with respect to a single source or target language.
Furthermore, we are sometimes interested in a target language not covered by the MMT. A naive solution to these problems is to prepare the MMT
with continued pretraining on the target language before proceeding to task fine-tuning. While this can improve performance, Pfeiffer et al. (2020)
show that a more effective approach is to perform this continued pretraining in a parameter-efficient manner, specifically with the use of *adapters* (Rebuffi et al., 2017; Houlsby et al., 2019). The resulting language-specific adapter is known as a *language adapter*. When the task fine-tuning is also learned in the form of an adapter (*task adapter*), Pfeiffer et al. demonstrate that zero-shot transfer can be achieved by composing arbitrary language and task adapter pairs.
Ansell et al. (2022) extend this idea to a new parameter-fine tuning method, *sparse fine-tuning*
(SFT). An SFT of a model is where only a sparse subset of its pre-trained parameters are fine-tuned, i.e. an SFT of a pretrained model F with parameters θ can be written as F(· ; θ + ϕ), where the difference vector ϕ is sparse (Sung et al., 2021).
Language and task SFTs with difference vectors ϕL and ϕT respectively are composed through addition, i.e. yielding F(· ; θ + ϕL + ϕT ). SFTs are learned through a procedure called "Lottery Ticket Sparse Fine-Tuning" (LT-SFT), based on the Lottery Ticket algorithm of Frankle and Carbin (2019).
The k% of parameters which undergo the greatest absolute change during an initial full fine-tuning phase are selected as tunable parameters during the second "sparse" phase which yields the final SFT.
As SFT composition exhibited somewhat better zero-shot cross-lingual transfer performance across a range of tasks than adapter composition, and SFTs avoid the inference time slow-down incurred by adapters at inference time, we adopt this parameter-efficient approach throughout this work. However, we note that other modular and parameter-efficient architectures can also be tried in future work (Pfeiffer et al., 2023).
Multi-Source Training. Ansell et al. (2021) show that multi-source task adapter training, where a task adapter is trained using data from several source languages simultaneously, yields large gains in cross-lingual transfer performance as a result of the task adapter learning more language-agnostic representations. Ansell et al. (2022) find similarly large gains from multi-source training of task SFTs.
An important aspect of cross-lingual transfer with SFTs is that the source language SFT is applied during task SFT training. This requires each batch during multi-source training to consist of examples from a single source language, for which the relevant language SFT is applied during the corresponding training step.
## 2.3 Distilling Pretrained Language Models
Knowledge distillation (Bucilua et al. ˘ , 2006; Hinton et al., 2015) is a technique for compressing a pretrained large "teacher" model into a smaller
"student" model by training the student to copy the behavior of the teacher. Whereas during standard pretraining, the model receives a single "hard" label per training example, during distillation the student benefits from the enriched signal provided by the full label distribution predicted by the teacher model. Sanh et al. (2019) use this technique to produce *DistilBERT*, a distilled version of BERTbase
(Devlin et al., 2019) with 6 instead of the original 12 layers, and *DistilmBERT*, a corresponding distilled version of multilingual BERT. There has been extensive subsequent work on distillation of pretrained language models, but with less focus on distilling MMTs in particular.
## 3 Bistillation**: Methodology**
Overview. We are interested in providing NLP capabilities with limited computational resources in a specific target language T which lacks training data in the tasks of interest. A common paradigm in previous work (Pfeiffer et al., 2020; Ansell et al., 2022)
is to use cross-lingual transfer with an MMT in conjunction with parameter-efficient task and language adaptation to support multiple tasks without adding a large number of additional parameters per task, see §2.2. Our goal in this work is to replace the highly general MMT, plus optional language adaptation, with a target language-specific model which maintains the benefits of cross-lingual transfer.
An obvious first attempt would be to simply distil the MMT into a smaller model using only text in the target language. However, this monolingual distillation approach is insufficient, as during task finetuning, the monolingually distilled student model no longer "understands" the source language. Indeed, our preliminary experiments confirmed the intuition that this approach is inadequate. This problem can be overcome through *bilingual* distillation, where text from both the source and target language is used to train the student model.1 Therefore, our aim is to devise a method for deriving from an MMT M a smaller model M′*S,T,τ* to perform a given task τ in the target language T
given only training data in the source language S.
Our approach is inspired by the two-stage distillation paradigm of Jiao et al. (2020). In the first,
"general" phase, a bilingual student model M′S,T is distilled from M using the same unsupervised task
(e.g., masked language modeling) that was used for M's pretraining. In the second, "task-specific" phase, M′*S,T,τ* is produced by fine-tuning M′S,T using Mτ as its teacher, where Mτ is derived from M
by fine-tuning it for task τ . The following sections explain the details of these phases.
## 3.1 Distillation Method
Let LT be the number of Transformer layers in the teacher model, indexed from 1 to LT . The number of student model layers LS is required to evenly divide LT . We define the downscaling stride as s =
LT
LS
.
Following Jiao et al. (2020), the loss functions of the two distillation phases make use of three components, (i) *attention-based*, (ii) *hidden statebased*, and (iii) *prediction-based*. Attention-based loss is defined as follows:
$${\mathcal{L}}_{\mathrm{attn}}={\frac{1}{L_{S}}}\sum_{i=1}^{L_{S}}{\mathsf{MSE}}(A_{i}^{S},A_{i\cdot s}^{T}).\qquad(1)$$
Here, AS
iand AT
i ∈ R
l×lrefer to the attention distribution2 of Transformer layer i of the student and teacher model, respectively; l refers to the input sequence length; MSE() denotes mean squared error loss.
Hidden state-base loss is defined as follows:
$${\mathcal{L}}_{\mathrm{hidden}}={\frac{1}{L_{S}+1}}\sum_{i=0}^{L_{S}}{\mathsf{MSE}}(H_{i}^{S},H_{i\cdot s}^{T}),\quad(2)$$
where HS
iand HT
i ∈ R
l×drefer to the hidden representations output by Transformer layer i of the student and teacher model, respectively, or the output of the embedding layer when i = 0. Note that we assume that the student and teacher share the same hidden dimensionality d.
Finally, the prediction-based loss is defined as
$${\mathcal{L}}_{\mathrm{pred}}={\mathsf{C E}}(z^{S},z^{T}),$$
where z Sand z Tare the label distributions predicted by the student and teacher model, respectively, and CE denotes cross-entropy loss.
The intuition behind using attention-based and hidden state-based loss for our purposes is as follows. We (i) require good monolingual performance in the source and target language, but we also (ii) must preserve the existing alignment between these languages in the MMT which would consequently facilitate transfer between them. The intuition is that encouraging the student's intermediate representations to match those of the teacher will help to preserve this alignment.
We next describe how these loss components are employed in each phase of BISTILLATION.
## 3.2 Stage 1: General Bilingual Distillation
Initialization. We initialize all parameters of the student model by copying those of the teacher model, but retaining only the Transformer layers whose indices are multiples of s.
Vocabulary Reduction. Our distilled models can dispose of the many irrelevant tokens in the base MMT's vocabulary, i.e. those which are not frequently used in either the source or target language of interest, an idea previously proposed by Abdaoui et al. (2020). During initialization, the vocabulary of the student model is selected by retaining only the tokens of the teacher's vocabulary whose unigram probability in either the source or target language corpus is ≥ 10−6.
Teacher Language Adaptation. As we wish to be able to produce distilled models for languages not covered in the base MMT, and to obtain the best possible performance for languages which are covered, we employ language adaptation of the teacher MMT with language-specific SFTs (Ansell et al., 2022) applied on top of the original MMT during distillation.3 Since it draws examples from two languages, each with its own language SFT, bilingual 3Put simply, additionally applying language-specific SFTs
'skews' the MMT towards those particular languages.
distillation becomes a special case of multi-source training as described in §2.2. At each training step, either the source or target language is selected at random with equal probability; the batch is composed of sequences drawn from the training corpus of the chosen language, and a pretrained SFT for that language is applied to the teacher MMT.
Objective. The overall loss function for this phase is given by the sum of the attention-based and hidden state-based loss. Omitting the prediction-based loss here has the advantage of avoiding the need to evaluate the distribution of tokens predicted by the MLM head, which is costly because of the considerable size of MMTs' embedding matrices.
## 3.3 Stage 2: Task-Specific Distillation
After a general bilingual model has been distilled from the teacher MMT in Stage 1, it can be finetuned for a specific task. We first obtain the teacher for task-specific distillation by applying task-specific LT-SFT to fine-tune the base MMT
(i.e., the teacher in the general distillation phase)
for the task in question. This teacher's outputs and representations are then used to fine-tune the bilingual student model, again using task LT-SFT at the student's end. The use of parameter-efficient task adaptation here avoids adding a large number of parameters to the system for each task. The objective during this task-specific fine-tuning consists of the sum of all three losses from §3.1: Lattn, Lhidden, and Lpred.
## 4 Experimental Setup
We largely adopt the evaluation framework of Ansell et al. (2022) for direct comparability with their LT-SFT method, which they apply to undistilled MMTs, and which we apply for task-specific fine-tuning of bilingually distilled MMTs. Specifically, we evaluate zero-shot cross-lingual transfer performance on four representative tasks: dependency parsing, named entity recognition, natural language inference, and QA. While the prior work focused only on low-resource languages, our method is also highly relevant to high-resource languages: the XQuAD QA task (Artetxe et al., 2020)
provides additional insight into high-resource target language performance. Table 1 summarizes the experimental setup, including the datasets and languages considered in our experiments. In total, we cover a set of 44 typologically and geographically diverse languages, which makes them representative of cross-lingual variation (Ponti et al., 2020).
We experiment with three different MMTs as shown in Table 1: mBERT (Devlin et al.,
2019), XLM-Rbase (Conneau et al., 2020), and mDeBERTabase (He et al., 2021).
## 4.1 Baselines And Model Variants
We refer to our main method as BISTIL. We compare it with several relevant approaches. First, the LTSFT method (Ansell et al., 2022), a state-of-theart cross-lingual transfer approach, uses LT-SFT
with language adaptation on the base MMT. LTSFT
can be seen as an upper bound for BISTIL, allowing us to measure how much the performance suffers as a result of replacing the MMT with its bilingually distilled variant.
For each task except NLI,4 we also compare against a multilingually distilled MMT, i.e. with all pretraining languages used for distillation as well.
For DP and NER, where mBERT is the base MMT,
the distilled MMT is DISTILMBERT (Sanh et al.,
2019), which is similarly based on mBERT. For QA, where BISTIL uses mDeBERTa as the base MMT, no directly comparable multilingually distilled MMT is available, so we opt for a loose comparison with MINILMV2 (Wang et al., 2021a), distilled from XLM-Rlarge, which has achieved strong results on cross-lingual transfer in high-resource languages. We perform task-specific fine-tuning with LT-SFT on DistilmBERT and MiniLMv2 in the same way as for the the undistilled MMTs in the LTSFT setting. For DP and NER we also perform language adaptation of DistilmBERT.5 We also consider SCRATCH, a setting where we train bilingual models from scratch instead of distilling them from a pretrained MMT. We then apply the same LT-SFT task fine-tuning method as for the other baselines. This comparison allows us to evaluate the benefit of distilling efficient bilingual models from the MMT rather than pretraining the same-sized bilingual models from scratch.
We refer to our main method, with the taskspecific distillation stage as described in §3.3, as
| Task | Target Dataset | Source Dataset | MMT | Target Languages Arabic† , Bambara, Buryat, Cantonese, Chinese† , Erzya, | |
|---------------|-------------------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------|-------------------------------------------------|----------------------------------------------------------------------------|-----------------------------------------------------------------------------------------------------|
| (DP) | Universal Dependencies | | | | |
| Dependency | Parsing | 2.7 (Zeman et al., 2020) | Universal Dependencies 2.7 (Zeman et al., 2020) | mBERT | Faroese, Japanese† , Livvi, Maltese, Manx, North Sami, Komi Zyrian, Sanskrit, Upper Sorbian, Uyghur |
| CoNLL | 2003 | (Tjong | | | |
| Named Entity Recognition (NER) | MasakhaNER (Adelani | Kim Sang and De Meulder, 2003) | mBERT | Hausa, Igbo, Kinyarwanda, Luganda, Luo, NigerianPidgin, Swahili∗ , Wolof, Yorùbá∗ | |
| et al., 2021) | | | | | |
| Natural Language Inference (NLI) | AmericasNLI (Ebrahimi et al., 2022) | MultiNLI | (Williams | | |
| et al., 2018) | XLM-R | Aymara, Asháninka, Bribri, Guarani, Náhuatl, Otomí, Quechua, Rarámuri, Shipibo-Konibo, Wixarika Arabic† , Chinese† , German† , Greek† , Hindi† , | | | |
| (QA) | XQuAD (Artetxe et al., | | | | |
| Question | Answering | 2020) | SQuAD v1.1 (Rajpurkar | Romanian† , Russian† , Spanish† , Thai† , Turkish† , | |
| et al., 2016) | mDeBERTa | Vietnamese† | | | |
MMT Distillation LRF DRF #L D #V #P
mBERT
none - - 12 768 120K 178M
D'MBERT 2 - 6 768 120K 135M
BISTIL∗ 2 - 6 768 31K 67M
3 - 4 768 31K 53M
XLM-Rbasenone - - 12 768 250K 278M
BISTIL∗ 2 - 6 768 28K 65M
3 - 4 768 28K 51M
XLM-Rlargenone - - 24 1024 250K 560M
MINILMV2 2 2.67 12 384 250K 118M
mDeBERTanone - - 12 768 250K 278M
BISTIL∗ 2 - 6 768 41K 75M
3 - 4 768 41K 60M
BISTIL-TF (TF = *teacher forcing*). We also carry out an ablation focused on the second phase of BISTILLATION: here, we consider performing task-specific fine-tuning without the assistance of a teacher, i.e. in the same manner as LTSFT. We refer to this variant as BISTIL-ST (ST = *self-taught*).
Table 2 provides details of the model sizes, before and after distillation using the above methods, demonstrating the benefits of BISTILLATION with respect to model compactness.
## 4.2 Distillation/Adaptation Training Setup
We always perform language adaptation of the teacher model during both phases of BISTILLA-TION and during LTSFT except for mDeBERTa and MiniLMv26. For language adaptation of MMTs we use the pretrained language SFTs of Ansell et al. (2022), and we train our own for DistilmBERT. Similarly, for the LTSFT baseline, and for task adaptation of the teacher in the BISTIL-TF
configuration, we use their pretrained single-source task SFTs or train our own when necessary. When training/distilling our own models or SFTs, we generally choose hyperparameters which match those used to train their SFTs in the original work. See Appendix A for full training details and hyperparameters of all models in our comparison, and Appendix B for details of the training corpora.
We experiment with two layer reduction factors
(LRF) for BISTILLATION, 2 (a reduction from 12 to 6 layers) and 3 (12 to 4 layers). Whereas the BIS-TIL setting initializes the model from the teacher
(see §3.2), the SCRATCH setting initializes it randomly.
## 5 Results And Discussion
The results in terms of task performance are summarized in Tables 3-6. As expected, LTSFT on the undistilled MMTs performs best across all tasks.
However, BISTIL-TF with reduction factor 2 is not much worse, with a degradation in performance not exceeding 1.3 points relative to LTSFT on DP,
NER and NLI. The larger gap of 3.4 EM points on QA is likely a result of the fact that the base MMT is much more thoroughly pretrained on the high-resource languages found in XQuAD than on the lower-resource languages found in the datasets for the other tasks. It is therefore harder for BIDIS-6See Footnote 4 for MiniLMv2; mDeBERTa could in theory support language adaptation but its pretraining code was not made publicly available in time to be used in this work.
# Ltsft 53.6 16.5 25.9 55.5 42.4 60.5 19.7 27.2 55.4 45.3 47.8 25.2 42.1 16.7 34.0 37.0 37.8 - Distilmbert 47.7 9.9 19.5 49.1 31.7 53.2 16.2 20.0 43.0 34.9 37.6 17.7 31.4 11.4 28.9 33.9 30.4 -7.4 Scratch, Lrf = 2 16.9 4.9 6.7 27.8 9.1 15.2 6.7 5.6 16.1 12.7 11.1 3.5 9.3 3.9 11.5 14.6 11.0 -26.8 Bistil-St, Lrf = 2 50.9 15.8 24.1 53.7 38.3 57.1 18.7 **23.9** 52.2 **43.7 46.5** 25.2 39.8 13.3 31.8 34.8 35.6 -2.2 Bistil-St, Lrf = 3 48.2 16.1 23.4 52.1 35.0 55.1 18.1 22.2 49.9 40.3 41.3 22.2 37.6 13.3 30.7 33.4 33.7 -4.1 Bistil-Tf, Lrf = 2 **53.2 16.4 24.6 54.8 39.1 59.0 19.0** 23.8 **54.1** 43.5 46.0 **26.9 40.7** 13.1 **32.7 36.4 36.5 -1.3** Bistil-Tf, Lrf = 3 49.7 **16.4** 24.4 52.7 36.8 57.1 18.2 21.0 52.2 41.0 43.3 25.1 38.1 **14.5** 31.3 34.9 34.8 -3.0
ar bm bxr fo gv hsb ja kpv mt myv olo sa sme ug yue zh avg avg∆
Table 3: DP; LAS scores. The results with the smallest gap to the upper-bound LTSFT model are in **bold**.
LTSFT 83.5 76.7 67.4 67.9 54.7 74.6 79.4 66.3 74.8 71.7 -
DISTILMBERT 81.1 73.2 65.3 63.4 50.0 69.2 77.7 64.4 71.2 68.4 -3.3
BISTIL-ST, LRF = 2 **81.3** 74.1 65.9 66.7 53.5 72.1 77.1 64.6 72.8 69.8 -1.9
BISTIL-ST, LRF = 3 80.3 74.0 63.1 64.6 54.7 69.6 76.9 68.0 70.5 69.1 -2.6
BISTIL-TF, LRF = 2 81.0 **74.8 67.5 67.3** 55.0 **72.9 78.4 69.0 75.7 71.3 -0.4**
BISTIL-TF, LRF = 3 79.6 **74.8** 64.6 64.5 **56.7** 70.6 77.2 66.1 72.8 69.6 -2.1
| hau | ibo | kin | lug | luo | pcm | swa | wol | yor | avg | avg∆ |
|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|--------|
![6_image_0.png](6_image_0.png)
aym bzd cni gn hch nah oto quy shp tar avg avg∆
LTSFT 58.1 44.4 47.9 63.5 42.8 52.4 48.5 62.0 50.3 43.3 51.3 -
BISTIL-TF, LRF = 2 **58.9 45.7** 46.4 **62.9 44.3** 50.8 **44.0** 58.7 **47.2 43.1 50.2 -1.1**
BISTIL-TF, LRF = 3 57.7 43.6 **48.1** 60.9 41.3 **51.4** 42.6 **59.9** 45.5 40.3 49.1 -2.2
Table 5: NLI accuracy (%)
ar de el es hi ro ru th tr vi zh avg avg∆
Table 6: XQuAD; Exact Match scores.
TIL to achieve the base MMT's depth of knowledge of the target language during its relatively short distillation training time. BISTIL-TF, LRF =
2 nevertheless outperforms MiniLMv2 on QA by 1.7 EM points, despite MiniLMv2 receiving 320 times more training than each BIDISTIL model, or roughly 6 times more per language7.
Furthermore, BISTIL-TF, LRF = 2 significantly outperforms DISTILMBERT, with a 6.1 LAS gap on DP and 2.9 F1 gap on NER. BISTIL, LRF = 2 produces models roughly half the size of DISTILMBERT and that, once again, are trained for vastly less time8.
Training bilingual models from SCRATCH performs poorly, lagging behind the other methods by more than 20 points on DP.9 One crucial weakness of SCRATCH, besides its reduced monolingual performance, is a lack of alignment between its representations of the source and target languages, severely impairing cross-lingual transfer. This highlights the advantage of distilling a bilingual model from an MMT within which cross-lingual alignment is already present.
Interestingly, when we evaluate the SCRATCH
models on their *English* DP performance, we obtain an average UAS/LAS score of 81.8/77.1, which is much more competitive in relative terms with the BISTIL-TF, LRF = 2 English DP score of 91.0/88.2 than the corresponding comparison in average target language DP scores of 29.9/11.0 to 55.5/36.5. This suggests that an even larger factor in SCRATCH's weakness than its poor monolingual performance is a lack of alignment between its representations of the source and target languages,
LTSFT 56.5 64.7 61.2 62.4 57.8 69.0 61.8 56.0 56.4 57.1 60.8 60.3 -
MINILMV2 50.4 59.4 54.4 57.9 52.9 64.5 57.6 50.3 51.3 **53.8** 55.0 55.2 -5.1
BISTIL-TF, LRF = 2 **53.5 62.2 55.4 59.8 54.5 66.2 58.3 54.4 53.1** 53.4 **55.7 57.0 -3.4**
BISTIL-TF, LRF = 3 44.3 55.0 44.1 55.2 46.1 59.5 51.3 42.4 48.3 44.6 50.9 49.3 -11.1
Table 4: NER; F1 scores.
cpu ↑ gpu ↑ flops ↓
DISTILMBERT 1.41x 1.03x 0.61x
BISTIL, LRF = 2 1.44x 1.25x 0.61x
BISTIL, LRF = 3 1.71x 1.36x 0.48x
(a) DP efficiency
cpu ↑ gpu ↑ flops ↓
DISTILMBERT 1.93x 1.94x 0.50x
BISTIL, LRF = 2 1.97x 1.98x 0.50x
BISTIL, LRF = 3 2.97x 2.78x 0.33x
(b) NER efficiency
cpu ↑ gpu ↑ flops ↓
BISTIL, LRF = 2 2.02x 1.97x 0.50x
BISTIL, LRF = 3 2.89x 2.85x 0.33x
(c) NLI efficiency
cpu ↑ gpu ↑ flops ↓
MINILMV2 4.25x 3.44x 0.21x
BISTIL, LRF = 2 1.99x 1.85x 0.50x
BISTIL, LRF = 3 2.85x 2.42x 0.33x
(d) QA efficiency
severely impairing cross-lingual transfer. This highlights the advantage of distilling a bilingual model from an MMT within which cross-lingual alignment is already present.
As expected, the performance of BISTIL is somewhat weaker with a larger layer reduction factor of 3, though this is heavily task-dependent. With an LRF of 3, BISTIL-TF still comfortably outperforms DISTILMBERT on DP and NER, and does not fall much behind LRF = 2 for NLI. However, we observe a considerable degradation in performance for LRF = 3 for QA; this may indicate that a 4-layer Transformer struggles to adapt to this particular task, or that for this architecture the modest training time is not sufficient to approach the base MMT's understanding of the source and target languages.
Table 7 presents an analysis of the inference time efficiency. We measure the inference speed both on CPU with batch size 1 and GPU with the same batch size as during task-specific training. We also calculate the number of floating-point operations
(FLOPs) per example using fvcore, measured during an inference pass over the test set of the first language in each task.
For NER, NLI and QA, the efficiency results conform quite closely to the intuitive expectation that a model's inference time should scale linearly with its number of layers; that is, BIDISTIL with LRF = 2 is generally around twice as fast as the base MMT. For DP, we observe a seemingly sublinear scaling which is caused by the very large biaffine parsing head, consisting of ∼23M parameters. The significant cost of applying the model head contributes equally to all models regardless of their degree of distillation. Despite having a moderate LRF of 2, MINILMV2 exhibits impressive speed as a result of the fact that it additionally has a smaller hidden dimension than its teacher (see Table 2), a technique which we do not consider for BIDISTIL, but may be a promising avenue for future work.
We argue that BIDISTIL accomplishes its aim by achieving two- to three-fold reductions in inference time and model size without sacrificing much in the way of raw performance. Its superior performance relative to multilingually distilled models despite its comparatively very modest training budget supports the assertion that specializing multilingual models for a specific transfer pair during distillation helps to avoid performance degradation resulting from the curse of multilinguality.
## 6 Related Work
One strand of prior work focuses on parameterefficient adaptation of pretrained MMTs, i.e. adaptation by adding/modifying a small subset of parameters. Adapters (Rebuffi et al., 2017; Houlsby et al., 2019) have been used extensively for this purpose (Üstün et al., 2020), with the MAD-X framework of Pfeiffer et al. (2020) becoming a starting point for several further developments (Vidoni et al., 2020; Wang et al., 2021b; Parovic et al. ´ ,
2022), where a notable theme is adapting MMTs to unseen languages (Ansell et al., 2021; Pfeiffer et al.,
2021). Ansell et al. (2022) propose composable sparse fine-tunings as an alternative to adapters.
Pfeiffer et al. (2022) create a modular MMT
from scratch, where some parameters are shared among all languages and others are languagespecific. This allows the model to dedicate considerable capacity to every language without each language-specific model becoming overly large; thus it is quite similar in its aims to this work.
A variety of approaches have been proposed for general distillation of pretrained language models. The simplest form uses only soft target probabilities predicted by the teacher model as the training signal for the student (Sanh et al., 2019).
Other approaches try to align the hidden states and self-attention distributions of the student and teacher (Sun et al., 2020; Jiao et al., 2020) and/or finer-grained aspects of the self-attention mechanism (Wang et al., 2020, 2021a). Mukherjee et al.
(2021) initialize the student's embedding matrix with a factorization of the teacher's for better performance when their hidden dimensions differ. Of these, Sanh et al. (2019); Wang et al. (2020, 2021a);
Mukherjee et al. (2021) apply their methods to produce distilled versions of MMTs.
Parovic et al. ´ (2022) adapt pretrained MMTs to specific transfer pairs with adapters; this approach is similar to ours in spirit, but it is aimed towards improving performance rather than efficiency. Minixhofer et al. (2022) learn to transfer full monolingual models across languages. The only work prior we are aware of which creates purely bilingual models for cross-lingual transfer is that of Tran (2020). This approach starts with a monolingual pretrained source language model, initializes target language embeddings via an alignment procedure, and then continues training the model with the added target embeddings on both languages.
## 7 Conclusions
While MMTs are an effective tool for cross-lingual transfer, their broad language coverage makes them unnecessarily costly to deploy in the frequentlyencountered situation where capability is required in only a single, often low-resource, language. We have proposed BISTILLATION, a method of training more efficient models suited to this scenario which works by distilling an MMT using only the source-target language pair of interest. We show that this approach produces models that offer an excellent trade off between target language performance, efficiency, and model compactness. The
'bistilled' models exhibit only a slight decrease in performance relative to their base MMTs whilst achieving considerable reduction in both model size and inference time. Their results also compare favorably to those of multilingually distilled MMTs despite receiving substantially less training even on a per-language basis.
## Limitations
While the results of our experiments seem sufficient to validate the concept and our general approach to bilingual distillation, we have not carried out a detailed systematic analysis of alternative implementations of the various aspects of our methods, such as different student model initializations, distillation objectives and hyperparameter settings. Furthermore, our BISTIL models are likely undertrained due to limited computational resources. Consequently, we do not claim our specific implementation of bilingual distillation to be optimal or even close to optimal. Areas that warrant further investigation toward realizing the full potential of this approach include the use of hidden dimension reduction, which yielded impressive speed gains for MiniLMv2 in our experiments, and other innovations in distillation such as progressive knowledge transfer (Mukherjee et al., 2021).
With the exception of improved efficiency, our BISTIL models inherit the limitations of the MMTs from which they are distilled; notably, there is a discrepancy between the performance on high- and low-resource languages resulting from the distribution of data used during MMT pretraining.
In this work, we have only considered English as the source language; some target languages may benefit from other transfer sources. Future work may also consider the use of multi-source transfer, which would entail distilling with more than two languages. Here the challenge would be optimizing the balance of model capacity allocated to source languages versus the target language.
## Acknowledgements
Alan wishes to thank David and Claudia Harding for their generous support via the Harding Distinguished Postgraduate Scholarship Programme.
Ivan Vulic is supported by a personal Royal So- ´
ciety University Research Fellowship *'Inclusive* and Sustainable Language Technology for a Truly Multilingual World' (no 221137; 2022–).
## References
David Ifeoluwa Adelani, Jade Abbott, Graham Neubig, Daniel D'souza, Julia Kreutzer, Constantine Lignos, Chester Palen-Michel, Happy Buzaaba, Shruti Rijhwani, Sebastian Ruder, Stephen Mayhew, Israel Abebe Azime, Shamsuddeen H. Muhammad, Chris Chinenye Emezue, Joyce Nakatumba-Nabende, Perez Ogayo, Aremu Anuoluwapo, Catherine Gitau, Derguene Mbaye, Jesujoba Alabi, Seid Muhie Yimam, Tajuddeen Rabiu Gwadabe, Ignatius Ezeani, Rubungo Andre Niyongabo, Jonathan Mukiibi, Verrah Otiende, Iroro Orife, Davis David, Samba Ngom, Tosin Adewumi, Paul Rayson, Mofetoluwa Adeyemi, Gerald Muriuki, Emmanuel Anebi, Chiamaka Chukwuneke, Nkiruka Odu, Eric Peter Wairagala, Samuel Oyerinde, Clemencia Siro, Tobius Saul Bateesa, Temilola Oloyede, Yvonne Wambui, Victor Akinode, Deborah Nabagereka, Maurice Katusiime, Ayodele Awokoya, Mouhamadane MBOUP, Dibora Gebreyohannes, Henok Tilaye, Kelechi Nwaike, Degaga Wolde, Abdoulaye Faye, Blessing Sibanda, Orevaoghene Ahia, Bonaventure F. P. Dossou, Kelechi Ogueji, Thierno Ibrahima DIOP, Abdoulaye Diallo, Adewale Akinfaderin, Tendai Marengereke, and Salomey Osei. 2021. MasakhaNER: Named entity recognition for African languages. Transactions of the Association for Computational Linguistics, 9:1116–1131.
Željko Agic and Ivan Vuli ´ c. 2019. ´ JW300: A widecoverage parallel corpus for low-resource languages.
In *Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics*, pages 3204–
3210, Florence, Italy. Association for Computational Linguistics.
Jesujoba O. Alabi, David Ifeoluwa Adelani, Marius Mosbach, and Dietrich Klakow. 2022. Adapting pretrained language models to African languages via multilingual adaptive fine-tuning. In *Proceedings of* the 29th International Conference on Computational Linguistics, pages 4336–4349, Gyeongju, Republic of Korea. International Committee on Computational Linguistics.
Alan Ansell, Edoardo Ponti, Anna Korhonen, and Ivan Vulic. 2022. ´ Composable sparse fine-tuning for crosslingual transfer. In *Proceedings of the 60th Annual* Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1778–1796, Dublin, Ireland. Association for Computational Linguistics.
Alan Ansell, Edoardo Maria Ponti, Jonas Pfeiffer, Sebastian Ruder, Goran Glavaš, Ivan Vulic, and Anna ´
Korhonen. 2021. MAD-G: Multilingual adapter generation for efficient cross-lingual transfer. In *Findings of the Association for Computational Linguistics: EMNLP 2021*, pages 4762–4781, Punta Cana, Dominican Republic. Association for Computational Linguistics.
Mikel Artetxe, Sebastian Ruder, and Dani Yogatama.
2020. On the cross-lingual transferability of monolingual representations. In Proceedings of the 58th Annual Meeting of the Association for Computational
Linguistics, pages 4623–4637, Online. Association for Computational Linguistics.
David Brambila. 1976. *Diccionario RaramuriCastellano: Tarahumar*.
Cristian Bucilua, Rich Caruana, and Alexandru ˘
Niculescu-Mizil. 2006. Model compression. In Proceedings of the 12th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD '06, page 535–541, New York, NY, USA.
Association for Computing Machinery.
Gina Bustamante, Arturo Oncevay, and Roberto Zariquiey. 2020. No data to crawl? monolingual corpus creation from PDF files of truly low-resource languages in Peru. In *Proceedings of the 12th Language Resources and Evaluation Conference*, pages 2914–2923, Marseille, France. European Language Resources Association.
Luis Chiruzzo, Pedro Amarilla, Adolfo Ríos, and Gustavo Giménez Lugo. 2020. Development of a Guarani - Spanish parallel corpus. In Proceedings of the 12th Language Resources and Evaluation Conference, pages 2629–2633, Marseille, France. European Language Resources Association.
Hyung Won Chung, Thibault Fevry, Henry Tsai, Melvin Johnson, and Sebastian Ruder. 2020. Rethinking embedding coupling in pre-trained language models. In International Conference on Learning Representations.
Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer, and Veselin Stoyanov. 2020. Unsupervised cross-lingual representation learning at scale. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*, pages 8440–
8451, Online. Association for Computational Linguistics.
Rubén Cushimariano Romano and Richer C. Sebastián Q. 2008. Ñaantsipeta asháninkaki birakochaki. diccionario asháninka-castellano. versión preliminar. http://www.lengamer.org/
publicaciones/diccionarios/.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In *Proceedings of the 2019 Conference of* the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics.
Timothy Dozat and Christopher D. Manning. 2017.
Deep biaffine attention for neural dependency parsing. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings. OpenReview.net.
Abteen Ebrahimi, Manuel Mager, Arturo Oncevay, Vishrav Chaudhary, Luis Chiruzzo, Angela Fan, John Ortega, Ricardo Ramos, Annette Rios, Ivan Vladimir Meza Ruiz, Gustavo Giménez-Lugo, Elisabeth Mager, Graham Neubig, Alexis Palmer, Rolando Coto-Solano, Thang Vu, and Katharina Kann. 2022.
AmericasNLI: Evaluating zero-shot natural language understanding of pretrained multilingual models in truly low-resource languages. In *Proceedings of the* 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 6279–6299, Dublin, Ireland. Association for Computational Linguistics.
Isaac Feldman and Rolando Coto-Solano. 2020. Neural machine translation models with back-translation for the extremely low-resource indigenous language Bribri. In *Proceedings of the 28th International Conference on Computational Linguistics*, pages 3965–
3976, Barcelona, Spain (Online). International Committee on Computational Linguistics.
Jonathan Frankle and Michael Carbin. 2019. The lottery ticket hypothesis: Finding sparse, trainable neural networks. In International Conference on Learning Representations.
Ana-Paula Galarreta, Andrés Melgar, and Arturo Oncevay. 2017. Corpus creation and initial SMT experiments between Spanish and Shipibo-konibo. In Proceedings of the International Conference Recent Advances in Natural Language Processing, RANLP
2017, pages 238–244, Varna, Bulgaria. INCOMA
Ltd.
Goran Glavaš and Ivan Vulic. 2021. ´ Is supervised syntactic parsing beneficial for language understanding tasks? an empirical investigation. In *Proceedings* of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 3090–3104, Online. Association for Computational Linguistics.
Ximena Gutierrez-Vasques, Gerardo Sierra, and Isaac Hernandez Pompa. 2016. Axolotl: a web accessible parallel corpus for Spanish-Nahuatl. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16),
pages 4210–4214, Portorož, Slovenia. European Language Resources Association (ELRA).
Pengcheng He, Jianfeng Gao, and Weizhu Chen. 2021.
Debertav3: Improving deberta using electra-style pretraining with gradient-disentangled embedding sharing.
Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. 2015.
Distilling the knowledge in a neural network.
Neil Houlsby, Andrei Giurgiu, Stanislaw Jastrzebski, Bruna Morrone, Quentin De Laroussilhe, Andrea Gesmundo, Mona Attariyan, and Sylvain Gelly. 2019.
Parameter-efficient transfer learning for NLP. In Proceedings of the 36th International Conference on Machine Learning, volume 97 of Proceedings of Machine Learning Research, pages 2790–2799.
PMLR.
Xiaoqi Jiao, Yichun Yin, Lifeng Shang, Xin Jiang, Xiao Chen, Linlin Li, Fang Wang, and Qun Liu. 2020.
TinyBERT: Distilling BERT for natural language understanding. In *Findings of the Association for Computational Linguistics: EMNLP 2020*, pages 4163–
4174, Online. Association for Computational Linguistics.
Pratik Joshi, Sebastin Santy, Amar Budhiraja, Kalika Bali, and Monojit Choudhury. 2020. The state and fate of linguistic diversity and inclusion in the NLP
world. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6282–6293, Online. Association for Computational Linguistics.
Anne Lauscher, Vinit Ravishankar, Ivan Vulic, and ´
Goran Glavaš. 2020. From zero to hero: On the limitations of zero-shot language transfer with multilingual Transformers. In *Proceedings of the 2020* Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 4483–4499, Online. Association for Computational Linguistics.
Ilya Loshchilov and Frank Hutter. 2019. Decoupled weight decay regularization. In *International Conference on Learning Representations*.
Manuel Mager, Diónico Carrillo, and Ivan Meza. 2018.
Probabilistic finite-state morphological segmenter for wixarika (huichol) language. Journal of Intelligent
& Fuzzy Systems, 34(5):3081–3087.
Elena Mihas. 2011. *Añaani katonkosatzi parenini, El idioma del alto Perené*. Milwaukee, WI: Clarks Graphics.
Benjamin Minixhofer, Fabian Paischer, and Navid Rekabsaz. 2022. WECHSEL: Effective initialization of subword embeddings for cross-lingual transfer of monolingual language models. In *Proceedings of* the 2022 Conference of the North American Chapter of the Association for Computational Linguistics:
Human Language Technologies, pages 3992–4006, Seattle, United States. Association for Computational Linguistics.
Subhabrata Mukherjee, Ahmed Hassan Awadallah, and Jianfeng Gao. 2021. Xtremedistiltransformers: Task transfer for task-agnostic distillation.
John E Ortega, Richard Alexander Castro-Mamani, and Jaime Rafael Montoya Samame. 2020. Overcoming resistance: The normalization of an Amazonian tribal language. In Proceedings of the 3rd Workshop on Technologies for MT of Low Resource Languages, pages 1–13, Suzhou, China. Association for Computational Linguistics.
Marinela Parovic, Goran Glavaš, Ivan Vuli ´ c, and Anna ´
Korhonen. 2022. BAD-X: Bilingual adapters improve zero-shot cross-lingual transfer. In *Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational*
Linguistics: Human Language Technologies, pages 1791–1799, Seattle, United States. Association for Computational Linguistics.
Jonas Pfeiffer, Naman Goyal, Xi Lin, Xian Li, James Cross, Sebastian Riedel, and Mikel Artetxe. 2022.
Lifting the curse of multilinguality by pre-training modular transformers. In *Proceedings of the 2022* Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 3479–3495, Seattle, United States. Association for Computational Linguistics.
Jonas Pfeiffer, Sebastian Ruder, Ivan Vulic, and ´
Edoardo Maria Ponti. 2023. Modular deep learning.
CoRR, abs/2302.11529.
Jonas Pfeiffer, Ivan Vulic, Iryna Gurevych, and Se- ´
bastian Ruder. 2020. MAD-X: An Adapter-Based Framework for Multi-Task Cross-Lingual Transfer.
In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP),
pages 7654–7673, Online. Association for Computational Linguistics.
Jonas Pfeiffer, Ivan Vulic, Iryna Gurevych, and Sebas- ´
tian Ruder. 2021. UNKs everywhere: Adapting multilingual language models to new scripts. In *Proceedings of the 2021 Conference on Empirical Methods in* Natural Language Processing, pages 10186–10203, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Telmo Pires, Eva Schlinger, and Dan Garrette. 2019.
How multilingual is multilingual BERT? In *Proceedings of the 57th Annual Meeting of the Association for* Computational Linguistics, pages 4996–5001, Florence, Italy. Association for Computational Linguistics.
Edoardo Maria Ponti, Goran Glavaš, Olga Majewska, Qianchu Liu, Ivan Vulic, and Anna Korhonen. 2020. ´
XCOPA: A multilingual dataset for causal commonsense reasoning. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 2362–2376, Online. Association for Computational Linguistics.
Edoardo Maria Ponti, Helen O'Horan, Yevgeni Berzak, Ivan Vulic, Roi Reichart, Thierry Poibeau, Ekate- ´
rina Shutova, and Anna Korhonen. 2019. Modeling language variation and universals: A survey on typological linguistics for natural language processing.
Computational Linguistics, 45(3):559–601.
Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100,000+ questions for machine comprehension of text. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2383–2392, Austin, Texas. Association for Computational Linguistics.
Sylvestre-Alvise Rebuffi, Hakan Bilen, and Andrea Vedaldi. 2017. Learning multiple visual domains
with residual adapters. In *Advances in Neural Information Processing Systems*, volume 30. Curran Associates, Inc.
Victor Sanh, Lysandre Debut, Julien Chaumond, and Thomas Wolf. 2019. Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter.
Zhiqing Sun, Hongkun Yu, Xiaodan Song, Renjie Liu, Yiming Yang, and Denny Zhou. 2020. MobileBERT:
a compact task-agnostic BERT for resource-limited devices. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*,
pages 2158–2170, Online. Association for Computational Linguistics.
Yi-Lin Sung, Varun Nair, and Colin Raffel. 2021. Training neural networks with fixed sparse masks. In *Advances in Neural Information Processing Systems 34:*
Annual Conference on Neural Information Processing Systems 2021, NeurIPS 2021, December 6-14, 2021, virtual, pages 24193–24205.
Jörg Tiedemann. 2012. Parallel data, tools and interfaces in OPUS. In *Proceedings of the Eighth International Conference on Language Resources and* Evaluation (LREC'12), pages 2214–2218, Istanbul, Turkey. European Language Resources Association
(ELRA).
Erik F. Tjong Kim Sang and Fien De Meulder.
2003. Introduction to the CoNLL-2003 shared task: Language-independent named entity recognition. In Proceedings of the Seventh Conference on Natural Language Learning at HLT-NAACL 2003, pages 142–
147.
Ke Tran. 2020. From english to foreign languages:
Transferring pre-trained language models.
Ahmet Üstün, Arianna Bisazza, Gosse Bouma, and Gertjan van Noord. 2020. UDapter: Language adaptation for truly Universal Dependency parsing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 2302–2315, Online. Association for Computational Linguistics.
Marko Vidoni, Ivan Vulic, and Goran Glavaš. 2020. ´
Orthogonal language and task adapters in zero-shot cross-lingual transfer.
Wenhui Wang, Hangbo Bao, Shaohan Huang, Li Dong, and Furu Wei. 2021a. MiniLMv2: Multi-head selfattention relation distillation for compressing pretrained transformers. In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, pages 2140–2151, Online. Association for Computational Linguistics.
Wenhui Wang, Furu Wei, Li Dong, Hangbo Bao, Nan Yang, and Ming Zhou. 2020. Minilm: Deep selfattention distillation for task-agnostic compression of pre-trained transformers. In Advances in Neural Information Processing Systems, volume 33, pages 5776–5788. Curran Associates, Inc.
Xinyi Wang, Yulia Tsvetkov, Sebastian Ruder, and Graham Neubig. 2021b. Efficient test time adapter ensembling for low-resource language varieties. In Findings of the Association for Computational Linguistics: EMNLP 2021, pages 730–737, Punta Cana, Dominican Republic. Association for Computational Linguistics.
Adina Williams, Nikita Nangia, and Samuel Bowman.
2018. A broad-coverage challenge corpus for sentence understanding through inference. In *Proceedings of the 2018 Conference of the North American* Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1112–1122, New Orleans, Louisiana. Association for Computational Linguistics.
Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Transformers: State-of-the-art natural language processing.
In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38–45, Online. Association for Computational Linguistics.
Shijie Wu and Mark Dredze. 2019. Beto, bentz, becas:
The surprising cross-lingual effectiveness of BERT.
In *Proceedings of the 2019 Conference on Empirical* Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 833–844, Hong Kong, China. Association for Computational Linguistics.
Daniel Zeman, Joakim Nivre, Mitchell Abrams, Elia Ackermann, Noëmi Aepli, Hamid Aghaei, Željko Agic, Amir Ahmadi, Lars Ahrenberg, Chika Kennedy ´ Ajede, Gabriele Aleksandravi ˙ ciˇ ut¯ e, Ika Alfina, Lene ˙
Antonsen, Katya Aplonova, Angelina Aquino, Carolina Aragon, Maria Jesus Aranzabe, Hórunn Arnardóttir, Gashaw Arutie, Jessica Naraiswari Arwidarasti, Masayuki Asahara, Luma Ateyah, Furkan Atmaca, Mohammed Attia, Aitziber Atutxa, Liesbeth Augustinus, Elena Badmaeva, Keerthana Balasubramani, Miguel Ballesteros, Esha Banerjee, Sebastian Bank, Verginica Barbu Mititelu, Victoria Basmov, Colin Batchelor, John Bauer, Seyyit Talha Bedir, Kepa Bengoetxea, Gözde Berk, Yevgeni Berzak, Irshad Ahmad Bhat, Riyaz Ahmad Bhat, Erica Biagetti, Eckhard Bick, Agne Bielinskien ˙ e, Kristín Bjarnadót- ˙
tir, Rogier Blokland, Victoria Bobicev, Loïc Boizou, Emanuel Borges Völker, Carl Börstell, Cristina Bosco, Gosse Bouma, Sam Bowman, Adriane Boyd, Kristina Brokaite, Aljoscha Burchardt, Marie Can- ˙
dito, Bernard Caron, Gauthier Caron, Tatiana Cavalcanti, Gül¸sen Cebiroglu Eryi ˘ git, Flavio Massimil- ˘
iano Cecchini, Giuseppe G. A. Celano, Slavomír Cé- ˇ
plö, Savas Cetin, Özlem Çetinoglu, Fabricio Chalub, ˘
Ethan Chi, Yongseok Cho, Jinho Choi, Jayeol Chun, Alessandra T. Cignarella, Silvie Cinková, Aurélie Collomb, Çagrı Çöltekin, Miriam Connor, Ma- ˘
rine Courtin, Elizabeth Davidson, Marie-Catherine de Marneffe, Valeria de Paiva, Mehmet Oguz Derin, Elvis de Souza, Arantza Diaz de Ilarraza, Carly Dickerson, Arawinda Dinakaramani, Bamba Dione, Peter Dirix, Kaja Dobrovoljc, Timothy Dozat, Kira Droganova, Puneet Dwivedi, Hanne Eckhoff, Marhaba Eli, Ali Elkahky, Binyam Ephrem, Olga Erina, Tomaž Erjavec, Aline Etienne, Wograine Evelyn, Sidney Facundes, Richárd Farkas, Marília Fernanda, Hector Fernandez Alcalde, Jennifer Foster, Cláudia Freitas, Kazunori Fujita, Katarína Gajdošová, Daniel Galbraith, Marcos Garcia, Moa Gärdenfors, Sebastian Garza, Fabrício Ferraz Gerardi, Kim Gerdes, Filip Ginter, Iakes Goenaga, Koldo Gojenola, Memduh Gökırmak, Yoav Goldberg, Xavier Gómez Guinovart, Berta González Saavedra, Bernadeta Griciut¯ e, Matias Grioni, Loïc Grobol, Nor- ˙
munds Gruz¯ ¯ıtis, Bruno Guillaume, Céline GuillotBarbance, Tunga Güngör, Nizar Habash, Hinrik Hafsteinsson, Jan Hajic, Jan Haji ˇ c jr., Mika Hämäläi- ˇ
nen, Linh Hà My, Na-Rae Han, Muhammad Yudi- ˜ stira Hanifmuti, Sam Hardwick, Kim Harris, Dag Haug, Johannes Heinecke, Oliver Hellwig, Felix Hennig, Barbora Hladká, Jaroslava Hlavácová, ˇ
Florinel Hociung, Petter Hohle, Eva Huber, Jena Hwang, Takumi Ikeda, Anton Karl Ingason, Radu Ion, Elena Irimia, O.lájídé Ishola, Tomáš Jelínek, Anders Johannsen, Hildur Jónsdóttir, Fredrik Jørgensen, Markus Juutinen, Sarveswaran K, Hüner Ka¸sıkara, Andre Kaasen, Nadezhda Kabaeva, Sylvain Kahane, Hiroshi Kanayama, Jenna Kanerva, Boris Katz, Tolga Kayadelen, Jessica Kenney, Václava Kettnerová, Jesse Kirchner, Elena Klementieva, Arne Köhn, Abdullatif Köksal, Kamil Kopacewicz, Timo Korkiakangas, Natalia Kotsyba, Jolanta Kovalevskaite, Simon Krek, Parameswari Krishna- ˙
murthy, Sookyoung Kwak, Veronika Laippala, Lucia Lam, Lorenzo Lambertino, Tatiana Lando, Septina Dian Larasati, Alexei Lavrentiev, John Lee, Phương Lê Hông, Alessandro Lenci, Saran Lert- `
pradit, Herman Leung, Maria Levina, Cheuk Ying Li, Josie Li, Keying Li, Yuan Li, KyungTae Lim, Krister Lindén, Nikola Ljubešic, Olga Loginova, ´
Andry Luthfi, Mikko Luukko, Olga Lyashevskaya, Teresa Lynn, Vivien Macketanz, Aibek Makazhanov, Michael Mandl, Christopher Manning, Ruli Manurung, Cat˘ alina M ˘ ar˘ anduc, David Mare ˘ cek, Katrin ˇ Marheinecke, Héctor Martínez Alonso, André Martins, Jan Mašek, Hiroshi Matsuda, Yuji Matsumoto, Ryan McDonald, Sarah McGuinness, Gustavo Mendonça, Niko Miekka, Karina Mischenkova, Margarita Misirpashayeva, Anna Missilä, Cat˘ alin Mi- ˘
titelu, Maria Mitrofan, Yusuke Miyao, AmirHossein Mojiri Foroushani, Amirsaeid Moloodi, Simonetta Montemagni, Amir More, Laura Moreno Romero, Keiko Sophie Mori, Shinsuke Mori, Tomohiko Morioka, Shigeki Moro, Bjartur Mortensen, Bohdan Moskalevskyi, Kadri Muischnek, Robert Munro, Yugo Murawaki, Kaili Müürisep, Pinkey Nainwani, Mariam Nakhlé, Juan Ignacio Navarro Horñiacek, Anna Nedoluzhko, Gunta Nešpore-Berzkalne, L ¯ ương Nguy˜ên Thi
., Huy`ên Nguy˜ên Thi
. Minh, Yoshihiro Nikaido, Vitaly Nikolaev, Rattima Nitisaroj, Alireza Nourian, Hanna Nurmi, Stina Ojala, Atul Kr. Ojha, Adédayo.' Olúòkun, Mai Omura, Emeka Onwuegbuzia, Petya Osenova, Robert Östling, Lilja Øvrelid, ¸Saziye Betül Özate¸s, Arzucan Özgür, Balkız Öztürk Ba¸saran, Niko Partanen, Elena Pascual, Marco Passarotti, Agnieszka Patejuk, Guilherme Paulino-Passos, Angelika Peljak-Łapinska, Siyao ´ Peng, Cenel-Augusto Perez, Natalia Perkova, Guy Perrier, Slav Petrov, Daria Petrova, Jason Phelan, Jussi Piitulainen, Tommi A Pirinen, Emily Pitler, Barbara Plank, Thierry Poibeau, Larisa Ponomareva, Martin Popel, Lauma Pretkalnin, a, Sophie Prévost, Prokopis Prokopidis, Adam Przepiórkowski, Tiina Puolakainen, Sampo Pyysalo, Peng Qi, Andriela Rääbis, Alexandre Rademaker, Taraka Rama, Loganathan Ramasamy, Carlos Ramisch, Fam Rashel, Mohammad Sadegh Rasooli, Vinit Ravishankar, Livy Real, Petru Rebeja, Siva Reddy, Georg Rehm, Ivan Riabov, Michael Rießler, Erika Rimkute, Larissa Ri- ˙
naldi, Laura Rituma, Luisa Rocha, Eiríkur Rögnvaldsson, Mykhailo Romanenko, Rudolf Rosa, Valentin Ros, ca, Davide Rovati, Olga Rudina, Jack Rueter, Kristján Rúnarsson, Shoval Sadde, Pegah Safari, Benoît Sagot, Aleksi Sahala, Shadi Saleh, Alessio Salomoni, Tanja Samardžic, Stephanie Samson, ´
Manuela Sanguinetti, Dage Särg, Baiba Saul¯ıte, Yanin Sawanakunanon, Kevin Scannell, Salvatore Scarlata, Nathan Schneider, Sebastian Schuster, Djamé Seddah, Wolfgang Seeker, Mojgan Seraji, Mo Shen, Atsuko Shimada, Hiroyuki Shirasu, Muh Shohibussirri, Dmitry Sichinava, Einar Freyr Sigurðsson, Aline Silveira, Natalia Silveira, Maria Simi, Radu Simionescu, Katalin Simkó, Mária Šimková, Kiril Simov, Maria Skachedubova, Aaron Smith, Isabela Soares-Bastos, Carolyn Spadine, Steinhór Steingrímsson, Antonio Stella, Milan Straka, Emmett Strickland, Jana Strnadová, Alane Suhr, Yogi Lesmana Sulestio, Umut Sulubacak, Shingo Suzuki, Zsolt Szántó, Dima Taji, Yuta Takahashi, Fabio Tamburini, Mary Ann C. Tan, Takaaki Tanaka, Samson Tella, Isabelle Tellier, Guillaume Thomas, Liisi Torga, Marsida Toska, Trond Trosterud, Anna Trukhina, Reut Tsarfaty, Utku Türk, Francis Tyers, Sumire Uematsu, Roman Untilov, Zdenka Ure- ˇ
šová, Larraitz Uria, Hans Uszkoreit, Andrius Utka, Sowmya Vajjala, Daniel van Niekerk, Gertjan van Noord, Viktor Varga, Eric Villemonte de la Clergerie, Veronika Vincze, Aya Wakasa, Joel C. Wallenberg, Lars Wallin, Abigail Walsh, Jing Xian Wang, Jonathan North Washington, Maximilan Wendt, Paul Widmer, Seyi Williams, Mats Wirén, Christian Wittern, Tsegay Woldemariam, Tak-sum Wong, Alina Wróblewska, Mary Yako, Kayo Yamashita, Naoki Yamazaki, Chunxiao Yan, Koichi Yasuoka, Marat M. Yavrumyan, Zhuoran Yu, Zdenek Žabokrt- ˇ ský, Shorouq Zahra, Amir Zeldes, Hanzhi Zhu, and Anna Zhuravleva. 2020. Universal Dependencies 2.7. LINDAT/CLARIAH-CZ digital library at the Institute of Formal and Applied Linguistics (ÚFAL),
Faculty of Mathematics and Physics, Charles Univer-
## A **Training Details And Hyperparameters**
As we evaluate over many languages and tasks, we carry out a single run per (task, language, configuration) triple.
## A.1 Language Distillation/Adaptation
The following are constant across all language distillation/SFT training: we use a batch size of 8 and a maximum sequence length of 256; model checkpoints are evaluated every 1,000 steps (5,000 for high-resource languages) on a held-out set of 5%
of the corpus (1% for high-resource languages),
and the one with the smallest loss is selected at the end of training; we use the AdamW optimizer
(Loshchilov and Hutter, 2019) with linear decay without any warm-up.
During LT-SFT training of DistilmBERT's language SFTs, the dense and sparse fine-tuning phases each last the lesser of 100,000 steps or 200 epochs, but at least 30,000 steps if 200 epochs is less. The initial learning rate is 5 · 10−5. The SFT
density is set to 4%.10 When distilling bilingual models or learning them from scratch, training lasts 200,000 steps (to equal the total length of the two phases of LT-SFT
training). The initial learning rate is 10−4. The model architecture and hyperparameters are identical to the teacher MMT's other than a reduction in the number of layers and the use of vocabulary reduction as described in §3.2.
## A.2 Task Distillation/Adaptation
For DP and NER, we train task SFTs for 3 epochs in the dense phase of LT-SFT and 10 epochs in the sparse phase, evaluating the model checkpoint on the validation set at the end of each epoch, and taking the best checkpoint at the end of training.
The selection metric is labeled attachment score for DP and F1-score for NER. The initial learning rate is 5 · 10−5 with linear decay. For NER, we use the standard token-level single-layer multi-class model head. For DP, we use the shallow variant (Glavaš and Vulic´, 2021) of the biaffine dependency parser of Dozat and Manning (2017). For NLI, we train for 5 epochs with batch size 32, with checkpoint evaluation on the validation set every 625 steps, 10This is similar but not identical to the density used by Ansell et al. (2022), who use a very specific number of trainable parameters for comparability to their baseline; we prefer to use a round number.
and an initial learning rate of 2 · 10−5. We apply a two-layer multi-class classification head atop the model output corresponding to the [CLS] token.
For QA, we train for 5 epochs with a batch size of 12, with checkpoint evaluation every 2000 steps and an initial learning rate of 3 · 10−5. The singlelayer model head independently predicts the start and end positions of the answer span, and at inference time the span whose endpoints have the largest sum of logits is selected.
We set the density of our task SFTs to 8%, which Ansell et al. (2022) found to offer the best task performance in all their experiments.
## B Languages
Source English en Indo-European, Germanic EWT Wikipedia
Hausa hau Afro-Asiatic, Chadic Aymara aym Aymaran Arabic ar Afro-Asiatic, Semitic
| DP NER NLI QA |
|-----------------|
| Task | Language | ISO Code | Family | UD Treebank | Corpus source(s) |
|-----------------|-----------------------------|--------------------------|-----------------------------------------------------------------------------------------------------------|---------------------|--------------------|
| Arabic | ar | Afro-Asiatic, Semitic | PUD | | |
| Bambara | bm | Mande | CRB | | |
| Buryat | bxr | Mongolic | BDT | | |
| Cantonese | yue | Sino-Tibetan | HK | | |
| Chinese | zh | Sino-Tibetan | GSD | | |
| Erzya | myv | Uralic, Mordvin | JR | | |
| Faroese | fo | Indo-European, Germanic | FarPaHC | | |
| Japanese | ja | Japanese | GSD | | |
| Livvi | olo | Uralic, Finnic | KKPP | | |
| Maltese | mt | Afro-Asiatic, Semitic | MUDT | | |
| Manx | gv | Indo-European, Celtic | Cadhan | | |
| North Sami | sme | Uralic, Sami | Giella | | |
| Komi Zyrian | kpv | Uralic, Permic | Lattice | | |
| Sanskrit | sa | Indo-European, Indic | UFAL | | |
| Upper Sorbian | hsb | Indo-European, Slavic | UFAL | | |
| Uyghur | ug | Turkic, Southeastern | UDT | Wikipedia Wikipedia | |
| Igbo | ibo | Niger-Congo, Volta-Niger | Wikipedia | | |
| Kinyarwanda | kin | Niger-Congo, Bantu | Wikipedia | | |
| Luganda | lug | Niger-Congo, Bantu | Wikipedia | | |
| Luo | luo | Nilo-Saharan | Luo News Dataset (Adelani et al., 2021) | | |
| Nigerian-Pidgin | pcm | English Creole | JW300 (Agic and Vuli ´ c´, 2019) | | |
| Swahili | swa | Niger-Congo, Bantu | Wikipedia | | |
| Wolof | wol | Niger-Congo, Senegambian | Wikipedia | | |
| Yorùbá | yor | Niger-Congo, Volta-Niger | Wikipedia | | |
| N/A | Tiedemann (2012); Wikipedia | | | | |
| Asháninka | cni | Arawakan | Ortega et al. (2020); Cushimariano Romano and Sebastián Q. (2008); Mihas (2011); Bustamante et al. (2020) | | |
| Bribri | bzd | Chibchan, Talamanca | Feldman and Coto-Solano (2020) | | |
| Guarani | gn | Tupian, Tupi-Guarani | Chiruzzo et al. (2020); Wikipedia | | |
| Náhuatl | nah | Uto-Aztecan, Aztecan | Gutierrez-Vasques et al. (2016); Wikipedia | | |
| Otomí | oto | Oto-Manguean, Otomian | Hñähñu Online Corpus | | |
| Quechua | quy | Quechuan | Agic and Vuli ´ c´ (2019); Wikipedia | | |
| Rarámuri | tar | Uto-Aztecan, Tarahumaran | Brambila (1976) | | |
| Shipibo-Konibo | shp | Panoan | Galarreta et al. (2017); Bustamante et al. (2020) | | |
| Wixarika | hch | Uto-Aztecan, Corachol | Mager et al. (2018) | | |
| N/A N/A | Wikipedia | | | | |
## C Additional Results
ar bm bxr fo gv hsb ja kpv mt myv olo sa sme ug yue zh avg avg∆
LTSFT 70.8 43.1 49.2 68.2 60.0 73.7 36.9 50.5 74.6 65.9 66.4 49.5 58.0 36.4 51.1 59.8 57.1 - DISTILMBERT 65.7 34.4 42.3 63.0 52.8 67.6 32.1 42.2 65.4 58.6 59.6 44.1 51.2 29.2 47.0 56.1 50.7 -6.4
SCRATCH, LRF = 2 38.5 26.6 24.8 44.9 35.4 33.5 18.6 23.4 42.9 31.5 30.2 23.0 26.1 12.3 30.8 35.6 29.9 -27.2
BISTIL-ST, LRF = 2 68.0 41.6 45.7 66.3 56.6 70.9 34.1 **48.2** 71.0 **64.5 64.3** 48.9 **57.6 34.5** 49.4 56.7 54.9 -2.2
BISTIL-ST, LRF = 3 65.5 42.5 45.9 64.1 52.7 68.1 33.2 46.5 68.0 62.0 61.5 46.9 55.1 32.4 48.6 55.3 53.0 -4.1
BISTIL-TF, LRF = 2 **70.3** 43.4 46.8 **67.1 57.7 72.4 34.5** 47.6 **72.7** 64.2 62.6 **50.5** 57.4 32.3 **49.8 58.6 55.5 -1.6** BISTIL-TF, LRF = 3 67.0 **43.9 47.6** 65.1 54.2 70.0 33.4 44.2 69.7 62.3 61.8 49.2 55.1 33.3 48.9 56.5 53.9 -3.3
Table 9: DP UAS score
| ar | de | el | es | hi | ro | ru | th | tr | vi | zh | avg | avg∆ | |
|--------------------|------|------|------|------|------|------|------|------|------|------|-------|--------|-------|
| LTSFT | 73.0 | 80.5 | 78.6 | 80.6 | 74.3 | 82.4 | 77.8 | 69.7 | 72.2 | 76.5 | 68.9 | 75.9 | - |
| MINILMV2 | 66.4 | 75.5 | 72.4 | 76.6 | 69.6 | 78.3 | 74.0 | 63.8 | 67.6 | 73.3 | 64.6 | 71.1 | -4.8 |
| BISTIL-TF, LRF = 2 | 69.4 | 77.4 | 73.8 | 77.6 | 69.7 | 79.1 | 75.0 | 66.7 | 68.8 | 72.8 | 64.5 | 72.3 | -3.6 |
| BISTIL-TF, LRF = 3 | 62.4 | 70.7 | 63.3 | 74.7 | 61.4 | 73.4 | 68.7 | 54.3 | 62.9 | 63.0 | 60.4 | 65.0 | -10.9 |
Table 10: XQuAD F1 score
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
✓ A1. Did you describe the limitations of your work?
8
✗ A2. Did you discuss any potential risks of your work?
We do not consider there to be any significant apparent risks arising from the work.
✓ A3. Do the abstract and introduction summarize the paper's main claims?
1
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✓ **Did You Use Or Create Scientific Artifacts?** 4,5
✓ B1. Did you cite the creators of artifacts you used?
Throughout.
✗ B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
The license terms are easily accessible using the links provided and our usage was clearly in keeping with the terms.
✗ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
All use of existing artifacts was clearly in keeping with their intended purpose.
✗ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
We did not use sensitive data in our experiments.
✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
4
✗ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
Did not include \# of examples for space reasons, though this information is easily accessible. Would include in camera-ready.
## C ✓ **Did You Run Computational Experiments?** 4,5
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
5. Total computational budget is not stated as it is difficult to calculate exactly, but time for core experimental method is provided.
The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance.
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
4, A1.
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
5, A1.
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
3,4 D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
No response.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? No response.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
No response.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
No response.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
No response. |
mcnamee-duh-2023-extensive | An Extensive Exploration of Back-Translation in 60 Languages | https://aclanthology.org/2023.findings-acl.518 | Back-translation is a data augmentation technique that has been shown to improve model quality through the creation of synthetic training bitext. Early studies showed the promise of the technique and follow on studies have produced additional refinements. We have undertaken a broad investigation using back-translation to train models from 60 languages into English; the majority of these languages are considered moderate- or low-resource languages. We observed consistent gains, though compared to prior work we saw conspicuous gains in quite a number of lower-resourced languages. We analyzed differences in translations between baseline and back-translation models, and observed many indications of improved translation quality. Translation of both rare and common terms is improved, and these improvements occur despite the less natural synthetic source-language text used in training. | # An Extensive Exploration Of Back-Translation In 60 Languages
Paul McNamee and **Kevin Duh**
Human Language Technology Center of Excellence Johns Hopkins University [email protected] [email protected]
## Abstract
Back-translation is a data augmentation technique that has been shown to improve model quality through the creation of synthetic training bitext. Early studies showed the promise of the technique and follow on studies have produced additional refinements. We have undertaken a broad investigation using backtranslation to train models from 60 languages into English; the majority of these languages are considered moderate- or low-resource languages. We observed consistent gains, though compared to prior work we saw conspicuous gains in quite a number of lower-resourced languages. We analyzed differences in translations between baseline and back-translation models, and observed many indications of improved translation quality. Translation of both rare and common terms is improved, and these improvements occur despite the less natural synthetic source-language text used in training.
## 1 Introduction
Back-translation was applied to statistical machine translation at least as far back as 2009 (Bertoldi and Federico, 2009) with modest gains being reported.
Sennrich *et al.* (2016) applied back-translation in NMT and obtained gains of 2-3 BLEU for English/German and about 4 BLEU in Turkish to English. This renewed interest in back-translation and it became a popular technique used in WMT
evaluations, particularly in high-resource settings.
Research continued in back-translation, with a paper by Hoang *et al.* (2018) who studied iterative back-translation, where the reverse model is itself improved through back-translation. In lowresource scenarios they observed gains of about 1.5 BLEU, however, the marginal gain of repeated iterations is small. Many studies conducted experiments where a high resource language pair was sampled to artificially create a "low" resource dataset, however, we are concerned that such simulations are not a good proxy due to dissimilar scripts, atypical subject matter, and noisy training data common in low-resource bitext. A few studies have looked look at *bona fide* low-resource language pairs. One example is Xia *et al.* (2019) who found 3+ BLEU point gains in several languages, and even an 8 point gain in Azerbaijani to English.
Other influential works in back-translation include: Edunov *et al.* (2018) who investigated the optimal amount of monolingual data to use in highresource pairs; Imankulova *et al.* (2017) who examined filtering out lower-quality synthetic bitext pairs; Marie *et al.* (2020) who examined weighting synthetic exemplars differently than humanproduced bitext; and Edunov *et al.* (2020) and Graça *et al.* (2019) who studied use of sampling.
Our goal in this study is to reexamine the use of back-translation through extensive experimentation in moderately and low-resourced languages. We believe that this is the largest study to date in terms of the number of languages for which back-translation effectiveness has been analyzed. We describe our experimental setup in Section 2. In Section 3 we compare back-translation to a baseline model for 60 source languages. An analysis of these results is provided in Section 4. In Section 5 we examine the amount of synthetic data to use in six languages.
And in Section 6 we report on experiments using repeated back-translation in 13 languages.
## 2 Methods
In this section we describe model training and the evaluation datasets we use for evaluation.
## 2.1 Training
Neural machine translation models were trained with the Transformer (Vaswani et al., 2017) using Amazon's Sockeye (v2) toolkit (Apache-2.0)
(Hieber et al., 2020). Data was obtained from public sources, in particular, bitext downloadable from the OPUS portal (Tiedemann, 2012). Preprocessing steps included: running the Moses tokenizer; 8166
![1_image_0.png](1_image_0.png)
removal of duplicate lines; and, learning of subword units using the *subword-nmt* toolkit. Case was retained.
Key hyperparameters include: use of 6 layers in both encoder and decoder; 1,024 dimensional embeddings; 16 attention heads; 4,096 hidden units per layer; 30,000 subword byte pair encoding (BPE) unit, separately in source and target languages; batch size of 1,024; the Adam optimizer with an initial learning rate of 2×10−4. The models were thus trained with a straightforward implementation of the Transformer.
To perform back-translation we used monolingual English text from the web-crawled news portion of the Leipzig corpus1. This consisted of 7 million sentences of web-scraped news from 2014 to 2020.2 There are 1 million sentences available from each year. The training process with backtranslation is depicted in Figure 1. In Step 1 a reverse model is trained from the ultimate target language (here always English) to the ultimate source language. In Step 2 inference is performed using the reverse model on monolingual text. Finally, in Step 3 a forward model is trained, using a concatenation of the original human-produced bitext and the synthetic bitext from Step 2. This model is independently trained; the only difference compared to a baseline model (labelled 'Base' in Table 1 below) is that the training data has been supplemented.
## 2.2 Evaluation
The FLORES-101 (Goyal et al., 2022) dataset was created by Facebook Research using content from 1https://wortschatz.uni-leipzig.de/en/download 2Experiments in Sections 5 & 6 use slightly different data.
Wikipedia (*e.g.,* news, travel guides). Translations are from English into 100 other languages, with an emphasis on obtaining translations in lowerresourced languages. 3,001 sentences were split into test, *devtest*, and dev partitions; we report results on the 1012 sentence *test* set.
TICO-19 (Anastasopoulos et al., 2020) was created as a domain-specific test set to support customization and evaluation of translation models that would be useful during the SARS-COV-2 pandemic. English content from PubMed and various Wikipedia.org projects was translated into 9 higherresourced languages and 26 lower-resourced languages. The data is provided as *test* (2,100 sents)
and dev (971 sents) partitions, though we use all 3,071 sentences for testing. Translations are available in 19 of the 60 languages that we studied.
Samples of *flores101* and *tico19* sentences can be found in the Appendix. Translations were scored using case-insensitive BLEU scores (Papineni et al.,
2002) calculated with *sacrebleu* (Post, 2018).3
## 3 Results
Scores for the baseline models (Base) and for models trained using back-translation (BT) are shown in Table 1.
The top tier of languages experience gains of 1-2 BLEU points (∼ 4% relative gain); the middle tier sees gains averaging about 5 BLEU (23% relative gain); and, the least resourced languages see average gains of about 8 BLEU (70% relative gain).
Languages such as Burmese, Gujarati, Kannada, and Khmer attain *roughly double the score* of their 3BLEU+case.lc+numrefs.1+smooth.exp+tok.13a+version.1.4.14
| flores101 | tico19 | | | | | | | | | | | |
|-------------|---------------|--------|------|------|------|-------|---------|------|------|------|-------|---------|
| Code | Language | Bitext | M2M | Base | BT | ∆ | % | M2M | Base | BT | ∆ | % |
| heb | Hebrew | 33.2M | 37.9 | 44.0 | 45.4 | +1.4 | +3.2% | | | | | |
| srp | Serbian | 32.3M | 40.7 | 42.8 | 43.4 | +0.6 | +1.4% | | | | | |
| ind | Indonesian | 26.4M | 39.6 | 42.4 | 44.3 | +0.9 | +2.1% | 43.9 | 45.1 | 46.5 | +1.4 | +3.1% |
| slv | Slovenian | 25.2M | 33.4 | 35.3 | 36.3 | +1.0 | +2.8% | | | | | |
| slk | Slovak | 22.1M | 37.6 | 38.3 | 39.7 | +1.4 | +3.7% | | | | | |
| est | Estonian | 21.0M | 35.8 | 37.7 | 38.5 | +0.8 | +2.1% | | | | | |
| kor | Korean | 15.0M | 25.6 | 29.3 | 31.0 | +1.7 | +5.8% | | | | | |
| lit | Lithuanian | 14.9M | 32.6 | 33.0 | 35.0 | +2.0 | +6.1% | | | | | |
| vie | Vietnamese | 14.3M | 33.2 | 35.5 | 36.7 | +1.2 | +3.4% | | | | | |
| lav | Latvian | 14.2M | 34.3 | 34.9 | 37.8 | +2.9 | +8.3% | | | | | |
| fas | Farsi | 11.4M | 29.9 | 35.1 | 37.6 | +2.5 | +7.1% | 30.1 | 34.3 | 35.7 | +1.4 | +4.1% |
| bos | Bosnian | 10.8M | 37.6 | 39.0 | 41.2 | +2.2 | +5.6% | | | | | |
| swh | Swahili | 9.9M | 34.2 | 40.4 | 42.8 | +2.4 | +5.9% | 33.0 | 38.5 | 40.8 | +2.3 | +6.0% |
| ukr | Ukrainian | 9.0M | 36.3 | 36.9 | 39.3 | +2.4 | +6.5% | | | | | |
| hin | Hindi | 8.7M | 34.8 | 35.2 | 40.9 | +5.7 | +16.2% | 42.6 | 45.5 | 49.5 | +4.0 | +8.8% |
| tgl | Tagalog | 6.3M | 27.9 | 40.3 | 43.4 | +3.1 | +7.7% | 40.9 | 49.6 | 54.8 | +5.2 | +10.5% |
| msa | Malay | 6.1M | 39.4 | 35.9 | 39.6 | +3.7 | +10.3% | 45.6 | 41.2 | 45.3 | +4.1 | +10.0% |
| cat | Catalan | 5.2M | 43.4 | 40.3 | 43.5 | +3.2 | +7.9% | | | | | |
| isl | Icelandic | 5.0M | 29.5 | 31.3 | 33.7 | +2.4 | +7.7% | | | | | |
| mkd | Macedonian | 4.8M | 40.3 | 40.2 | 41.6 | +1.4 | +3.5% | | | | | |
| mlt | Maltese | 4.2M | - | 49.2 | 53.5 | +4.3 | +8.7% | | | | | |
| ben | Bengali | 4.0M | 28.6 | 27.3 | 33.3 | +6.0 | +22.0% | 33.9 | 30.7 | 37.8 | +7.1 | +23.1% |
| afr | Afrikaans | 3.0M | 52.7 | 52.8 | 53.8 | +1.0 | +1.9% | | | | | |
| xho | Xhosa | 3.0M | 18.5 | 30.9 | 35.4 | +4.5 | +14.6% | | | | | |
| zul | Zulu | 2.8M | 17.9 | 30.5 | 35.2 | +4.7 | +15.4% | 24.7 | 33.8 | 38.9 | +5.1 | +15.1% |
| sna | Shona | 2.5M | - | 20.5 | 23.4 | +2.9 | +14.1% | | | | | |
| gle | Irish | 2.4M | 1.2 | 34.4 | 37.6 | +3.2 | +9.3% | | | | | |
| hau | Hausa | 2.2M | 13.9 | 25.2 | 29.8 | +4.6 | +18.3% | 16.9 | 25.9 | 31.3 | +5.4 | + 20.8% |
| tam | Tamil | 1.7M | 10.8 | 20.1 | 28.6 | +8.5 | +42.3% | 11.6 | 19.7 | 29.4 | +9.7 | +49.2% |
| urd | Urdu | 1.7M | 24.6 | 23.6 | 29.4 | +5.8 | +24.6% | 26.0 | 26.5 | 31.1 | +4.6 | +17.4% |
| yor | Yoruba | 1.4M | 4.8 | 11.3 | 14.9 | +3.6 | +31.9% | | | | | |
| kat | Georgian | 1.4M | 16.1 | 17.5 | 22.3 | +4.8 | +27.4% | | | | | |
| mal | Malayalam | 1.3M | 22.9 | 19.0 | 31.7 | +12.7 | +66.8% | | | | | |
| azj | Azerbaijani | 1.2M | 8.7 | 13.5 | 18.5 | +5.0 | +37.0% | | | | | |
| jav | Javanese | 1.2M | 23.0 | 12.4 | 20.0 | +7.6 | +61.3% | | | | | |
| mar | Marathi | 1.1M | 23.5 | 17.9 | 29.1 | +11.2 | +62.6% | 24.0 | 19.5 | 30.0 | +10.5 | +53.8% |
| nya | Nyanja | 1.1M | - | 15.6 | 20.4 | +4.8 | +30.8% | | | | | |
| bel | Belarusian | 1.1M | 15.2 | 14.1 | 16.8 | +2.7 | +19.1% | | | | | |
| hye | Armenian | 983k | 22.1 | 25.4 | 32.7 | +7.3 | +28.7% | | | | | |
| amh | Amharic | 950k | 14.3 | 19.8 | 29.5 | +9.7 | +49.0% | | | | | |
| tel | Telegu | 908k | - | 23.6 | 35.9 | +12.3 | +52.1% | | | | | |
| npi | Nepali | 787k | 14.0 | 16.5 | 29.9 | +13.4 | +81.2% | 23.9 | 20.2 | 35.7 | +15.5 | +76.7% |
| som | Somali | 786k | 3.3 | 14.6 | 21.5 | +6.9 | +47.3% | 3.0 | 8.8 | 12.0 | +3.2 | +36.4% |
| cym | Welsh | 772k | 26.7 | 40.5 | 50.8 | +10.3 | +25.4% | | | | | |
| lin | Lingala | 768k | 4.0 | 11.6 | 19.2 | +7.6 | +65.5% | 6.5 | 9.2 | 15.9 | +6.7 | +72.8% |
| lug | Ganda | 768k | 4.0 | 6.3 | 11.2 | +4.9 | +77.8% | 8.6 | 9.3 | 15.9 | +6.6 | +71.0% |
| mya | Burmese | 734k | 8.4 | 10.3 | 19.8 | +9.5 | +92.2% | 12.6 | 11.1 | 19.9 | +8.8 | +79.3% |
| nso | Pedi | 718k | 4.0 | 21.8 | 31.8 | +10.0 | +45.9% | | | | | |
| glg | Galician | 692k | 38.2 | 33.5 | 37.0 | +3.5 | +10.4% | | | | | |
| ceb | Cebuano | 691k | 21.4 | 25.6 | 32.6 | +7.0 | +27.3% | | | | | |
| orm | Oromo | 667k | - | 4.5 | 7.3 | +2.8 | +62.2% | - | 5.6 | 9.6 | +4.0 | + 71.4% |
| kaz | Kazakh | 635k | 5.4 | 16.5 | 25.7 | +9.2 | +55.8% | | | | | |
| khm | Central Khmer | 634k | 14.3 | 10.5 | 19.8 | +9.3 | +88.6% | 21.4 | 14.6 | 26.1 | +11.5 | +78.8% |
| ibo | Igbo | 568k | 12.5 | 14.3 | 19.7 | +5.4 | +37.8% | | | | | |
| mon | Mongolian | 559k | 15.8 | 10.8 | 18.9 | +8.1 | +75.0% | | | | | |
| guj | Gujarati | 410k | 1.6 | 14.6 | 29.4 | +14.8 | +101.3% | | | | | |
| kan | Kannada | 390k | 0.8 | 8.3 | 18.7 | +10.4 | +125.3% | | | | | |
| tgk | Tajik | 386k | - | 9.7 | 17.0 | +7.3 | +75.3% | | | | | |
| pan | Panjabi | 326k | 16.3 | 16.4 | 27.4 | +11.0 | +67.1% | | | | | |
| kir | Kirghiz | 318k | - | 7.6 | 13.4 | +5.8 | +76.3% | | | | | |
| Table 1: BLEU scores on the flores101 and tico19 benchmarks for our baseline bilingual models (Base), backtranslation models trained with the addition of 7M back-translated English sentences (BT), and Facebook's M2M | | | | | | | | | | | | |
baseline models. While some languages just improve a poor model to a slightly less poor model
(*e.g.,* Oromo, 4.5 to 7.3, +62%), several cases are languages that move from a score of 10 to 15 to a score between 20 and 30, an adjustment from poor to good.
Across the different languages the gains on the tico19 benchmark track gains on the *flores101* test set. This indicates that we did not just get lucky in picking good monolingual data to use for backtranslation, since the synthetic bitext works well on both the news/travel text (*flores101*) and the health domain benchmark (*tico19*).
On both tests sets, in every instance, backtranslation conferred gains. There is a strong inverse relationship between the amount of training data used in the baseline model and the improvement in BLEU score with back-translation. This is clear in Figure 2 where the less-resourced language are plotted towards the left. It was not clear that models in the impoverished languages would improve given the questionable quality of their reverse models, yet large gains are indeed seen.
We ran a bootstrap resampling test (Koehn, 2004) comparing BT with Base: with the exception of Serbian, all BLEU improvements in BT are statistically significant (p < 0.05). This expands the observation of (Guzmán et al., 2019) which measured large BT gains for an earlier version of flores101 consisting of Nepali and Sinhala.
To give context to our baseline models we also report performance using the 1.2 billion parameter M2M100 model released by Facebook (Fan et al.,
2022), which was trained on 7.5 billion sentence pairs. Note that our bilingual models often outperform the multilingual M2M.
In Figure 3 we show examples of translations.
Consider the first example, about Portuguese explorer Vasco da Gama. In the Kazakh training data, the explorer's name never occurs, neither in Kazakh nor English. But in the synthetic bitext, the name appears eight times in the monolingual English, and it is correctly back-translated in Kazakh once, along with a couple of partially correct translations and errors. This is apparently enough to learn how to decode the name properly.
## 4 Analysis Of Results
We now provide various analyses to better understand the results in Section 3. Specifically, we are interested to learn why and how back-translation
![3_image_0.png](3_image_0.png)
## (Bt) Improves Upon The Baseline (Base). Are The Improvements In Bt Consistent Across
evaluation metrics? Yes. The histograms in Figure 4 summarize the translation quality in terms of BLEU, chrF (Popovic´, 2015), and TER (Snover et al., 2006). The BLEU plot corresponds to results in Table 1, and the rightward shift of the BT curve compared to the Base curve indicates the general improvements in BLEU. Both the chrF
plot and TER plot shows similar trends of increasing chrF score and decreasing TER score for BT.
The improvements are especially pronounced in the low chrF and high TER regions, consistent with our finding about BLEU improving most for lowresource languages.
## What Kinds Of Words Are Translated Correctly?
Figure 5 shows the precision/recall of out-ofvocabulary (OOV) and high-frequency words, calculated using the compare-mt tool (Neubig et al.,
2019). We define OOV words as words in the testset that do not occur in the training text of Base, while frequent words are those with over 1,000 occurrences. For this analysis, MT hypotheses and references in English were processed with the Moses tokenizer (Koehn et al., 2007). We observed improvements in both precision and recall on both classes of words. Figure 3 gave an example of improvement in OOV translation, but in Figure 5 we see that BT improves word precision and recall across the board. In fact, the high-frequency words
![4_image_0.png](4_image_0.png)
![4_image_1.png](4_image_1.png)
## Lead To The Most Bleu Gain.
We also conduct a word accuracy analysis that groups words by part-of-speech tags. The English reference and hypotheses are tagged with CoreNLP
(Manning et al., 2014), then the respective precision and recall values are calculated. We average over the 60 testsets and report the resulting F1 scores in Table 2. We note that the F1 measure increases across all parts-of-speech for BT, with the largest gains in nouns, particles, and verbs.
How accurate are the reverse models? Does the reverse model need to be highly accurate for back-translation to perform well? This is a question that is especially pertinent to low-resource conditions. We measure the accuracy of the reverse model that synthesized the 7 million lines of BT data. Since the reference is in a non-English lan-
POS Share F1 %
![4_image_2.png](4_image_2.png)
![4_image_3.png](4_image_3.png)
![4_image_4.png](4_image_4.png)
CC: coord. conjunction 3.3% 0.88 +3%
CD: card. number 1.8% 0.80 +6%
DT: determiner 9.5% 0.67 +7%
IN: preposition 12.3% 0.61 +8%
JJ∗: all adjectives 7.7% 0.56 +12%
MD: modal 1.1% 0.54 +8%
NN∗: all nouns 28.0% 0.61 +14%
PRP: personal pronoun 1.9% 0.61 +10%
RB∗:: all adverbs 4.3% 0.49 +9%
RP: particle 0.2% 0.29 +19%
TO: to 1.3% 0.72 +5%
VB∗: all verbs 14.5% 0.47 +14%
WDT: Wh-determiner 0.6% 0.46 +11%
WP: Wh-pronoun 0.2% 0.52 +13%
WRB: Wh-adverb 0.2% 0.56 +10%
All other tags 13.1% 0.78 +2%
![5_image_0.png](5_image_0.png)
guage, we compute BLEU using sentence-piece tokenization (spBLEU) for consistency of evaluation across languages. Figure 6 is a scatterplot where the x-axis is the BLEU score of a forward baseline model (*e.g.,* zul-eng) and the y-axis is the spBLEU
of the reverse model (*e.g.,* eng-zul) trained on the same bitext. For most language-pairs, we see a strong correlation between the two BLEU scores, which is reasonable because both forward and reverse models are trained on the same bitext. For about a fifth of the language pairs, reverse model spBLEU is significantly low (*e.g.,* in the range 010) compared to forward model BLEU. These are mostly models for Indian languages (tam, kan, mal)
or languages that may be challenging to segment
(mya, khm); nevertheless, the BT gains are still rather impressive in these languages. These results suggest that the reverse model does not need to be highly accurate for back-translation to be effective.
![5_image_1.png](5_image_1.png)
What does the BT bitext look like? We attempt to characterize the BT training data by comparing statistics on the foreign and English sides. Figure 7
(top) shows the out-of-vocabulary (OOV) rate (by word types) of the baseline bitext compared to the backtranslated bitext (which includes the baseline bitext). We observe that the English OOV rate on the *flores101* test set is on average 4.5% for Base, and this drops significantly to 1.5% for BT. This shows that the BT data improves coverage on the flores101 vocabulary. Previous work has shown that one explanation for back-translation's success is the improved coverage in domain mismatch conditions (Dou et al., 2020). We believe there is certainly some of this effect, but the improvements in both *flores101* and *tico19* imply that domain coverage is not the only reason for improvement.
The OOV rate on the foreign side presents an additional explanation. We use the Moses tokenizer and other language-specific tokenizers for this analysis. While the OOV rate on the foreign side is higher (10%), there is still considerable reduction by BT (7.5%). The only way for OOV rate to reduce on the foreign side is for the reverse model to generate via subword unit combinations new words that were previously not seen in the original bitext.
Finally, we train language models (4-gram *kenlm*
(Heafield, 2011)) on both sides of the bitext for Base and BT, and measure the perplexity on *flores101* validation set. Here we use subwords as tokens to ameliorate the presence of OOV words, which complicates perplexity calculations. Figure 7
(bottom) shows that perplexity of a 4-gram trained Base English text is approximately 110 on average, and it drops to 100 for a 4-gram trained on BT English text. Surprisingly, the perplexity increases on the foreign side, growing from 75 to 85.
For perplexity, these are minor differences, but we make some conjectures: (1) The small change in perplexity is likely due to the BT data being relatively broad domain; if the BT data were selected to be very similar to the test set, the perplexities would drop much more significantly. (2) The upward trend in perplexity for BT on the foreign side suggests that the synthesized foreign text might not be wholly natural (see Appendix D). These texts do not improve monolingual perplexity, yet when paired as bitext they do improve MT accuracy.
Summary: BT improvements over Base are measured on multiple metrics, and translation improves across the board on all word types. The reverse
![6_image_0.png](6_image_0.png)
model does not need to be highly accurate and the BT bitext (if broad-domain) does not need to be specifically matched to the test domain for BT to work effectively.
## 5 Monolingual Data Size
Some have studied the effect of the amount of
![6_image_1.png](6_image_1.png)
monolingual text used in creating synthetic bitext.
A common heuristic is to use a small multiple of the human-produced training bitext, for example two or three times the amount. We wanted to assess this ourselves, and we did this in six languages that varied in the amount of Base training data, from 300k lines of bitext up to 11 million.
In Section 3 we used 7 million sentences per language from the web-crawled English news portion of the Leipzig corpus. For these new experiments we expand to 14 million sentences from the years 2005 to 2020, training six additional models per language, each using differing amounts of monolingual text. When back-translation is used, we choose the most recent data up to our desired limit.
Figure 8 plots BLEU scores for six languages:
Bengali, Farsi, Hausa, Kazakh, Marathi, and Panjabi. They vary in the amount of Base bitext from about 300k lines (Panjabi) up to 11 million lines
(Farsi). At the left is the no back-translation condition, and proceeding left to right, larger amounts of synthetic bitext are used.
We make several observations from the plot.
First, consistent with Table 1, the three least resourced languages show the greatest gains. Second, even the smallest amount of synthetic data considered, 500k sentences, produced tangible benefit.
And third, the four rightmost conditions (*i.e.,* 4, 7, 10, and 14 million) are best, though there is little difference among them. Our earlier choice of 7 million sentences was felicitous.
We conclude that using even relatively small amounts of data can be effective, and that the risk of using too much data is low. For example, with the use of 14 million lines of synthetic bitext, the Panjabi model is using 40x more synthetic data than original human-produced bitext, and this still conveys large gains compared to using less data, and is nearly optimal compared to other choices.
## 6 Repeated Back-Translation
Earlier work in iterative back-translation (Hoang et al., 2018) showed small gains when first improving the reverse model, and then using that improved model to generate the final synthetic bitext.
It makes sense that an improved synthetic bitext should have fewer errors and lead to an ultimately better model. We decided to investigate this method in thirteen languages, using just one attempt to improve the reverse model. This requires monolingual text in the source language to create synthetic data for the reverse model. For non-English text we used data from the OSCAR 22.01 corpus (Abadji et al., 2022), which was filtered to remove possibly problematic text4and then performed sentence splitting using *ersatz* (Wicks and Post, 2021).
4Anything marked as adult, footer, header, noisy, short_sentences, or *tiny*.
![7_image_0.png](7_image_0.png)
Lang Bi/Monotext RBT ∆ RBT ∆
tam 1.7M 5.4M 28.6 +0.3 29.4 -0.3
urd 1.7M 3.4M 29.4 0.0 31.1 -0.2
kat 1.4M 4.3M 22.3 +1.3
azj 1.2M 4.0M 18.5 +0.4
amh 950k 143k 29.5 +0.8
tel 908k 1.7M 35.9 +0.8
mya 734k 339k 19.8 +0.7 19.9 +1.0
kaz 635k 3.0M 25.7 +2.0
khm 634k 171k 19.8 -0.9 26.1 +0.9
mon 559k 1.2M 18.9 +1.8
guj 410k 1.1M 29.4 +2.8
kan 390k 946k 18.7 +4.4
tgk 386k 1.7M 17.0 +3.3
Table 3: Results for repeated back-translation (RBT).
Resultant BLEU scores are shown for the *flores101* and
tico19 benchmarks, along with the change in BLEU
compared to the BT model from Table 1.
This is a somewhat less controlled experiment, as the amount of monolingual text in OSCAR
varies by the language. After the filtering mentioned above we used all of the remaining text.
Our results are shown in Table 3. On *flores101*,
positive gains were seen in 11 of 13 cases (a tie for Urdu; a loss in Khmer). Changes tended to be small, except in the lesser resourced languages, where gains of between 2.0 and 4.4 points were achieved. On *tico19*, the changes were relatively small, with two minor losses (Tamil and Urdu), and two gains of about a point (Burmese and Khmer).
Back-translation requires training two separate models, one after the other. However, with the extra step of improving the reverse model, we must train a third model. Based on these results, the added expense of improving the reverse model is likely only worthwhile for languages with less than one million lines of human-produced bitext.
## 7 Related Work
BT for low-resource languages: Most papers on this topic examines some aspect of BT with experiments on specific low-resource languages, e.g.:
Telegu (Dandapat and Federmann, 2018); Gujarati
(Bawden et al., 2019); Lithuanian, Gujarati (Xu et al., 2019); Tagalog, Swahili, Somali, Turkish
(Niu et al., 2019); Swahili (Sánchez-Martínez et al.,
2020); Bribri (Feldman and Coto-Solano, 2020);
Vietnamese (Li et al., 2020); Tamil, Inuktitut (Chen et al., 2020). Our contribution is orthogonal in that we have an expansive exploration over 60 moderate and low-resource languages.
Two recent survey papers on low-resource translation (Ranathunga et al., 2021; Haddow et al.,
2022) mention the importance of data augmentation and back-translation in particular, though neither highlights the outsized impact of backtranslation compared to higher resourced settings.
BT variants: Although we use only the most simple BT technique, there are many advanced variants that may be interesting as future work. In addition to the papers on sampling, filtering, and weighting mentioned in the introduction, BT can be improved with meta-learning (Pham et al., 2021), transliteration (Karakanta et al., 2018), data selection (Soto et al., 2020), tagging (Caswell et al., 2019), lexical/syntactic diversity (Burchell et al., 2022).
BT for multilingual models: We focus on bilingual models, but BT for multilingual models is an area of growing interest. Fan *et al.* (2022) observed consistent, yet small gains in multilingual models
(seemingly less than 2 BLEU, cf. their Figs. 4 &
6). Our experiments were exclusively bilingual and to-English, with larger gains in low-resource conditions, though direct comparison is not possible.
In a follow-on study (NLLB Team et al., 2022),
Meta develop a larger version of the FLORES data in 200 languages, and built a massively multilingual many-to-many model. As part of that wideranging work, they conducted experiments with back-translation (their Sec. 6.4.1). Their best results used statistical MT to generate the synthetic bitext. Consistent with our results in translation to English, they found gains largest in "very low" resource languages (50.9 vs. 46.1 chrF++), but using multilingual mixture-of-experts models.
## 8 Conclusions
By revisiting back-translation for an expansive list of 60 mid- and low-resource languages we have come to a better understanding of the landscape.
We found that:
- Back-translation improves performance in moderately resourced languages, but is significantly more effective in improving translation quality in low-resource languages with less than 1 million lines of training bitext.
- Translation of rare terms is improved due to increased lexical coverage in the synthetically generated bitext; however, translation of frequently occurring terms is also improved.
- Even when initial models are of low quality,
and the synthetic bitext contains noise, significant gains still occur.
- The risk of using too much synthetic data is low.
- Repeated back-translation imparts only minor gains, except in some of the least resourced cases we studied.
## Limitations
Aside from the reverse models used in backtranslation (which we did analyze in Section 4),
we only studied translation of language pairs into English. Using data augmentation techniques like back-translation where English is not the target language, or is neither the source or target language is certainly worthy of study, but was out of scope in the present work. We did however, include many source languages that are typologically different from English (see Table 8 in the Appendix).
In order to study the effectiveness of BT in a large number of languages we relied on extant multilingual datasets, namely *flores101* and *tico19*. The direction of human translation when building these datasets was from English into another language.
We did not run repeated trials on our experiments. Many models required training for a couple of GPU-weeks on V100s, and additional trials would have added significant computational expense. We believe the trends we have identified are sufficiently clear and supported by the statistical analysis in Section 4.
## Ethics Statement
Our goal in this work is to contribute to an understanding of how and when back-translation can be successfully employed when translating out of moderate- and low-resource languages. We believe that improving translation where English is the target language has utility both for its 1.5 billion L1 and L2 speakers globally, as well as for those non-English speakers whose content can be made accessible to additional communities.
State-of-the-art systems will make errors, including failing to resolve ambiguity, mistranslating proper names, hallucinations, subject-verb disagreement, among others. These errors could lead to harms if automated translations are used injudiciously by end users. Translation in lowresource conditions is inherently error-prone, however, based on our results, we believe that using back-translation will often lead to more robust translations.
## References
Julien Abadji, Pedro Ortiz Suarez, Laurent Romary, and Benoît Sagot. 2022. Towards a Cleaner DocumentOriented Multilingual Crawled Corpus. *arXiv eprints*, page arXiv:2201.06642.
Antonios Anastasopoulos, Alessandro Cattelan, ZiYi Dou, Marcello Federico, Christian Federmann, Dmitriy Genzel, Franscisco Guzmán, Junjie Hu, Macduff Hughes, Philipp Koehn, Rosie Lazar, Will Lewis, Graham Neubig, Mengmeng Niu, Alp Öktem, Eric Paquin, Grace Tang, and Sylwia Tur. 2020. TICO-19:
the translation initiative for COvid-19. In *Proceedings of the 1st Workshop on NLP for COVID-19 (Part* 2) at EMNLP 2020, Online. Association for Computational Linguistics.
Rachel Bawden, Nikolay Bogoychev, Ulrich Germann, Roman Grundkiewicz, Faheem Kirefu, Antonio Valerio Miceli Barone, and Alexandra Birch. 2019.
The University of Edinburgh's submissions to the WMT19 news translation task. In Proceedings of the Fourth Conference on Machine Translation (Volume 2: Shared Task Papers, Day 1), pages 103–115, Florence, Italy. Association for Computational Linguistics.
Nicola Bertoldi and Marcello Federico. 2009. Domain adaptation for statistical machine translation with monolingual resources. In *Proceedings of the Fourth* Workshop on Statistical Machine Translation, pages 182–189, Athens, Greece. Association for Computational Linguistics.
Laurie Burchell, Alexandra Birch, and Kenneth Heafield. 2022. Exploring diversity in back translation for low-resource machine translation. In *Proceedings of the Third Workshop on Deep Learning for* Low-Resource Natural Language Processing, pages 67–79, Hybrid. Association for Computational Linguistics.
Isaac Caswell, Ciprian Chelba, and David Grangier.
2019. Tagged back-translation. In *Proceedings of the* Fourth Conference on Machine Translation (Volume 1: Research Papers), pages 53–63, Florence, Italy.
Association for Computational Linguistics.
Peng-Jen Chen, Ann Lee, Changhan Wang, Naman Goyal, Angela Fan, Mary Williamson, and Jiatao Gu. 2020. Facebook AI's WMT20 news translation task submission. In *Proceedings of the Fifth Conference on Machine Translation*, pages 113–125, Online.
Association for Computational Linguistics.
Sandipan Dandapat and Christian Federmann. 2018. Iterative data augmentation for neural machine translation: a low resource case study for english–telugu.
In *Proceedings of the Conference of the European* Association for Machine Translation.
Zi-Yi Dou, Antonios Anastasopoulos, and Graham Neubig. 2020. Dynamic data selection and weighting for iterative back-translation. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 5894–5904, Online. Association for Computational Linguistics.
David M. Eberhard, Gary F. Simons, and Charles D.
Fennig, editors. 2021. Ethnologue: Languages of the World. SIL International, Dallas, Texas.
Sergey Edunov, Myle Ott, Michael Auli, and David Grangier. 2018. Understanding back-translation at scale. In *Proceedings of the 2018 Conference on* Empirical Methods in Natural Language Processing, pages 489–500, Brussels, Belgium. Association for Computational Linguistics.
Sergey Edunov, Myle Ott, Marc'Aurelio Ranzato, and Michael Auli. 2020. On the evaluation of machine translation systems trained with back-translation. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 2836–
2846, Online. Association for Computational Linguistics.
Angela Fan, Shruti Bhosale, Holger Schwenk, Zhiyi Ma, Ahmed El-Kishky, Siddharth Goyal, Mandeep Baines, Onur Celebi, Guillaume Wenzek, Vishrav Chaudhary, Naman Goyal, Tom Birch, Vitaliy Liptchinsky, Sergey Edunov, Edouard Grave, Michael Auli, and Armand Joulin. 2022. Beyond english-centric multilingual machine translation. J.
Mach. Learn. Res., 22(1).
Isaac Feldman and Rolando Coto-Solano. 2020. Neural machine translation models with back-translation for the extremely low-resource indigenous language Bribri. In *Proceedings of the 28th International Conference on Computational Linguistics*, pages 3965–
3976, Barcelona, Spain (Online). International Committee on Computational Linguistics.
Naman Goyal, Cynthia Gao, Vishrav Chaudhary, PengJen Chen, Guillaume Wenzek, Da Ju, Sanjana Krishnan, Marc'Aurelio Ranzato, Francisco Guzmán, and Angela Fan. 2022. The Flores-101 evaluation benchmark for low-resource and multilingual machine translation. *Transactions of the Association for* Computational Linguistics, 10:522–538.
Miguel Graça, Yunsu Kim, Julian Schamper, Shahram Khadivi, and Hermann Ney. 2019. Generalizing back-translation in neural machine translation. In Proceedings of the Fourth Conference on Machine Translation (Volume 1: Research Papers), pages 45–
52, Florence, Italy. Association for Computational Linguistics.
Francisco Guzmán, Peng-Jen Chen, Myle Ott, Juan Pino, Guillaume Lample, Philipp Koehn, Vishrav Chaudhary, and Marc'Aurelio Ranzato. 2019. The FLORES evaluation datasets for low-resource machine translation: Nepali–English and Sinhala–
English. In *Proceedings of the 2019 Conference on* Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 6098–6111, Hong Kong, China. Association for Computational Linguistics.
Barry Haddow, Rachel Bawden, Antonio Valerio Miceli Barone, Jindˇrich Helcl, and Alexandra Birch.
2022. Survey of low-resource machine translation.
Computational Linguistics, 48(3):673–732.
Kenneth Heafield. 2011. KenLM: Faster and smaller language model queries. In *Proceedings of the Sixth* Workshop on Statistical Machine Translation, pages 187–197, Edinburgh, Scotland. Association for Computational Linguistics.
Felix Hieber, Tobias Domhan, Michael Denkowski, and David Vilar. 2020. Sockeye 2: A toolkit for neural machine translation. In *Proceedings of the 22nd* Annual Conference of the European Association for Machine Translation, pages 457–458, Lisboa, Portugal. European Association for Machine Translation.
Vu Cong Duy Hoang, Philipp Koehn, Gholamreza Haffari, and Trevor Cohn. 2018. Iterative backtranslation for neural machine translation. In Proceedings of the 2nd Workshop on Neural Machine Translation and Generation, pages 18–24, Melbourne, Australia. Association for Computational Linguistics.
Aizhan Imankulova, Takayuki Sato, and Mamoru Komachi. 2017. Improving low-resource neural machine translation with filtered pseudo-parallel corpus.
In *Proceedings of the 4th Workshop on Asian Translation (WAT2017)*, pages 70–78, Taipei, Taiwan. Asian Federation of Natural Language Processing.
Alina Karakanta, Jon Dehdari, and Josef van Genabith.
2018. Neural machine translation for low-resource languages without parallel corpora. *Machine Translation*, 32:167–189.
Philipp Koehn. 2004. Statistical significance tests for machine translation evaluation. In Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing, pages 388–395, Barcelona, Spain. Association for Computational Linguistics.
Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, Ondˇrej Bojar, Alexandra Constantin, and Evan Herbst. 2007. Moses: Open source toolkit for statistical machine translation. In Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics Companion Volume Proceedings of the Demo and Poster Sessions, pages 177–180, Prague, Czech Republic. Association for Computational Linguistics.
Hongzheng Li, Jiu Sha, and Can Shi. 2020. Revisiting back-translation for low-resource machine translation between chinese and vietnamese. *IEEE Access*,
8:119931–119939.
Yin Lou, Rich Caruana, and Johannes Gehrke. 2012. Intelligible models for classification and regression. In Proceedings of the 18th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD '12, page 150–158, New York, NY, USA.
Association for Computing Machinery.
Christopher Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven Bethard, and David McClosky.
2014. The Stanford CoreNLP natural language processing toolkit. In *Proceedings of 52nd Annual Meeting of the Association for Computational Linguistics: System Demonstrations*, pages 55–60, Baltimore, Maryland. Association for Computational Linguistics.
Benjamin Marie, Raphael Rubino, and Atsushi Fujita.
2020. Tagged back-translation revisited: Why does it really work? In *Proceedings of the 58th Annual* Meeting of the Association for Computational Linguistics, pages 5990–5997, Online. Association for Computational Linguistics.
Graham Neubig, Zi-Yi Dou, Junjie Hu, Paul Michel, Danish Pruthi, and Xinyi Wang. 2019. compare-mt:
A tool for holistic comparison of language generation systems. In *Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics (Demonstrations)*,
pages 35–41, Minneapolis, Minnesota. Association for Computational Linguistics.
Xing Niu, Weijia Xu, and Marine Carpuat. 2019. Bidirectional differentiable input reconstruction for lowresource neural machine translation. In *Proceedings* of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics:
Human Language Technologies, Volume 1 (Long and Short Papers), pages 442–448, Minneapolis, Minnesota. Association for Computational Linguistics.
NLLB Team, Marta R. Costa-jussà, James Cross, Onur Çelebi, Maha Elbayad, Kenneth Heafield, Kevin Heffernan, Elahe Kalbassi, Janice Lam, Daniel Licht, Jean Maillard, Anna Sun, Skyler Wang, Guillaume Wenzek, Al Youngblood, Bapi Akula, Loic Barrault, Gabriel Mejia Gonzalez, Prangthip Hansanti, John Hoffman, Semarley Jarrett, Kaushik Ram Sadagopan, Dirk Rowe, Shannon Spruit, Chau Tran, Pierre Andrews, Necip Fazil Ayan, Shruti Bhosale, Sergey Edunov, Angela Fan, Cynthia Gao, Vedanuj Goswami, Francisco Guzmán, Philipp Koehn, Alexandre Mourachko, Christophe Ropers, Safiyyah Saleem, Holger Schwenk, and Jeff Wang.
2022. No language left behind: Scaling humancentered machine translation.
Harsha Nori, Samuel Jenkins, Paul Koch, and Rich Caruana. 2019. Interpretml: A unified framework for machine learning interpretability. arXiv preprint arXiv:1909.09223.
Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In *Proceedings of the* 40th Annual Meeting of the Association for Computational Linguistics, pages 311–318, Philadelphia, Pennsylvania, USA. Association for Computational Linguistics.
Hieu Pham, Xinyi Wang, Yiming Yang, and Graham Neubig. 2021. Meta back-translation. In *International Conference on Learning Representations*
(ICLR), Online.
Maja Popovic. 2015. ´ chrF: character n-gram F-score for automatic MT evaluation. In Proceedings of the Tenth Workshop on Statistical Machine Translation, pages 392–395, Lisbon, Portugal. Association for Computational Linguistics.
Matt Post. 2018. A call for clarity in reporting BLEU
scores. In Proceedings of the Third Conference on Machine Translation: Research Papers, pages 186–
191, Brussels, Belgium. Association for Computational Linguistics.
Surangika Ranathunga, En-Shiun Annie Lee, Marjana Prifti Skenduli, Ravi Shekhar, Mehreen Alam, and Rishemjit Kaur. 2021. Neural machine translation for low-resource languages: A survey.
Felipe Sánchez-Martínez, Víctor M. Sánchez-Cartagena, Juan Antonio Pérez-Ortiz, Mikel L. Forcada, Miquel Esplà-Gomis, Andrew Secker, Susie Coleman, and Julie Wall. 2020. An English-Swahili parallel corpus and its use for neural machine translation in the news domain. In *Proceedings of the 22nd Annual* Conference of the European Association for Machine Translation, pages 299–308, Lisboa, Portugal. European Association for Machine Translation.
Rico Sennrich, Barry Haddow, and Alexandra Birch.
2016. Improving neural machine translation models with monolingual data. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 86–96, Berlin, Germany. Association for Computational Linguistics.
Matthew Snover, Bonnie Dorr, Rich Schwartz, Linnea Micciulla, and John Makhoul. 2006. A study of translation edit rate with targeted human annotation. In Proceedings of the 7th Conference of the Association for Machine Translation in the Americas: Technical Papers, pages 223–231, Cambridge, Massachusetts, USA. Association for Machine Translation in the Americas.
Xabier Soto, Dimitar Shterionov, Alberto Poncelas, and Andy Way. 2020. Selecting backtranslated data from multiple sources for improved neural machine translation. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 3898–3908, Online. Association for Computational Linguistics.
Jörg Tiedemann. 2012. Parallel data, tools and interfaces in OPUS. In Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC'12), pages 2214–2218, Istanbul, Turkey. European Language Resources Association
(ELRA).
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Proceedings of the 31st International Conference on Neural Information Processing Systems, NIPS'17, page 6000–6010, Red Hook, NY,
USA. Curran Associates Inc.
Rachel Wicks and Matt Post. 2021. A unified approach
![12_image_0.png](12_image_0.png)
to sentence segmentation of punctuated text in many languages. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers),
pages 3995–4007, Online. Association for Computational Linguistics.
Mengzhou Xia, Xiang Kong, Antonios Anastasopoulos, and Graham Neubig. 2019. Generalized data augmentation for low-resource translation. In *Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics*, pages 5786–
5796, Florence, Italy. Association for Computational Linguistics.
Nuo Xu, Yinqiao Li, Chen Xu, Yanyang Li, Bei Li, Tong Xiao, and Jingbo Zhu. 2019. Analysis of backtranslation methods for low-resource neural machine translation. In *Natural Language Processing and* Chinese Computing: 8th CCF International Conference, NLPCC 2019, Dunhuang, China, October 9–14, 2019, Proceedings, Part II, page 466–475, Berlin, Heidelberg. Springer-Verlag.
## A Testset Examples
Examples from the *flores101* test partition are shown in Table 4. The text is rich in named-entities and multiword expressions. Examples from *tico19* are shown in Table 5. The language contains terminology specific to the medical and public health communities, and some texts are written in a scientific style.
## B Correlation Between Flores101 And Tico19
In Section 3 we mentioned that the relative gain on the public-health related *tico19* dataset tracked the improvement seen on *flores101*. Figure 9 is a scatterplot of the relative gains of both datasets.
We calculated Pearson's correlation coefficient to be 0.979.
## C **Can We Find Features That Quantitatively** Explain Bt Improvements?
We attempt to define features x for each languagepair and build a glassbox regression model to predict y, defined as the percentage improvement when comparing BT BLEU with Base BLEU (*e.g.,*
the column % in Table 1). The goal is to find explainable features that predict when BT improvement will be large or small. As a glassbox model, we use the Explainable Boosting Machine (EBM),
which is introduced in (Lou et al., 2012) and implemented in Nori et al. (2019):
$$g(y)=\beta_{0}+\sum_{j}f_{j}(x_{j})$$
$$\mathrm{(1)}$$
Here, g is a link function (identity for regression),
xj is a feature we manually define, and fj the shape function for feature xj that is learnt through bagging and gradient boosting. The advantage of EBM
over conventional linear regression is that the fj can be of arbitrary shape (leading to low meansquared error) and yet can be easily interpretable
(similar to decision trees).
We define the following features:
- train_token: number of tokens for training, in millions
- oov_type: the OOV rate, by type
- tt_ratio: type-to-token ratio, number of distinct word types divided by number of tokens
(computed on the testset)
- perplexity: perplexity of the aforementioned 4-gram language model
Each feature is prefixed with (en,fr) to indicate that it is computed on the English or foreign side, respectively. Additionally, each feature is suffixed with (1, 2) to indicate that it is computed on Base (1) or BT (2).
We have available only 60 "samples" for EBM:
a random 85% is used for fitting the EBM and The JAS 39C Gripen crashed onto a runway at around 9:30 am local time (0230 UTC) and exploded, closing the airport to commercial flights.
Around 11:29, the protest moved up Whitehall, past Trafalgar Square, along the Strand, passing by Aldwych and up Kingsway towards Holborn where the Conservative Party were holding their Spring Forum in the Grand Connaught Rooms hotel. Nadal's head to head record against the Canadian is 7–2.
Table 4: Several examples from *flores101*. Only the English text is shown.
In ca 14% cases, COVID-19 develops into a more severe disease requiring hospitalisation while the remaining 6% cases experience critical illness requiring intensive care. On 11 March 2020, the Director General of the World Health Organization (WHO) declared COVID-19 a pandemic.
Patients with severe respiratory symptoms have to be supported by extracorporeal membrane oxygenation (ECMO), a modified cardiopulmonary bypass technique used for the treatment of life-threatening cardiac or respiratory failure.
![13_image_0.png](13_image_0.png)
15% for test. While the sample size is small, the model is simple and the coefficient of determination (R2) on the test set is a reasonable 0.7. We show the EBM interpretation results in Figure 10.
According to this model, the en_train_token_2 and fr_oov_type_1 are the top two features for predicting the improvement in BLEU (y). A visualization of the the shape functions show that low values of en_train_token_2 lead to high score
(high y); this coincides with the previous observation that lower-resourced languages saw more improvements in BLEU. The shape function for fr_oov_type_1 shows an interesting step function at around 10, meaning that systems with a foreign word OOV rate greater than 10% had a large amount to gain in BLEU.
We should note that this EBM analysis only shows correlation, not causation.
## D Quality In Reverse Models
In Section 4 we mentioned that back-translation can still be effective despite significant noise in the reverse models. In fact, in some languages, significant numbers of exact match hallucinations are produced. Some frequently repeated lines from the Javanese synthetic bitext are listed in Table 6.
Detecting and filtering out implausible sentence pairs is one approach to mitigate this problem Imankulova et al. (2017), however, in our work we simply removed any duplicates, so that at most one spurious example remained instead of possibly thousands. Despite the residual noise, backtranslation is remarkably effective in these lowresource languages. In Table 7 we list the number of unique lines of back-translated text (*i.e.,* on the non-English side) for certain languages in which we observed this problem.
## E Computational Expense
Our computing infrastructure consisted of a mix of NVIDIA V100 32GB and A100 40GB machines.
We estimate that model training and decoding required 41,000 GPU-hours for the experiments reported in this paper. We are not able to estimate the actual carbon footprint incurred due to many factors involved, but we can estimate it for a given scenario as follows. If we take 250 watts (the rating for a V100), that is 10.25 MWh. If we assume a CO2e emission of 432 kg/MWh, we end up with:
$${\frac{10.25\;\mathrm{MWh}}{1}}\times{\frac{432\;\mathrm{kg}}{\mathrm{MWh}}}\times{\frac{1\;\mathrm{ton}}{907.19\;\mathrm{kg}}}=4.9\;\mathrm{tons}\quad(2)$$
8179
| Count | Sentence |
|-------------------------------------------------------------------------------------------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------|
| 6,784 | ]]]] iku kecamatan ing Kabupaten Sumba Tengah Propinsi Nusa Tenggara Wétan. GT: ]]]] is a district in Central Sumba Regency, East Nusa Tenggara Province. |
| 6,684 | Motorola C115 ya iku tilpun sélulèr kang diprodhuksi déning pabrikan Motorola. GT: Motorola C115 is a mobile phone produced by Motorola. |
| 1,528 | Kutha iki dumunung ing sisih kidul. GT: The city is located in the south. |
| 1,463 | Nokia N80 ya iku tilpun sélulèr kang diprodhuksi déning pabrikan Nokia. GT: Nokia N80 is a mobile phone produced by the manufacturer Nokia. |
| 1,269 | Kuwi sing paling penting banget. GT: That is the most important thing. |
| 1,246 | Kemangga iki racaké akèh tinemu ing Amérika Sarékat. GT: Many of these mangoes are found in the United States. |
| Table 6: Some commonly repeated lines in the Javanese synthetic bitext, with a English translation below obtained | |
Lang Uniq Base BT %
Indonesian 6,901,700 42.4 44.3 +1.4% Oromo 6,746,709 4.5 7.3 +62.2% Kannada 6,716,601 8.3 18.7 +125.3% Javanese 6,446,456 12.4 20.0 +61.3%
Tajik 6,018,847 9.7 17.0 +75.3%
Table 7: Number of unique lines (foreign side) of
synthetic bitext. In total, 6,920,211 lines were backtranslated. Little duplication is present in the Indonesian
data, but the problem is significant in Oromo, Kannada,
Javanese, and and Tajik. Base and BT BLEU scores and
relative improvement are from Table 1.
Further, if we assume the data center power usage effectiveness (PUE) is 1.5 and there are no additional offsets for renewable energy, the CO2e emission might be 4.9 × 1.5 = 7.35 tons.
Our Transformer models average about 275 million parameters.
## F Language Properties
Table 8 lists some of the properties of the languages investigated in this work.
Code Language Family Script Speaker Example Region Type: MorphoSyntax, Phonology, etc.
![15_image_0.png](15_image_0.png)
![15_image_1.png](15_image_1.png)
![15_image_2.png](15_image_2.png)
![15_image_3.png](15_image_3.png)
![15_image_4.png](15_image_4.png)
![15_image_5.png](15_image_5.png)
heb Hebrew Afro-Asiatic, Semitic Hebrew 9.4m Israel SVO, 22c/5v/4d srp Serbian Indo-European, Balto-Slavic Cyrillic 10.3m Serbia SVO, 7 cases, 25c/5v ind Indonesian Austronesian, Malayo-Polynesian Latin 199.0m Indonesia SVO, 19c/6v/3d slv Slovenian Indo-European, Balto-Slavic Latin 2.2m Slovenia SVO, 6 cases, 21c/8v/2d slk Slovak Indo-European, Balto-Slavic Latin 7.2m Slovakia SVO, 6 case, 27c/10v/4d est/ekk Estonian Uralic, Finnic Latin 1.2m Estonia SVO, 14 cases kor Korean Koreanic Hangul 81.5m South Korea SOV, 6 cases, 21c/8v/12d lit Lithuanian Indo-European, Balto-Slavic Latin 2.9m Lithuania SVO, 6 cases, 37c/10v, tonal vie Vietnamese Austro-Asiatic, Mon-Khmer Latin 76.8m Vietnam SVO, 25c/11v/20d, 6 tones lav/lvs Latvian Indo-European: Balto-Slavic Latin 2.0m Latvia SVO, 5 case, 25c/11v/5d fas/pes Farsi Indo-European, Indo-Iranian Arabic 74.2m Iran SOV, 23c/6v bos Bosnian Indo-European, Balto-Slavic, Slavic Latin 2.7m Bosnia&Herzegovina SVO, 7 cases, 25c/5v swh Swahili Niger-Congo, Atlantic-Congo Latin 69.2m Tanzania SVO, 18 noun classes ukr Ukrainian Indo-European, Balto-Slavic Cyrillic 33.2m Ukraine SVO, 7 cases, 30c/6v hin Hindi Indo-European, Indo-Iranian Devanagari 600.5m India SOV, 30c/10v/2d tgl Tagalog Austronesian, Malayo-Polynesian Latin 25.7m Philippines VSO, 16c/5v msa/zsm Malay Austronesian, Malayo-Polynesian Latin 81.6m Malaysia SVO cat Catalan Indo-European, Italic, Romance Latin 9.2m Spain SVO, 22c/7v/4d isl Icelandic Indo-European, Germanic Latin 0.3m Iceland SVO, 4 cases, 20c/8v/5d mkd Macedonian Indo-European, Balto-Slavic Cyrillic 1.7m North Macedonia SVO, 26c/5v mlt Maltese Afro-Asiatic, Semitic Latin 0.5m Malta SVO, 23c10v8d ben Bengali Indo-European, Indo-Iranian Bengali 267.7m Bangladesh SOV, 5 cases, 35c/5v afr Afrikaans Indo-European, Germanic Latin 17.6m South Africa SVO, sometimes SOV, 20c/16v/9d xho Xhosa Niger-Congo, Atlantic-Congo Latin 19.2m South Africa SVO, 17 noun classes, 58c/10v, 2 tones zul Zulu Niger-Congo, Atlantic-Congo Latin 27.8m South Africa SVO, 13 noun classes, 30c/10v sna Shona Niger-Congo, Atlantic-Congo Latin 9.0m Zimbabwe SVO, 13 noun classes, 31c/5v/2d, 2 tones gle Irish Indo-European, Celtic Latin 1.2m Ireland VSO, 3 cases, 32c/11v/4d hau Hausa Afro-Asiatic, Chadic Latin 74.9m Nigeria SVO, 33c/10v/2d, 2 tones tam Tamil Dravidian, Southern Tamil 85.5m India SOV, 8 cases, 18c/10v/2d urd Urdu Indo-European, Indo-Iranian Arabic 230.1m Pakistan SOV, 30c/20v/2d yor Yoruba Niger-Congo, Atlantic-Congo Latin 43.0m Nigeria SVO, 17c/11v, 3 tones kat Georgian Kartvelian, Georgian Georgian 3.9m Georgia SOV, 18 cases 27c/5v mal Malayalam Dravidian, Southern Malayalam 37.9m India SOV, 7 cases 37c/11v/4d azj Azerbaijani Turkic, Southern Latin 9.2m Azerbaijan SOV, 6 cases, 24c/9v jav Javanese Austronesian, Malayo-Polynesian Latin 68.3m Indonesia SVO, 21c/8v mar Marathi Indo-European, Indo-European Devanagari 99.1m India SOV, 7 cases, 37c/8v/2d nya Nyanja Niger-Congo, Atlantic-Congo Latin 14.4m Malawi SVO bel Belarusian Indo-European, Balto-Slavic, Slavic Cyrillic 3.9m Belarus SVO, 6 cases, 37c/6v hye Armenian Indo-European, Armenian Armenian 3.8m Armenia SVO, 7 cases, 30c/7v amh Amharic Afro-Asiatic, Semitic Ethiopic 57.4m Ethiopia SOV 4 cases, 27c/7v tel Telegu Dravidian, South Central Telegu 95.6m India SOV, 7 cases, 21c/11v npi Nepali Indo-European, Indo-Iranian Devanagari 24.7m Nepal SOV, 11 noun classes, 4 cases, 29c/11v som Somali Afro-Asiatic, Cushitic Latin 21.9m Somalia SOV, 22c/10v, 3 tones cym Welsh Indo-European, Celtic Latin 0.6m United Kingdom VSO, 23c/12v/8d lin Lingala Niger-Congo, Atlantic-Congo Latin 2.3m D.R. Congo SVO, 12 noun classes, 16c/5v, 2 tones lug Ganda Niger-Congo, Atlantic-Congo Latin 11.0m Uganda SVO mya Burmese Sino-Tibetan, Tibeto-Burman Myanmar 43.0m Myanmar SOV, 31c/8v/4d, 3 tones nso Pedi Niger-Congo, Atlantic-Congo Latin 13.7m South Africa SVO glg Galician Indo-European, Italic, Romance Latin 3.1m Spain SVO ceb Cebuano Austronesian, Malayo-Polynesian Latin 15.9m Phillippines VSO, 16c/3v/4d orm/gaz Oromo Afro-Asiatic, Cushitic Latin 19.2m Ethiopia SOV, 7 cases, 25c/10v kaz Kazakh Turkic, Western Cyrillic 13.2m Kazakhstan SOV, 7 cases, 18c/9v khm Central Khmer Austro-Asiatic, Mon-Khmer Khmer 17.9m Cambodia SVO, 21c/17v/13d ibo Igbo Niger-Congo, Atlantic-Congo Latin 29.0m Nigeria SVO, 37c8v, 3 tones mon/khk Mongolian Mongolic, Eastern Cyrillic 2.7m Mongolia SOV, 7 cases, 29c/14v/4d guj Gujarati Indo-European, Indo-Iranian Gujarati 61.9m India SOV, 6 cases, 31c/8v/2d kan Kannada Dravidian, South Kannada 58.6m India SOV, 7 cases, 22c/20v/2d tgk Tajik Indo-European, Indo-Iranian Cyrillic 8.1m Tajikistan SOV, 27c/6v pan Panjabi Indo-European, Indo-Iranian Gurmukhi 52.2m India SOV, 7 cases, 15c/24v, 3 tones kir Kirghiz Turkic, Western Cyrillic 5.4m Kyrgyzstan SOV, 7 cases, 19c/8v
![15_image_6.png](15_image_6.png)
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
✓ A1. Did you describe the limitations of your work?
Limitations section (unnumbered, before references).
✓ A2. Did you discuss any potential risks of your work?
Ethics section (unnumbered, before references).
✓ A3. Do the abstract and introduction summarize the paper's main claims?
Abstract & 1.
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✓ **Did You Use Or Create Scientific Artifacts?** 2
✓ B1. Did you cite the creators of artifacts you used?
2
✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
2
✗ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
We are not releasing any artifacts. Our use of open source software was consistent with its intended use.
✗ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
We did not collect any data. We used existing open source datasets (i.e., OPUS bitext).
✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
Appendix F.
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
2
## C ✓ **Did You Run Computational Experiments?** 3, 5, & 6.
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
Appendix E.
The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance.
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
2
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
4
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
2 & 4 D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
No response.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? No response.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
No response.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
No response.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
No response. |
zhou-etal-2023-aom | {A}o{M}: Detecting Aspect-oriented Information for Multimodal Aspect-Based Sentiment Analysis | https://aclanthology.org/2023.findings-acl.519 | Multimodal aspect-based sentiment analysis (MABSA) aims to extract aspects from text-image pairs and recognize their sentiments. Existing methods make great efforts to align the whole image to corresponding aspects. However, different regions of the image may relate to different aspects in the same sentence, and coarsely establishing image-aspect alignment will introduce noise to aspect-based sentiment analysis (i.e., visual noise). Besides, the sentiment of a specific aspect can also be interfered by descriptions of other aspects (i.e., textual noise). Considering the aforementioned noises, this paper proposes an Aspect-oriented Method (AoM) to detect aspect-relevant semantic and sentiment information. Specifically, an aspect-aware attention module is designed to simultaneously select textual tokens and image blocks that are semantically related to the aspects. To accurately aggregate sentiment information, we explicitly introduce sentiment embedding into AoM, and use a graph convolutional network to model the vision-text and text-text interaction. Extensive experiments demonstrate the superiority of AoM to existing methods. | # Aom: Detecting Aspect-Oriented Information For Multimodal Aspect-Based Sentiment Analysis
Ru Zhou1 Wenya Guo1∗ Xumeng Liu1 **Shenglong Yu**1 Ying Zhang1 **Xiaojie Yuan**1 1 College of Computer Science, TKLNDST, Nankai University, Tianjin, China
{zhouru,guowenya,liuxumeng,yushenglong,zhangying}@dbis.nankai.edu.cn [email protected]
## Abstract
Multimodal aspect-based sentiment analysis
(MABSA) aims to extract aspects from textimage pairs and recognize their sentiments. Existing methods make great efforts to align the whole image to corresponding aspects. However, different regions of the image may relate to different aspects in the same sentence, and coarsely establishing image-aspect alignment will introduce noise to aspect-based sentiment analysis (*i.e.*, visual noise). Besides, the sentiment of a specific aspect can also be interfered by descriptions of other aspects (*i.e.*, textual noise). Considering the aforementioned noises, this paper proposes an Aspect-oriented Method (AoM) to detect aspect-relevant semantic and sentiment information. Specifically, an aspect-aware attention module is designed to simultaneously select textual tokens and image blocks that are semantically related to the aspects. To accurately aggregate sentiment information, we explicitly introduce sentiment embedding into AoM, and use a graph convolutional network to model the vision-text and text-text interaction. Extensive experiments demonstrate the superiority of AoM to existing methods. The source code is publicly released at https://github.com/SilyRab/AoM.
## 1 Introduction
As an important and promising task in the field of sentiment analysis, Multimodal Aspect-Based Sentiment Analysis (MABSA) has attracted increasing attention (Lv et al., 2021; Ju et al., 2021). Given an image and corresponding text, MABSA is defined as jointly extracting all aspect terms from imagetext pairs and predicting their sentiment polarities
(Ju et al., 2021).
In this scenario of fine-grained sentiment recognition for multimodal information, the input imagetext pairs are always complex. (1) The semantics of sentence is complex which adds sentiment confusion among different aspects. Take Figure 1 as an
∗Corresponding author.
![0_image_0.png](0_image_0.png)
Figure 1: An example of MABSA task, including the aspects, their corresponding descriptions, and sentiments.
example, there are 3 aspects in the sentence with 3 different sentiments, The sentiment of "mayor" can be easily confused by the keyword, "Interesting",
which is of positive sentiment. (2) The images contain a large amount of detailed information, and the visual contents are usually related to only one or several of the aspects. For example, as shown in Figure 1, the objects in red boxes are more helpful in analyzing the sentiment of "Mayor Kadokawa" than the other aspects. The complex input greatly challenges the recognition of aspect-based sentiment.
Considering the multimodal input, existing methods are typically dedicated to associated visual and textual contents (Ju et al., 2021; Ling et al., 2022; Yang et al., 2022). Ju et al. (2021) uses imagetext relation to evaluate the contribution of visual contents to aspect sentiment, based on which to determine whether the image is involved in sentiment analysis. Ling et al. (2022) and Yang et al. (2022)
align visual representations of objects and their attributes with corresponding textual contents. To summarize, the whole image is directly associated with textual content in these methods. Intuitively, without aligning image blocks to corresponding aspects, the coarse whole-image-text association can introduce aspect-irrelevant visual noise, which further hinders aspect sentiment analysis. In addition, the performance can be further impacted by the textual noise from the confusion among different aspects.
In this paper, we propose an Aspect-oriented Method (AoM) to mitigate aforementioned noises from both image and text. AoM can detect aspectrelevant information from perspectives of both semantics and sentiment. There are two key modules in AoM: Aspect-Aware Attention Module (A3M)
for semantically fine-grained image-text alignment and Aspect-Guided Graph Convolutional Network
(AG-GCN) for sentiment information aggregation.
In A3M, we first extract aspect features associated with each visual and textual token. And then aspectrelevant token representations are computed based on their relevance to the corresponding aspect features. In AG-GCN, we first explicitly add sentiment embeddings to the obtained representations of visual and textual tokens. A multimodal weightedassociation matrix is constructed containing aspectto-image-block similarity and word-to-word dependency. Then we use a graph convolutional network to aggregate sentiment information according to the constructed multimodal matrix.
The contributions can be summarized as follows:
(1) We propose an aspect-oriented network to mitigate the visual and textual noises from the complex image-text interactions.
(2) We design an aspect-aware attention module and an aspect-guided graph convolutional network to effectively detect aspect-relevant multimodal contents from the perspectives of semantic and sentiment, respectively.
(3) Experiments on two benchmark datasets, including Twitter2015 and Twitter2017, show that our approach generally outperforms the state-ofthe-art methods.
## 2 Related Work
In this section, we review the existing methods for both ABSA and MABSA.
## 2.1 Aspect-Based Sentiment Analysis
In the past few years, Aspect-Based Sentiment Analysis (ABSA) in the textual fields has attracted much attention and gained mature research (Chen and Qian, 2020; Oh et al., 2021; Xu et al., 2020). On the one hand, most recent works are based on the pre-trained language model BERT because of its remarkable performance in many NLP tasks
(Liang et al., 2022a). On the other hand, some recent efforts focus on modeling the dependency relationship between aspects and their corresponding descriptions, in which graph convolutional networks (GCNs) (Chen et al., 2022; Liang et al.,
2022b, 2020; Li et al., 2021a; Pang et al., 2021)
or graph attention networks (GATs) (Yuan et al.,
2020) over dependency with the syntactic structure of a sentence are fully exploited.
## 2.2 Multimodal Aspect-Based Sentiment Analysis
With the enrichment of multimodal users' posts in social media, researchers find that images offer great supplementary information in aspect term extraction (Wu et al., 2020a; Zhang et al., 2018; Asgari-Chenaghlu et al., 2021) and sentiment analysis (Wu et al., 2022; Li et al., 2021b; Hazarika et al.,
2020; Cai et al., 2019). Thus, Multimodal Aspectbased Sentiment Analysis (MABSA) begins to be widely studied. MABSA task can be divided into two independent sub-tasks, i.e., Multimodal Aspect Term Extraction (MATE) and Multimodal Aspectoriented Sentiment Classification (MASC). The former extracts all aspect terms in the sentence at the prompt of the image, and the latter predicts the sentiment polarities for the aspects.
Ju et al. (2021) first realizes MABSA in a unified framework and designs an auxiliary cross-modal relation detection to control whether the visual information will be used in prediction. For capturing cross-modal alignment, Ling et al. (2022) constructs a generative multimodal architecture based on BART for both vision-language pre-training and the downstream MABSA tasks. Yang et al. (2022)
dynamically controls the contributions of the visual information to different aspects via the trick that the lower confidence of the results predicted by purely textual is, the more contributions from images will be considered.
On the one hand, the above methods ignore the alignment of fine-grained visual blocks and the corresponding aspects, which introduce irrelevant visual noise. On the other hand, modeling of syntax dependency and sentiment information for aspect descriptions is absent in these methods, which is proved important in sentiment analysis (Liang et al., 2022a; Kalaivani et al., 2022; Xu et al., 2022).
To tackle the aforementioned issues, we propose an aspect-oriented model consisting of AspectAware Attention Module and Aspect-Guided Graph Convolutional Network which respectively work to capture semantic information by fine-grained
![2_image_0.png](2_image_0.png)
image-text alignment and effectively aggregate aspect-relevant sentiment information.
## 3 Methodology 3.1 Overview
Task Definition. Formally, given a tweet that contains an image V and a sentence with n words S = (w1, w2*, ..., w*n), our goal is to acquire the sequence Y representing all aspects and their associated sentiment polarities. We formulate the output of MABSA as Y =
[a s1
, ae1
, s1*, ..., a*s i
, ae i
, si*, ...a*sk
, aek
, sk], where a s i
, a e i and si depict the start index, end index of the i-th aspect and its sentiment polarity in the tweet, and k is the number of aspects.
Model preview. Figure 2 shows the overview of our model architecture, which builds on an encoder-decoder architecture based on BART
(Lewis et al., 2019). Between the encoder and the decoder of BART, we creatively implement the Aspect-Aware Attention Module (A3M) and Aspect-Guided Graph Convolutional Network (AGGCN) to align the textual aspect to its associated visual blocks and textual description, simultaneously mitigate interference both from semantics and sentiment among different aspects. In the following subsections, we will illustrate the details of
## The Proposed Model.
Feature Extractor. The initial word embeddings are obtained from the pre-trained BART due to its excellent ability of textual representation. The embeddings of visual blocks are obtained by preprocessing via ResNet (Chen et al., 2014) following
(Yu et al., 2019). We consider every feature of a visual block or word token as an atomic feature. We add <img> and </img> before and after the visual features, <bos> and <eos> for the textual features.
Then, we concatenate the multimodal features as X which is the input of BART encoder.
We can get the multimodal hidden state H =
{h V
0
, ...hV
i
, ...hVm, hT
0
, ..., hT
j
, ...hTn } with m visual blocks and n words, where h V
iand h T
jrefer to features of the i-th visual block and the j-th word in the sentence.
## 3.2 Aspect-Aware Attention Module (A3M)
Since aspects are not specially modeled by BART
encoder, we creatively design the Aspect-Aware Attention Module (A3M) aiming to capture aspectrelevant semantic information. For this purpose, we align the multimodal information of target objects and filter out the semantic noise from images.
First, as aspects are usually noun phrases from the sentences, we extract those phrases as the candidate aspects (CA) with the NLP tool Spacy1.
And from the hidden state H of the BART encoder, we obtain the features of all candidate aspects denoted as HCA = {h CA
1, ..., hCA
i*, ..., h*CA
l}, where l is the number of noun phrases in the sentence. To get the relationship between candidate aspects and atomic features, we implement an attention-based mechanism guided by the candidate aspects. Given the t-th hidden feature ht, its attention distribution αt over k candidate aspects is obtained by:
$$Z_{t}=tanh((W_{CA}H^{CA}+b_{CA})\oplus(W_{H}h_{t}+b_{H})),\tag{1}$$ $$\alpha_{t}=softmax(W_{\alpha}Z_{t}+b_{\alpha}),\tag{2}$$
where Zt ∈ R
2d×kis the comprehensive feature extracted from both the candidate aspects and the hidden states. HCA ∈ R
d×k denotes the features of candidate aspects. WCA ∈ R
d×d, WH ∈ R
d×d, Wα ∈ R
1×2d, bCA, bH and bα are the learned parameters.⊕ is an operator between a matrix and a vector, where the vector is repeated into the appropriate size to concatenate with the matrix. We then get the aspect-related hidden feature h A
t by calculating the weighted sum of all candidate aspects following the below equation:
$$h_{t}^{A}=\sum_{i}^{k}\alpha_{t,i}h_{i}^{C A}.$$
For example, if a visual block is strongly associated with the j-th aspect, the corresponding αt,j is approximately 1. h A
t would be equal to the aspect semantically. And if the visual block is not related to any specific candidate aspects, both αt and h A t would be zero-like vectors of no information.
Considering that not every visual block can be used for prediction, βtis learned to add up the atomic feature ht and its aspect-related hidden feature h A t
. Details are as follows:
$$\beta_{t}=sigmoid(W_{\beta}[W_{1}h_{t};W_{2}h_{t}^{A}]+b_{\beta}),\tag{4}$$ $$\hat{h_{t}}=\beta_{t}h_{t}+(1-\beta_{t})h_{t}^{A},\tag{5}$$ where $W_{\beta}$, $W_{1}$, $W_{2}$, $b_{\beta}$ are parameters, and $[;]$
denotes the concatenation operator for vectors.
hˆt ∈ Hˆ is the final output of A3M after the semantic alignment and the noise reduction procedure.
Thus we get the noiseless and aligned information for every atomic feature.
Pre-training To align the two modalities and reduce noise, we conduct a pre-training task in A3M.
1Spacy: https://spacy.io/
![3_image_0.png](3_image_0.png)
Specifically, we detect the image-text relationship on the datasets TRC (Vempala and Preo¸tiuc-Pietro, 2019) as illustrated by Figure 3. We first obtain the average feature of image blocks from the output of A
3M and then pass it to a fully connected softmax layer, which outputs a probability distribution over whether the image is related to the text. Finally, we use cross entropy loss to train our model.
$$(3)$$
## 3.3 Aspect-Guided Graph Convolutional Network (Ag-Gcn)
The aspect-focused interaction between visual modality and textual modality in A3M concentrates on the context semantics, and that is not adequate for MABSA. Sentiment interference among different aspects still exists and influences sentiment prediction. Thus, we design the Aspect-Guided Graph Convolutional Network (AG-GCN) module to introduce external sentiment information and mitigate emotional confusion among different aspects to a certain extent.
Specifically, for word wiin the sentence, we gain its affective score w S
ifrom SenticNet (Ma et al.,
2018) and project it to the space with the same dimension as h A
t
, with si obtained. Then we add the sentiment feature sito the output of A3M:
$$w_{i}^{S}=SentiNet(w_{i}),\tag{6}$$ $$s_{i}=W_{S}w_{i}^{S}+b_{S},\tag{7}$$ $$h_{i}^{S}=\hat{h}_{i}+s_{i},\tag{8}$$ where $W_{S}$, $b_{S}$ are the learned parameters. $h_{i}^{S}$ is the
feature with affective knowledge.
Next, we build a boolean dependency matrix
D among visual blocks and words. First, for the
word-to-word part, submatrix DT T representing
about takes Kyoto a someone you . to the mayor see
![4_image_0.png](4_image_0.png)
and
the dependency tree2 of the input sentence like Figure 4. If two words can be associated within two generations, the element of DT T would be set to 1, otherwise 0 instead. For example, "Kyoto" is associated with "bit" (child),"a" (grandchild),"about"
(father) and "Complain" (grandfather). Second, the visual dependency submatrix DV V is initialized as a diagonal matrix. And as for the word-imageblock dependency, denoted as DT V and equaled to DT
V T , we set all the elements in the i-th line of DT V to 1 if the i-th word is an aspect, otherwise 0.
And the matrix D is defined as:
$$D=\begin{bmatrix}D_{V V}&D_{V T}\\ D_{T V}&D_{T T}\end{bmatrix},\qquad\qquad(9)$$
Considering the different importance of different dependencies, we attach weights onto D with cosine similarity among hˆi as follows:
$$A_{ij}=D_{ij}F_{cosine\_similarity}(\hat{h_{i}},\hat{h_{j}}),\tag{10}$$ where both $D,A\in\mathbb{R}^{(m+n)\times(m+n)}$, and $A$ is the weighted association matrix.
AG-GCN takes HSfrom Eq.8 as initial node representations in the graph. For the i-th node at the l-th layer, the hidden state h S
i,l is updated by the following equation:
$$h_{i,l}^{S}=R e L U(\sum_{j=1}^{n}A_{i j}W_{l}h_{i,l-1}^{S}+b_{l}),\quad\quad(11)$$
where Wl,bl are learned parameters and we use ReLU as activation function. Significantly, h S i,0 is equal to h S i
. Accordingly, we get the final output Hˆ Sfrom the last GCN layer which is rich in sentiment information. Every underlying aspect aggregates its relevant information from both the image-text pair. Moreover, sentiment confusion of different aspects is weakened because the association matrix makes little interference among different aspects.
2We use spaCy toolkit to construct the dependency tree referring from https://spacy.io
| Twitter2015 | Twitter2017 | |
|---------------------------|----------------|----------------|
| #sentence | 3,502 | 2,910 |
| #with one aspect | 2,159 (61.65%) | 976 (33.54%) |
| #with multiple aspects | 1,343 (38.35%) | 1,934 (66.46%) |
| #with multiple sentiments | 1,257 | 1,690 |
| for | | |
| your | | |
Complain
Interesting
!
thanks
, !
Kadokawa
Mayor
time
## 3.4 Prediction And Loss Function
The BART decoder takes the combination of Hˆ ,
Hˆ S, and the previous decoder output Y<t as inputs, and predicts the token probability distribution as follows:
$$\begin{array}{c}{{\tilde{H}=\lambda_{1}\hat{H}+\lambda_{2}\hat{H}^{S},}}\\ {{{}}}\\ {{h_{t}^{d}=D e c o d e r(\hat{H};Y_{<t})}}\\ {{\overline{{{H}}}_{T}=(W+\hat{H}_{T})/2}}\\ {{{}}}\\ {{P(y_{t})=s o f t m a x([\overline{{{H}}}_{T};C^{d}]h_{t}^{d})}}\end{array}$$
t) (15)
where λ1, λ2 are the hyper-parameters to control the contribution from the two modules. H˜T is the textual part of H˜ . W denotes the embeddings of input tokens. C
d means the embeddings of the [
positive, neutral, negative, <eos>]. The loss function is as follows:
$${\mathcal{L}}=-\mathbb{E}_{X\sim D}\sum_{t=1}^{O}l o g P(y_{t}|Y_{<t},X),\qquad(16)$$
where O = 2M + 2N + 2 is the length of Y, and X denotes the multimodal input.
## 4 Experiment 4.1 Experimental Settings
Datasets. Our two benchmark datasets are Twitter2015 and Twitter2017 (Yu and Jiang (2019)). As shown in the statistics of Table 1, sentences with multiple aspects take up a considerable part of the two datasets.
Implementation Details. Our model is based on BART (Lewis et al., 2019), and the pre-training task is trained for 40 epochs with batch size 64, and for 35 epochs with batch size 16 on MABSA.
The learning rates are both 7e-5 and hidden sizes are 768. Hyper-parameters λ1 and λ2 are 1 and 0.5 respectively. Besides, we pre-train A3M on TRC
dataset (Vempala and Preo¸tiuc-Pietro, 2019), which is divided into two groups according to whether the text is represented.
| Twitter2015 | Twitter2017 | | | | | |
|-------------------------------------------------------|---------------|------|------|------|------|------|
| Methods | P | R | F1 | P | R | F1 |
| SPAN* (Hu et al., 2019) | 53.7 | 53.9 | 53.8 | 59.6 | 61.7 | 60.6 |
| D-GCN* (Chen et al., 2020) | 58.3 | 58.8 | 59.4 | 64.2 | 64.1 | 64.1 |
| BART* (Yan et al., 2021) | 62.9 | 65.0 | 63.9 | 65.2 | 65.6 | 65.4 |
| UMT+TomBERT* (Yu et al., 2020; Yu and Jiang, 2019) | 58.4 | 61.3 | 59.8 | 62.3 | 62.4 | 62.4 |
| OSCGA+TomBERT* (Wu et al., 2020c; Yu and Jiang, 2019) | 61.7 | 63.4 | 62.5 | 63.4 | 64.0 | 63.7 |
| OSCGA-collapse* (Wu et al., 2020c) | 63.1 | 63.7 | 63.2 | 63.5 | 63.5 | 63.5 |
| RpBERT-collapse* (Sun et al., 2021) | 49.3 | 46.9 | 48.0 | 57.0 | 55.4 | 56.2 |
| UMT-collapse (Yu et al., 2020) | 61.0 | 60.4 | 61.6 | 60.8 | 60.0 | 61.7 |
| JML (Ju et al., 2021) | 65.0 | 63.2 | 64.1 | 66.5 | 65.5 | 66.0 |
| VLP-MABSA* (Ling et al., 2022) | 65.1 | 68.3 | 66.6 | 66.9 | 69.2 | 68.0 |
| CMMT (Yang et al., 2022) | 64.6 | 68.7 | 66.5 | 67.6 | 69.4 | 68.5 |
| AoM (ours) | 67.9 | 69.3 | 68.6 | 68.4 | 71.0 | 69.7 |
Evaluation Metrics. We evaluate the performance of our model on MABSA task and MATE task by Micro-F1 score (F1), Precision (P) and Recall (R),
while on MASC task we use Accuracy (Acc) and F1 following previous studies.
## 4.2 Baselines
We compare our proposed model with four types of methods listed below.
Approaches for textual ABSA. 1) **SPAN** (Hu et al., 2019) detects opinion targets with their sentiments. 2) **D-GCN** (Chen et al., 2020) models dependency relations among words via dependency tree. 3) **BART** (Yan et al., 2021) solves seven ABSA subtasks in an end-to-end framework.
Approaches for MATE. 1) RAN (Wu et al.,
2020b) focus on alignment of text and object regions. 2) UMT (Yu et al., 2020) takes text-based entity span detection as an auxiliary task. 3) **OSCGA** (Wu et al., 2020c) foucus on alignments of visual objects and entities.
Approaches for MASC. 1) **ESAFN** (Yu et al.,
2019) is an entity-level sentiment analysis method based on LSTM. 2) **TomBERT** (Yu and Jiang, 2019) applies BERT to obtain aspect-sensitive textual representations. 3) **CapTrBERT** (Khan and Fu, 2021) translates images into text and construct an auxiliary sentence for fusion.
Approaches for MABSA. 1) **UMT-collapse**
(Yu et al., 2020), **OSCGA-collapse** (Wu et al.,
2020c) and **RpBERT-collapse** (Sun et al., 2021)
are adapted from models for MATE by using collapsed labels to represent aspect and sentiment pairs. 2) UMT+TomBERT, **OSCGA+TomBERT**
are two pipeline approaches by combining UMT
(Yu et al., 2020) or OSCGA (Wu et al., 2020c) with TomBERT (Yu and Jiang, 2019). 3) JML (Ju et al.,
2021) is the first joint model for MABSA with auxiliary cross-modal relation detection module. 4)
CMMT (Yang et al., 2022) implements a gate to control the multimodal information contributions during inter-modal interactions. 5) **VLP-MABSA**
(Ling et al., 2022) performs five task-specific pretraining tasks to model aspects, opinions and alignments.
## 4.3 Main Results
| Text-based Multimodal |
|-------------------------|
In this section, we show the excellent performance of AoM on the two datasets for the three tasks compared with SOTAs.
Performance on MABSA: The results for MABSA are shown in Table 2. **First**, our AoM
far exceeds all text-based models, which means detection of richer visual information and textual information in our model is helpful. **Second**, multimodal pipeline methods and adaptive methods are generally unsatisfactory, because they ignore the interaction between the semantic information and sentiment for the two sub-tasks. **Last**, AoM outperforms all multimodal methods in every metric.
Especially, AoM achieves the improvement of 2%
and 1.2% with respect to F1 in contrast with the second best models on two datasets (*VLP-MABSA* for Twitter2015 and *CMMT* for Twitter2017), which demonstrates the effectiveness of learning aspectrelevant visual blocks and textual words compared to focusing on all visual and textual inputs.
Performance on MATE: As shown in Table 3, AoM is ahead of most of the current models and performs the best in Twitter 2015 by 0.3% higher than the second best *CMMT* on F1. The performance of *CMMT* in Twitter2017 is 0.8% higher
![6_image_1.png](6_image_1.png)
![6_image_3.png](6_image_3.png)
![6_image_4.png](6_image_4.png)
RAN* 80.5 81.5 81.0 90.7 90.7 90.0
UMT* 77.8 81.7 79.7 86.7 86.8 86.7 OSCGA* 81.7 82.1 81.9 90.2 90.7 90.4
JML* 83.6 81.2 82.4 92.0 90.7 91.4 VLP-MABSA* 83.6 87.9 85.7 90.8 92.6 91.7
CMMT 83.9 **88.1** 85.9 **92.2 93.9 93.1**
AoM (ours) **84.6** 87.9 **86.2** 91.8 92.8 92.3
Table 3: Results of different methods for MATE. * denotes the results from Ling et al. (2022).
![6_image_5.png](6_image_5.png)
Twitter2015 Twitter2017
Methods ACC F1 ACC F1 ESAFN 73.4 67.4 67.8 64.2 TomBERT 77.2 71.8 70.5 68.0 CapTrBERT 78.0 73.2 72.3 70.2
JML 78.7 - 72.7 - VLP-MABSA 78.6 73.8 73.8 71.8
CMMT 77.9 - 73.8 -
AoM (ours) **80.2 75.9 76.4 75.0**
Table 4: Results of different methods for MASC.
than ours, probably due to our model wrongly predicting some noun phrases as aspects. But considering the improvement in MASC and MABSA, it is still worthy treating all noun phrases in the sentence as candidate aspects when acquiring aspectrelevant visual information.
Performance on MASC: Table 4 shows the performance of MASC. It is exciting that our model outperforms the second-best results by 1.5% and 2.6% in accuracy, 2.1% and 3.2% points in F1 score on Twitter2015 and Twitter2017. It demonstrates that AoM has the ability to detect aspect-related sentiment information from both images and text, even disturbed by other noisy aspects.
## 4.4 Ablation Study
In this section, we research the effectiveness of each component in AoM, the results are shown in Table 5.
W/o A3**M&AG-GCN** shows that after removing the two specially designed modules, the per-
Table 5: The performance comparison of our full model and its ablated approaches.
![6_image_0.png](6_image_0.png)
| Twitter2015 | Twitter2017 | | | | | |
|----------------|---------------|------|------|------|------|------|
| Methods | P | R | F1 | P | R | F1 |
| Full | 67.9 | 69.3 | 68.6 | 68.4 | 71.0 | 69.7 |
| w/o A3M&AG-GCN | 65.7 | 67.3 | 66.5 | 66.5 | 69.0 | 67.8 |
| w/o A3M&TRC | 62.1 | 61.0 | 61.6 | 63.7 | 64.1 | 63.9 |
| w/o TRC | 66.8 | 68.4 | 67.6 | 67.8 | 69.8 | 68.8 |
| w/o AG-GCN | 67.0 | 69.4 | 68.2 | 67.8 | 69.7 | 68.8 |
| w/o SenticNet | 65.7 | 70.5 | 68.0 | 68.1 | 69.4 | 68.7 |
| w/o TRC&AG-GCN | 66.7 | 69.2 | 68.0 | 67.8 | 69.5 | 68.6 |
![6_image_2.png](6_image_2.png)
formance declines by 2.1% on Twitter2015 and 1.9% on Twitter2017. It fully demonstrates their contributions to learning effective information.
W/o A3**M&TRC** performs worse after removing A3M including the pre-training on TRC. It proves the necessity of modeling semantic alignment between visual blocks and aspects in A3M.
With the alignment, AG-GCN can obtain appropriate aspect-image-block and text-text association.
W/o TRC pre-training shows a slight drop after we remove the TRC pre-training on A3M, which implies relevant pre-training task is useful for the model to learn better parameters.
W/o AG-GCN displays the performance without AG-GCN, declining by 0.42% on Twitter2015 and 0.9% on Twitter2017. It means that AG-GCN
does make the prediction focus on specific aspects related to blocks and words with syntax dependencies. In other words, the multimodal interference from other aspects can be mitigated.
W/o SenticNet is the model without sentiment information in AG-GCN. Its performance shows adding external affective knowledge can enhance the sentiment comprehension of the model.
W/o TRC&AG-GCN is the BART model only with our A3M module. We can see from Table 5 that *w/o TRC&AG-GCN* improves w/o A3*M&AGGCN* by 1.5% and 0.8%. So it is effective to align the fine-grained visual block to related aspect and reduce irrelevant information.
## 4.5 Case Study
To better analyze how the Aspect-Aware Attention Module and Aspect-Guided Graph Convolutional Network work, we present the case study as follows. Figure 5 displays two examples with predictions from VLP-MABSA (Ling et al., 2022),
![7_image_0.png](7_image_0.png)
BART+A3M and our AoM. In example (a), VLPMABSA misses the aspect "Golden State Warriors", gets an incomplete aspect "Oklahoma City Thunder" and wrongly predicts the sentiment. It may be caused by the interference from the visual region which represents pride expression of a person. However, BART+A3M gets all right predictions due to the ability of aspect-oriented attention. In example (b), compared with our whole model, BART+A3M wrongly predicts the sentiment of "Daniel Radcliffe" which should be negative. We attribute the wrong prediction to lacking syntax association which benefits sentiment transmission. In other words, AG-GCN contributes to the correctness.
## 4.6 Attention Visualization
To investigate the effectiveness of detecting aspectrelevant information, we visualize the attention process as shown in Figure 6.
For A3M: (i) Figure 6-(I.a) shows the attention weights of candidate aspects computed according to the images. We can see that "Mayor Kadokawa" is the most relevant aspect. (ii) Figure 6-(I.b)
shows the proportions of the visual information reserved at the last step in A3M, where we weighted add up the representations of visual blocks and the corresponding aspects. The heat map shows that the visual information associated with "Mayor Kadokawa" is reserved to a great extent, while the helpless information from other blocks is disregarded as noise. It demonstrates that attention in A
3M is able to detect aspect-relevant information.
For AG-GCN: (i) Figure 6-(II.a) shows the word-to-word part of the weighted association matrix. The matrix effectively excludes sentiment interference from other aspects by adding syntax dependency information. For example, the sentiment of "mayor" cannot be influenced by irrelevant keywords, such as "Complain" and "thanks". (ii) Figure 6-(II.b) shows the dependencies between visual blocks and words. (iii) Specifically, we visualize the visual attention of aspects "Kyoto" (see Figure 6-(II.c) left) and "Mayor Kadokawa" (see Figure 6-(II.c) right). We can see that "Kyoto" pays more attention to the pictures hanging on the wall which are full of Japanese elements related to the place, while "Mayor Kadokawa" focus more on the joyful expressions of the two people. (iv) Figure 6-(II.d) shows the words and image blocks "Mayor Kadokawa" focused on in sentiment transmission.
It's obvious that these attentions are helpful for the prediction.
## 5 Conclusion
In this paper, we proposed an aspect-oriented model (AoM) for the task of multimodal aspectbased sentiment analysis. We use two specially designed modules to detect aspect-relevant information from the semantic and sentiment perspectives. On the one hand, to learn aspect-relevant semantic information especially from the image, we construct the Aspect-Aware Attention Module to align the visual information and descriptions to the corresponding aspect. On the other hand, to detect the aspect-relevant sentiment information, we explicitly add sentiment embedding into AoM.
Then, a graph convolutional network is used to aggregate the semantic and sentiment embedding under the guidance of both image-text similarity and syntax dependency in sentences. The experimental results on two widely used datasets demonstrate the effectiveness of our method.
## Limitations
Though our proposed method outperforms current state-of-the-art methods, there are still many challenges we should overcome in future research.
First, for colloquial expression which confuses current dependency tree parser, we should come up with new solutions. Second, emotional prediction of tweet posts describing current issues needs external knowledge, which is absent in existing research.
## Acknowledgments
We thank anonymous reviewers for their valuable comments. This work was supported by the Natural Science Foundation of Tianjin, China (No.22JCJQJC00150, 22JCQNJC01580), the National Natural Science Foundation of China (No.62272250), Tianjin Research Innovation Project for Postgraduate Students
(No.2022SKYZ232), and the Fundamental Research Funds for the Central Universities (No.
63231149).
## References
Meysam Asgari-Chenaghlu, M. Reza Feizi-Derakhshi, Leili Farzinvash, M. A. Balafar, and Cina Motamed.
2021. CWI: A multimodal deep learning approach for named entity recognition from social media using character, word and image features. Neural Computing and Applications, 34(3):1905–1922.
Yitao Cai, Huiyu Cai, and Xiaojun Wan. 2019.
Multi-Modal Sarcasm Detection in Twitter with Hierarchical Fusion Model. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2506–2515, Florence, Italy. Association for Computational Linguistics.
Guimin Chen, Yuanhe Tian, and Yan Song. 2020.
Joint aspect extraction and sentiment analysis with directional graph convolutional networks. In Proceedings of the 28th international conference on computational linguistics, pages 272–279.
Hao Chen, Zepeng Zhai, Fangxiang Feng, Ruifan Li, and Xiaojie Wang. 2022. Enhanced MultiChannel Graph Convolutional Network for
Aspect Sentiment Triplet Extraction. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2974–2985, Dublin, Ireland.
Association for Computational Linguistics.
Tao Chen, Damian Borth, Trevor Darrell, and ShihFu Chang. 2014. Deepsentibank: Visual sentiment concept classification with deep convolutional neural networks.
Zhuang Chen and Tieyun Qian. 2020. RelationAware Collaborative Learning for Unified AspectBased Sentiment Analysis. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 3685–3694, Online. Association for Computational Linguistics.
Devamanyu Hazarika, Roger Zimmermann, and Soujanya Poria. 2020. Misa: Modality-invariant and -specific representations for multimodal sentiment analysis. In Proceedings of the 28th ACM
International Conference on Multimedia, MM '20, page 1122–1131, New York, NY, USA. Association for Computing Machinery.
Minghao Hu, Yuxing Peng, Zhen Huang, Dongsheng Li, and Yiwei Lv. 2019. Open-domain targeted sentiment analysis via span-based extraction and classification. arXiv preprint arXiv:1906.03820.
Xincheng Ju, Dong Zhang, Rong Xiao, Junhui Li, Shoushan Li, Min Zhang, and Guodong Zhou.
2021. Joint Multi-modal Aspect-Sentiment Analysis with Auxiliary Cross-modal Relation Detection. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 4395–4405, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
KS Kalaivani, M Rakshana, K Mounika, and D Sindhu.
2022. Senticnet-based feature weighting scheme for sentiment classification. In Mobile Computing and Sustainable Informatics, pages 839–848. Springer.
Zaid Khan and Yun Fu. 2021. Exploiting bert for multimodal target sentiment classification through input space translation. In Proceedings of the 29th ACM International Conference on Multimedia, pages 3034–3042.
Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov, and Luke Zettlemoyer. 2019. Bart: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension.
arXiv preprint arXiv:1910.13461.
Ruifan Li, Hao Chen, Fangxiang Feng, Zhanyu Ma, Xiaojie Wang, and Eduard Hovy.
2021a. Dual Graph Convolutional Networks for Aspect-based Sentiment Analysis. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics
and the 11th International Joint Conference on Natural Language Processing (Volume 1:
Long Papers), pages 6319–6329, Online. Association for Computational Linguistics.
Yuanqing Li, Ke Zhang, Jingyu Wang, and Xinbo Gao.
2021b. A cognitive brain model for multimodal sentiment analysis based on attention neural networks.
Neurocomputing, 430:159–173.
Bin Liang, Hang Su, Lin Gui, Erik Cambria, and Ruifeng Xu. 2022a. Aspect-based sentiment analysis via affective knowledge enhanced graph convolutional networks. Knowledge-Based Systems, 235:107643.
Bin Liang, Rongdi Yin, Lin Gui, Jiachen Du, and Ruifeng Xu. 2020. Jointly Learning Aspect-Focused and Inter-Aspect Relations with Graph Convolutional Networks for Aspect Sentiment Analysis. In Proceedings of the 28th International Conference on Computational Linguistics, pages 150–161, Barcelona, Spain (Online). International Committee on Computational Linguistics.
Shuo Liang, Wei Wei, Xian-Ling Mao, Fei Wang, and Zhiyong He. 2022b. BiSyn-GAT+: Bi-Syntax Aware Graph Attention Network for Aspect-based Sentiment Analysis. In Findings of the Association for Computational Linguistics: ACL 2022, pages 1835–
1848, Dublin, Ireland. Association for Computational Linguistics.
Yan Ling, Jianfei Yu, and Rui Xia. 2022. VisionLanguage Pre-Training for Multimodal AspectBased Sentiment Analysis. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1:
Long Papers), pages 2149–2159, Dublin, Ireland. Association for Computational Linguistics.
Yanxia Lv, Fangna Wei, Lihong Cao, Sancheng Peng, Jianwei Niu, Shui Yu, and Cuirong Wang. 2021.
Aspect-level sentiment analysis using context and aspect memory network. Neurocomputing, 428:195–
205.
Yukun Ma, Haiyun Peng, and Erik Cambria. 2018. Targeted aspect-based sentiment analysis via embedding commonsense knowledge into an attentive lstm.
Proceedings of the AAAI Conference on Artificial Intelligence, 32(1).
Shinhyeok Oh, Dongyub Lee, Taesun Whang, IlNam Park, Seo Gaeun, EungGyun Kim, and Harksoo Kim. 2021. Deep Context- and Relation-Aware Learning for Aspect-based Sentiment Analysis.
In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 2:
Short Papers), pages 495–503, Online. Association for Computational Linguistics.
Shiguan Pang, Yun Xue, Zehao Yan, Weihao Huang, and Jinhui Feng. 2021. Dynamic and Multi-Channel Graph Convolutional Networks for Aspect-Based Sentiment Analysis. In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, pages 2627–2636, Online. Association for Computational Linguistics.
Lin Sun, Jiquan Wang, Kai Zhang, Yindu Su, and Fangsheng Weng. 2021. Rpbert: A text-image relation propagation-based bert model for multimodal ner.
Proceedings of the AAAI Conference on Artificial Intelligence, 35(15):13860–13868.
Alakananda Vempala and Daniel Preo¸tiuc-Pietro. 2019.
Categorizing and inferring the relationship between the text and image of twitter posts. In Proceedings of the 57th annual meeting of the Association for Computational Linguistics, pages 2830–2840.
Hanqian Wu, Siliang Cheng, Jingjing Wang, Shoushan Li, and Lian Chi. 2020a. Multimodal aspect extraction with region-aware alignment network. In Natural Language Processing and Chinese Computing, pages 145–156, Cham. Springer International Publishing.
Hanqian Wu, Siliang Cheng, Jingjing Wang, Shoushan Li, and Lian Chi. 2020b. Multimodal aspect extraction with region-aware alignment network. In CCF International Conference on Natural Language Processing and Chinese Computing, pages 145–156. Springer.
Yang Wu, Yanyan Zhao, Hao Yang, Song Chen, Bing Qin, Xiaohuan Cao, and Wenting Zhao.
2022. Sentiment Word Aware Multimodal Refinement for Multimodal Sentiment Analysis with ASR Errors. In Findings of the Association for Computational Linguistics: ACL 2022, pages 1397–
1406, Dublin, Ireland. Association for Computational Linguistics.
Zhiwei Wu, Changmeng Zheng, Yi Cai, Junying Chen, Ho-fung Leung, and Qing Li. 2020c. Multimodal representation with embedded visual guiding objects for named entity recognition in social media posts. In Proceedings of the 28th ACM International Conference on Multimedia, pages 1038–1046.
Junjie Xu, Shuwen Yang, Luwei Xiao, Zhichao Fu, Xingjiao Wu, Tianlong Ma, and Liang He. 2022.
Graph convolution over the semantic-syntactic hybrid graph enhanced by affective knowledge for aspectlevel sentiment classification. In 2022 International Joint Conference on Neural Networks (IJCNN),
pages 1–8. IEEE.
Lu Xu, Hao Li, Wei Lu, and Lidong Bing.
2020. Position-Aware Tagging for Aspect Sentiment Triplet Extraction. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 2339–2349, Online. Association for Computational Linguistics.
Hang Yan, Junqi Dai, Xipeng Qiu, Zheng Zhang, et al. 2021. A unified generative framework for aspect-based sentiment analysis. arXiv preprint arXiv:2106.04300.
Li Yang, Jin-Cheon Na, and Jianfei Yu. 2022.
Cross-Modal Multitask Transformer for End-toEnd Multimodal Aspect-Based Sentiment Analysis. Information Processing & Management, 59(5):103038.
Jianfei Yu and Jing Jiang. 2019. Adapting bert for target-oriented multimodal sentiment classification. In Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence, IJCAI-19, pages 5408–5414. International Joint Conferences on Artificial Intelligence Organization.
Jianfei Yu, Jing Jiang, and Rui Xia. 2019. Entitysensitive attention and fusion network for entity-level multimodal sentiment classification. IEEE/ACM
Transactions on Audio, Speech, and Language Processing, 28:429–439.
Jianfei Yu, Jing Jiang, Li Yang, and Rui Xia. 2020.
Improving multimodal named entity recognition via entity span detection with unified multimodal transformer. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 3342–3352, Online. Association for Computational Linguistics.
Li Yuan, Jin Wang, Liang-Chih Yu, and Xuejie Zhang.
2020. Graph attention network with memory fusion for aspect-level sentiment analysis. In Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing, pages 27–36, Suzhou, China.
Association for Computational Linguistics.
Qi Zhang, Jinlan Fu, Xiaoyu Liu, and Xuanjing Huang.
2018. Adaptive co-attention network for named entity recognition in tweets. In Thirty-Second AAAI
Conference on Artificial Intelligence.
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
✓ A1. Did you describe the limitations of your work?
We discuss it in section Limitations.
✗ A2. Did you discuss any potential risks of your work?
Our research is foundational research and not tied to particular applications.
✓ A3. Do the abstract and introduction summarize the paper's main claims?
In the abstract and section 1 Introduction.
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✓ **Did You Use Or Create Scientific Artifacts?** In Section 3 Methodology And 4 Experiment.
✓ B1. Did you cite the creators of artifacts you used?
In section 3.1 Overview and 4 Experiment.
✗ B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
The datasets used in section 4 and pre-trained models in section 3 are in public domain and licensed for research purposes.
✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
In section 4 Experiment.
B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
Not applicable. The data is in public domain and licensed for research purposes.
✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
In section 4 Experiment.
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
In section 4 Experiment.
## C ✓ **Did You Run Computational Experiments?** In Section 4 Experiment.
✗ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
We only use the most commonly used pre-trained models and the parameters or GPU hours are not focus of our research.
The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance.
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
In section 4 Experiment.
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
In section 4 Experiment.
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
In section 4 Experiment.
D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
No response.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? No response.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
No response.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
No response.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
No response. |
koval-etal-2023-forecasting | Forecasting Earnings Surprises from Conference Call Transcripts | https://aclanthology.org/2023.findings-acl.520 | There is a multitude of textual data relevant to the financial markets, spanning genres such as financial news, earnings conference calls, and social media posts. Earnings conference calls are one of the most important to information flow as they reflect a direct communication between company executives, financial analysts, and large shareholders. Since these calls contain content that is forward-looking in nature, they can be used to forecast the future performance of the company relative to market expectations. However, they typically contain over 5,000 words of text and large amounts of industry jargon. This length and domain-specific language present problems for many generic pretrained language models. In this work, we introduce a novel task of predicting earnings surprises from earnings call transcripts and contribute a new long document dataset that tests financial understanding with complex signals. We explore a variety of approaches for this long document classification task and establish some strong baselines. Furthermore, we demonstrate that it is possible to predict companies{'} future earnings surprises from solely the text of their conference calls with reasonable accuracy. Finally, we probe the models through different interpretability methods and reveal some intuitive explanations of the linguistic features captured that go beyond traditional sentiment analysis. | # Forecasting Earnings Surprises From Conference Call Transcripts
Ross Koval1,3, Nicholas Andrews2**, and Xifeng Yan**1 1University of California, Santa Barbara 2Johns Hopkins University 3AJO Vista [email protected]
## Abstract
There is a multitude of textual data relevant to the financial markets, spanning genres such as financial news, earnings conference calls, and social media posts. Earnings conference calls are one of the most important to information flow as they reflect a direct communication between company executives, financial analysts, and large shareholders. Since these calls contain content that is forward-looking in nature, they can be used to forecast the future performance of the company relative to market expectations. However, they typically contain over 5,000 words of text and large amounts of industry jargon. This length and domainspecific language present problems for many generic pretrained language models. In this work, we introduce a novel task of predicting earnings surprises from earnings call transcripts and contribute a new long document dataset that tests financial understanding with complex signals. We explore a variety of approaches for this long document classification task and establish some strong baselines. Furthermore, we demonstrate that it is possible to predict companies' future earnings surprises from solely the text of their conference calls with reasonable accuracy. Finally, we probe the models through different interpretability methods and reveal some intuitive explanations of the linguistic features captured that go beyond traditional sentiment analysis.
## 1 Introduction
There is a multitude of textual data relevant to the financial markets, spanning genres such as financial news, earnings conference calls, analyst recommendation reports, social media posts, and regulatory filings. Earnings conference calls are one of the most important datasets relevant to the information flow in equity markets because they reflect a direct communication between company executives and financial analysts (Brown et al., 2004).
Many public companies in the US hold earnings
| Input: |
|----------|
Figure 1: Paragraph of a sample transcript from the Validation Set that resulted in a positive surprise prediction from the model.
| ". . . we continued our positive momentum in the first quarter reporting comp sales that accelerated from our strong fourth quarter performance. during the quarter, we drove market share gains and better than expected profitability by capitalizing on the advantages of our business model with dynamic marketing, compelling brands, and providing our customers with the preferred beauty shopping experience. . . " | |
|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|------------------------------------|
| Output: | Positive Surprise, P(y = 1) = 0.95 |
conference calls quarterly, in which their executives discuss the recent performance of the firm, their prospects, and answer questions from financial analysts covering their firms. Typically, companies report results quarterly at a lag of 4-6 weeks from the end of the previous period, and hold a conference call shortly thereafter. Therefore, the company executives have substantial knowledge into the next period's results when the call is held, providing a rare opportunity to detect textual indicators, such as tone and emotion in executive and analyst language patterns. These diverse signals, which can vary from clear sentiment to more subtle signs of deception (Larcker and Zakolyukina, 2012) or obfuscation (Bushee et al., 2018), may reveal important information about the current and future prospects of the company and be used to forecast its future earnings surprises. Earnings surprises, which measure the operating performance of a company relative to market expectations, are highly followed by equity investors and often result in high magnitude stock returns and volatility (Doyle et al., 2006).
In this paper, we consider the problem of using the textual content of earnings call transcripts to forecast future earnings surprises. It is important to note that this is a challenging task because the forecasting horizon is long (∼3 months), producing a lot of uncertainty between the forecast and event data, and there are legal restrictions about what is allowed to be disclosed to the public during the call. As a result, it is not clear *a priori* if the content of call transcripts contains sufficient task signal to outperform even uninformed baselines.
In the broader literature, there has been growing interest in the relevance of textual content to financial markets that has increasingly grown in sophistication in recent years. Some initial attempts used the Harvard General Inquirer IV-4 (HGI) dictionary to measure word polarity in financial text but found there to be domain mismatch (Price et al.,
2012). Then, Loughran and Mcdonald (2011) constructed their own financial-specific sentiment dictionary (LM). Interestingly, they found that 75%
of words with strong negative polarity in the HGI dictionary had a neutral sentiment in a financial context (e.g. liability, tax, excess, etc.). Further, Ke et al. (2019) develop a supervised learning method to identify sentiment in WSJ news articles, and found that over 40% of the most negative words identified by their model are not present in the LM dictionary because they are not negative in the context of regulatory filings. In other words, even within the financial domain, there can be a genre mismatch between different types of financial text. This motivated us in this work to explore models that are well attuned to the language of earnings conference calls.
However, the length of these conference calls, typically ranging from 5,000 to 10,000 words per transcript, creates some challenges because many of the popular pretrained language models only support a maximum length of 512 tokens. While there have been many advances in developing Efficient Transformers that reduce the time complexity of the self-attention mechanism, these methods are still computationally expensive to pretrain and there does not yet exist a variety of pretrained versions, such as BERT (Devlin et al., 2019) and RoBERTa (Liu et al., 2019), which both contain many variants targeting specific domains, such as biology (Lee et al., 2020), medicine (Gu et al., 2021), law (Chalkidis et al., 2020), finance (Yang et al., 2020a), and many others. As far as we know, there does not exist equivalent domain specific pretrained versions of these Efficient Transformers.
In this work, we explore a variety of approaches to this novel long document classification task and make the following contributions.
1. We introduce a novel task of using the text from earnings conference calls to make long horizon predictions of future company earnings surprises and explore a variety of approaches to this long document classification task (§3).
2. We contribute a new long document dataset that tests financial language understanding with complex signals that we anticipate to be of broader interest to the computational linguistics community (§3.1).
3. We explore a variety of approaches for this task, including simple bag-of-words models as well as long document Transformers, such as Efficient Transformers and tailored Hierarchical Transformer models, establishing the state of the art (§4).
4. We demonstrate that it is possible to predict companies' future earnings surprises with reasonable accuracy from solely the textual content of their most recent conference calls (§5).
5. We probe the best model through different interpretability methods to reveal some intuitive explanations of the linguistic features captured, which indicates that our model is learning more powerful features than just traditional sentiment (§6).
We release the dataset and sample code at:
https://github.com/rosskoval/fc-es-ccts
## 2 Related Work 2.1 Long Document Classification
There are generally two different approaches to modeling long documents. First, there is a class of Efficient Transformer models that were designed for long documents, such as TransformerXL (Dai et al., 2019), Longformer (Beltagy et al.,
2020), BigBird (Zaheer et al., 2020), and Reformer (Kitaev et al., 2020), which modify the self-attention mechanism in the original implementation to make it more efficient to accommodate longer contexts. These models have been shown to excel at long document understanding tasks, such as classification, question-answering, and summarization.
Alternatively, a hierarchical attention approach can be used. In Hierarchical Attention Networks
(Yang et al., 2016), the authors model a document in a hierarchical fashion, viewing a sentence as a sequence of words and a document as a sequence of sentences, and use self-attention and recurrent networks to produce document representations.
More recent work on Hierarchical Transformers
(HTs) extend this approach to use pretrained language models for segment-level embeddings and Transformers for document-level representations
(Pappagari et al., 2019; Yang et al., 2020b; Zhang et al., 2019; Mulyar et al., 2019). This approach naturally supports longer sequence lengths due to the product of multiple self-attention mechanisms. Although Efficient Transformers are attractive in principle, there is much less availability of them pretrained in different domains and languages, such as mBERT (Devlin et al., 2019) and XLM-RoBERTa (Conneau et al., 2020).
## 2.2 Financial Prediction
Since earning conference calls are highly followed by most investors, there has been a considerable amount of research performed on them. For instance, Larcker and Zakolyukina (2012) use data on subsequent financial restatements to analyze conference calls for deceptive behavior. Frankel et al. (2018), apply traditional ML models to extract the sentiment of conference calls and use it to predict subsequent analyst revisions. However, as far as we know, this is the first work that uses deep learning to directly learn to predict earnings surprises from conference call transcripts in an endto-end manner.
In Qin and Yang (2019), the authors propose a deep multi-modal regression model that jointly leverages textual and audio data from a small sample of conference calls to predict short-term stock volatility. Sawhney et al. (2020a,b) build on that work to leverage the network structure of stock correlations with graph networks and financial features to make joint predictions. Similarly, Sang and Bao (2022) propose a multi-modal structure that jointly models the textual dialogue in the call and the company network structure. Further, Yang et al. (2020a, 2022) propose a multi-modal model that leverages the numerical content of financial text.
In Huang et al. (2022), the authors release a pretrained language model for the financial domain, termed FinBERT, and show that their model understands financial text significantly better than competing methods in many aspects, including sentiment and ESG. They constructed the model by pretraining the BERT-base architecture from scratch with a custom vocabulary on a large corpus of English text from the financial domain, consisting of conference calls, corporate filings, and analyst recommendation reports, that is commensurate in size to the BERT pretraining corpus.
## 3 Problem 3.1 Data
We collected manually transcribed English conference calls from the largest publicly trade companies in the US (MSCI USA Index) over January 2004 to December 2011, from FactSet Document Distributor. We also source Reported Earnings per Share (EPS) and Analyst Consensus Estimates of EPS from FactSet Fundamentals and Consensus Estimates, respectively. To focus on the largest and most actively traded companies in the US, we filter the data such that all companies in the sample have market capitalizations above $1B USD and daily average trading volume over $50M USD.
Then, we temporally partition the data into train, validation, and test sets. We use transcripts that occurred between 2005 and 2009 as the training set, those that occurred in 2010 as the validation set, and those that occurred in 2011 as the test set. It is important to note that these sets much be temporally disjoint and monotonically ascending in time to avoid look-ahead bias. We provide summary statistics in Table 1.
| Train | Validation | Test | |
|--------------------|--------------|----------|----------|
| Start Date | Jan 2004 | Jan 2010 | Jan 2011 |
| End Date | Dec 2009 | Dec 2010 | Dec 2011 |
| Sample Size | 4,056 | 524 | 588 |
| Avg # of Words | 9,016 | 8,907 | 9,010 |
| Max # of Words | 26,130 | 19,853 | 17,650 |
| Avg # of Sentences | 390 | 409 | 415 |
| Avg # of Words per | 25 | 22 | 22 |
| Sentence | | | |
Table 1: Summary Statistics of the Earnings Call Transcript dataset on each sample split.
## 3.2 Supervised Learning Task
We propose the task to predict the direction of the next earnings surprise ES from solely the textual content of the most recent transcript as a supervised learning task. Therefore, the input is a raw, unsegmented, English transcript with maximum number of words LT . We set LT to be 12,000 words (∼20,000 BERT tokens) for computational and memory constraints, and because less than 10% of the transcripts in the sample are longer than that length. We select the Standardized Unexpected Earnings (SUE; Latane and Jones, 1979) as our measure of earnings surprise, which is defined to be the difference between the reported EPS of the company and the analyst consensus estimate of the EPS, scaled by the inverse of the standard deviation (dispersion) of the analyst forecasts. We measure the analyst consensus estimate as the mean of all latest valid analyst forecasts, collected 1-month following the last earnings call transcript (the one we are using to make the prediction), which serves are the closest approximation to forward-looking market expectations; this allows analysts to update their forecasts based off their perception of the conference call and recently reported company results, and yields a more challenging, but potentially more rewarding, task than if we collected analyst forecasts at an earlier time horizon. We note that there is roughly a 3-month time horizon between the earnings call and the reporting of the next earnings surprise, making this long horizon prediction task particularly challenging.
$$E S={\frac{R e p E P S-A v g(E s t E P S)}{S t d(E s t E P S)}}$$
$$y={\begin{cases}0,&E S\leq-\delta\\ 1,&E S\geq\delta\end{cases}}$$
We binarize the continuous value, such that an ES above δ corresponds to a label of +1 and represents a positive surprise, while an ES below −δ corresponds to a label of 0 and represents a negative surprise. We select a value for δ of 0.10 as a balance between the sample size and the significance of the events, such that about 14 of events are positive surprises and 14 of events are negative surprises. We discard transcripts which do not result in a material earnings surprise. Since this does not translate to a perfect class label balance, we randomly down-sample the majority class, such that there is an equal 50/50 split of positive surprises and negative surprises in each sample split, to more easily interpret the results. Thus, we can use accuracy score as our primary evaluation metric. We use binary cross-entropy as the loss function for this binary classification task.
While the underlying earnings surprise metric is continuous, the importance of the metric to the market is often more binary. In general, market participants are more interested in the direction of the surprise than the precise magnitude of it and typically react accordingly insofar as the surprise is a "material" event. However, there are neutral cases when the company beats or misses the forecast by a small margin (i.e. [-0.10, 0.10] in this work) in which market reaction is typically lower.
These boundary cases are generally difficult for the model to learn from at this long horizon because the ex-ante true probability is approximately random and there is often some form of earnings management involved. Therefore, we choose to focus on the most important earnings surprise events and disregard the neutral class.
## 4 Methods 4.1 Approach
We provide a wide variety of baseline models for this task, consisting of a combination of traditional and neural models, and establish several strong baselines as well as the state-of-the-art. While the current literature suggests that domain adaptation methods, such as language model finetuning on a domain-relevant corpus, is beneficial when using generic pretrained language models in out-ofdistribution tasks, (Han and Eisenstein, 2019; Gururangan et al., 2020), this additional pretraining is computationally expensive and requires largescale datasets to be effective. Therefore, we explore the use of existing pretrained language models.
## 4.2 Bag-Of-Words
For the classical ML baselines, we select bag-ofwords with n-grams (BOW) with TF-IDF weighting (Salton and Buckley, 1988), and Logistic
(Logistic) and Gradient Boosted Decision Trees
(GBDT) as classifiers. We also provide a simple dictionary-based model that uses the proportion of words in each category of the Loughran and McDonald (LM) financial sentiment dictionary as features to a Logistic classifier. Additionally, we consider both general (BERT-Sent) and domain-specific (FinBERT-Sent) pretrained sentiment classifiers that are applied at the sentence level and aggregated with simple majority-rule voting. BERT-Sent is BERT-base-uncased (Devlin et al., 2019) finetuned on IMDB movie reviews
(Maas et al., 2011) and FinBERT-Sent is FinBERT
(Huang et al., 2022) finetuned on manually labeled sentences from financial analyst research reports.
Given that the positive autocorrelation of earnings surprises is documented in the financial literature (Kama, 2009), we provide a simple autoregressive time-series baseline AR(1) that fits a logistic classifier on the continuous value of the firm's previous earnings surprise. In general, the autocorrelation beyond the most recent quarter is much lower. While the resulting performance is far below that of the best long-document models we provided, it is important to note that the signal contained in the text is likely largely distinct from and complementary to the information contained in the lagged surprise variables.
## 4.3 Short Context Models
For the short context models that only support up to 512 tokens of text, we consider multiple variants of FinBERT, including first, last, and random 512 tokens (truncation), as well as various forms of aggregation, including mean & max pooling over time, to aggregate segment embeddings into document representations.
## 4.4 Hierarchical Transformers
We provide various forms of Hierarchical Transformers (HTs) with different pretrained segment encoders, and train them end-to-end on our supervised learning task. HTs take a hierarchical approach by dividing each long document into shorter non-overlapping segments of maximum length L. To do so, we use greedy sentence chunking to recursively add sentences to each segment until the length of the segment exceeds L. We choose this segmentation strategy to avoid breaking up and mixing sentences into chunks to better preserve their syntax and semantics. Since it is not clear what the optimal value of L should be, we treat it as an additional hyperparameter and tune it over {32, 64, 128}.
## Segment Encoder
We initialize the segment encoder with pretrained models, which typically supports a maximum sequence length of 512 tokens. We explore BERT
(Devlin et al., 2019) and FinBERT (Huang et al.,
2022). This model produces contextualized embeddings of all tokens in each segment and we extract the last hidden state of the first [CLS] token as our segment representation (Devlin et al., 2019).
## Document Encoder
We use the standard Transformer architecture
(Vaswani et al., 2017) with multi-head selfattention and sinusoidal positional encodings as the document encoder. The document encoder is responsible for taking the segment embeddings and producing contextualized segment representations by allowing each segment to attend to all other segments in the document and share information. We apply max pooling over time and concatenate with the first state to arrive at our document representation. We tune the number of layers over {2, 3, 4} and set the number of attention heads in each layer to be 6.
## 4.5 Efficient Transformers
We select BigBird (Zaheer et al., 2020) as our Efficient Transformer baseline model because it has been shown to exhibit state-of-the-art performance on long document classification and questionanswering tasks (Zaheer et al., 2020). The model applies a combination of local (sliding window),
random, and global attention to sparsely approximate the full self-attention matrix. We also experimented with Longformer (Beltagy et al., 2020) but found BigBird to be more efficient and effective on this task, likely due to the increased number of global attention tokens and smaller attention window sizes. This result may suggest that the number of global attention tokens can be tradedoff against the attention window size to improve efficiency and maintain effectiveness in Efficient Transformer models.
Since there are no versions of BigBird pretrained on the financial domain, we use the RoBERTa-base checkpoint that was warm-started from RoBERTa-base and further pretrained on a large corpus of long documents. We continue the MLM pretraining process on our in-task dataset to adapt to financial language (Gururangan et al.,
2020). We tune the block size over {32, 64, 84} and the number of random blocks over {3, 4, 5}.
Please see Appendix A for more details.
We also provide two simple, heuristic-based extraction baselines in which we first extract the most salient (*a priori*) sentences in each transcript, defined to be those that contain forward-looking statements (FLSE) according to Li (2010) or positive/negative sentiment words (LMSE) according to the LM dictionary, and pass the resulting abridged text to BigBird, thereby reducing the text length by about 75% and 67%, respectively, and potentially avoiding the truncation of the most relevant sentences. However, we do not find these extraction steps to be effective and discuss them further below.
## 4.6 Implementation Details
We train all models for a maximum of 10 epochs and select the checkpoint with the highest validation accuracy for further evaluation. BigBird and Hierarchical Transformers both contain approximately 130M parameters. We include more details on the implementation and training process in Appendix A.
## 5 Results 5.1 Comparison
| Model | Test Accuracy |
|------------------------|-----------------|
| AR(1) | 58.33% |
| BERT-Sent | 49.58% |
| FinBERT-Sent | 51.27% |
| LM + Logistic | 60.20% |
| BOW + Logistic | 67.68% |
| BOW + GBDT | 71.43% |
| FinBERT - First 512 | 63.44% |
| FinBERT - Last 512 | 62.24% |
| FinBERT - Random 512 | 62.07% |
| FinBERT + Mean Pooling | 66.84% |
| FinBERT + Max Pooling | 73.28% |
| BigBird | 75.87% |
| BigBird + MLM | 74.30% |
| BigBird + FLSE | 65.65% |
| BigBird + LMSE | 72.98% |
| Hierarchical BERT | 70.24% |
| Hierarchical FinBERT | 76.56% |
GBDTs, outperforming many of the simple neural baselines. This indicates that substantial signal can be captured through non-linear interaction of normalized unigram/bigram features. It also indicates the models that try to reduce the length of text through either various forms of truncation or simple aggregation, likely dilute the signal and do not possess the ability to identify and capture the most salient portions of the transcript.
Further, we observe the importance of domain alignment within the Hierarchical Transformer models, with FinBERT performing significantly better than BERT for this task. Given earnings conference calls were a large component of the pretraining corpus, the benefit of FinBERT is expected. However, we do not find that domain adaptation of BigBird via further MLM pretraining to be beneficial in this setting, likely because of the small size of the training set.
Interestingly, we find that the addition of FLSE
and LMSE is detrimental to the performance of BigBird, and that FLSE performs considerably worse than the best simple baselines, suggesting that the signal contained in the task requires the additional text to contextualize statements about forward-looking performance with information about the past and present, and supports the view that a model that can simultaneously process the full transcript in an end-to-end manner is required for strong performance on this task.
## 5.2 Document Length
| # Tokens | Test Accuracy |
|------------|-----------------|
| 1,000 | 70.09% |
| 2,000 | 72.64% |
| 5,000 | 74.34% |
| 10,000 | 75.72% |
| 20,000 | 76.56% |
As shown in Table 2, the pretrained sentiment models with simple majority-rule voting, BERTSent and FinBERT-Sent, fail to perform much better than random chance, indicating the difficulty of the task. Surprisingly, we observe that a simple bag-of-ngrams with TF-IDF weighting performs comparatively well on this dataset when used with In Table 3, we apply the trained Hierarchical FinBERT model to the test set with different truncation lengths. Our results also indicate the performance degradation when truncating long documents to shorter lengths and demonstrate the need for models to support the ability to process longer documents, as there is more than a 6% gap in test set accuracy between using the first 1,000 tokens and using the first 20,000 tokens, indicating that truncation loses valuable information contained in the middle of the transcript. While the first and last portions of the transcript may be the most relevant, it is clear that the middle portions also contain predictive value.
## 5.3 Training Efficiency
| Model | Time |
|----------------------|--------|
| Hierarchical FinBERT | 1.00 |
| BigBird | 1.40 |
| Longformer | 1.79 |
Table 4: Comparison of model finetuning times per epoch normalized to Hierarchical FinBERT.
In Table 4, we observe that the hierarchical structure of our model is quite efficient for processing long documents (>20K tokens). Compared to BigBird and Longformer, which can only process the first 4,096 tokens, we observe an approximately 50% speed up in finetuning time.
## 6 Model Interpretability And Analysis
Since Hierarchical FinBERT was the best performing model, we probe its predictions through a variety of interpretability methods to better understand the linguistic features important to this task.
## 6.1 Lime
| Top Words for Positive Surprise: Top Words for Negative Surprise: |
|---------------------------------------------------------------------|
First, we conduct Local Interpretable Modelagnostic Explanations (LIME) (Ribeiro et al., 2016) as a word-level attribution test, which constructs a sparse linear model across the perturbed neighborhood of each sample to approximate the influence of the top 10 features (words) to the model's prediction. We sort by the words that have the highest feature importance summed over 100 random samples from test set (given length of the transcripts, performing this analysis on the full test set was not computationally feasible).
As shown in Figure 2, many of the phrases with the highest importance have a strong sentiment attached to them. For instance, "congratulations" and "great quarter" are important features, which are used by financial analysts to praise the performance of the company. Since it appears that the top positive words are more intuitive and have larger magnitude weights, we conjecture that positive sentiment is more easily expressed, while negative sentiment may often manifest in what is not said. We also note that words such as "because" and "caused" may be used to try to explain away poor performance.
| higher, strong, great, raising, beat, congrats, outperformance, benefitted, quarter, formidable, improvement bad, challenging, negatives, impacted, caused, tough, unfavorable, because, capacity, offset |
|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
## 6.2 Lm Sensitivity Analysis
To further understand model behavior, we perform another interpretability test using the LM financial dictionary and the predictions of Hierarchical FinBERT. We provide an overview of the summary statistics of the dictionary and results in Table 5.
To do so, we compute the proportion of total words in each transcript that belong to each LM category as financial sentiment variables. Then, we extract the model predictions (P(y = 1)) on the test set and regress them onto the financial sentiment variables. We observe that the model's predictions are positively associated with positive financial sentiment, and negatively associated with negative and constraining financial sentiment.
We also see smaller associations with strong modal (negative), litigious (positive), and uncertain (positive), but these are less statistically significant. We note that the LM dictionary was created based on word meaning in a sample of firm regulatory filings, which are distinct in style from conference calls, so there may be some domain mismatch. While some of the variables are sta-
| Category | # | % | sentences | coeff | pvalue |
|--------------|-------|------|-------------|---------|-------|
| % | | | | | |
| words | words | | | | |
| Positive | 347 | 1.29 | 21.69 | 53.73 | 0.000 |
| Negative | 2345 | 0.65 | 11.99 | -38.56 | 0.000 |
| Uncertain | 297 | 0.88 | 15.61 | 16.90 | 0.117 |
| Litigious | 903 | 0.13 | 2.50 | 14.59 | 0.284 |
| Constraining | 184 | 0.10 | 1.30 | -31.97 | 0.050 |
| Strong Modal | 19 | 0.48 | 9.39 | -9.51 | 0.106 |
| Weak Modal | 27 | 0.40 | 7.56 | 43.04 | 0.017 |
tistically significant at the 95% level, the linear model has an adjusted R2 of 10.3% (without the fixed effects), indicating that the trained model is capturing more than just LM sentiment. We note the negative relationship with strong modal words, such as "must," "best," "clearly," which may be used in persuasive writing to convince the audience of a particular viewpoint, and we conjecture that this may be a potential sign of executives trying to control the market narrative towards their company. In fact, Loughran and Mcdonald (2011)
find that firms with higher proportions of strong modal words in their regulatory filings are more likely to report material weakness in their accounting controls. Conversely, there appears to be a positive relationship with weak modal and uncertain words, such as "may," "depends," "appears,"
which may be a reflection of executives's honest portrayal of their expectations about the future.
## 6.3 Lm Sentence Masking
We also conduct a masking-based interpretability method in which we remove all sentences in the test set that contain at least one word from any LM
financial sentiment category and perform inference on the masked test set. We report the model performance in Table 6. We observe that while the trained model is capturing the sentiment conveyed by each category of words, it is not overly reliant on any single category and appears to be multifaceted and balanced in its ability to utilize multiple types of signals inherent in the transcripts.
This indicates that the model is relatively robust to perturbations in the input space, suggesting that it may be less susceptible to manipulation (e.g. if executives try to avoid certain words or phrases they think the market would react negatively to) than keyword based approaches.
| LM Category | Accuracy |
|---------------|------------|
| None | 76.56% |
| Positive | 70.41% |
| Negative | 72.11% |
| Uncertain | 69.73% |
| Litigious | 69.39% |
| Constraining | 69.90% |
| Strong Modal | 69.56% |
| Weak Modal | 70.69% |
## 6.4 Forward-Looking Statements
Since a typical conference call contains information about the past, present, and future performance of the company, we wish to understand the importance of forward-looking content to the predictions. In particular, we define sentences that contain certain keywords, such as "will," "expect," "believe," etc., to be forward-looking statements
(FLS) according to Li (2010). We then examine the relationship between the number of forwardlooking statements in the text (NFLS) and the model performance in Figure 3. Further, we observe that model performance generally increases for larger values of NFLS. We conjecture that higher values of NFLS provide the model with more signal about the firm's future prospects.
![7_image_0.png](7_image_0.png)
## 6.5 Comparison
While the Hierarchical Transformer models are able to process almost the full transcript in an efficient manner, the best model (FinBERT) only outperforms BigBird by about 0.70 absolute percentage points, even though it is able to process more than 5 times the number of tokens. This result seems to indicate that the BigBird architecture, which applies the global attention simultaneously with local attention rather than in a hierarchical fashion, is more effective in this setting, perhaps because of the ability of the model to inject global context into the token-level representations. Therefore, we would expect BigBird to outperform the HTs if it could be extended to support longer sequence lengths and/or adapted to the financial domain. However, given that BigBird is already 50% slower than the HTs, the training time may become intractable without adjusting to significantly smaller block sizes, and we leave it to future research to identify the best approaches to efficiently extend Efficient Transformer models, such as BigBird, to support longer sequence lengths (Phang et al., 2022).
## 7 Conclusion
In conclusion, we propose a novel task that uses transcripts from earnings conference calls to predict future earnings surprises. We formulate the problem as a long document classification task and explore a variety of different approaches to address it. While the length and language of the calls presents challenges for generic pretrained language models, we establish several strong baselines. We demonstrate that it is possible to predict companies' future earnings surprises with reasonable accuracy from the solely the text of their earnings conference calls. Further, we probe the model through multiple interpretability methods to uncover intuitive linguistic features that go beyond traditional sentiment analysis.
## Limitations
Our experiments demonstrate that it is possible to analyze company executive and analyst language during earnings calls and use it to predict future earnings surprises with reasonable accuracy that is well above random chance. We acknowledge that the dataset contains events that result in significant (in magnitude) earnings surprises so the performance numbers do not directly translate to a live trading setting in which many events do not result in material surprises. We also note that predicting future earnings surprises is correlated with but not equivalent to predicting future stock returns so more work must be done to translate our results into an actual trading strategy that is out of the scope of this paper.
## Ethics Statement
We acknowledge that our Earnings Conference Call dataset contains English transcripts from the largest US-based companies so it is possible that some populations may be underrepresented in this sample. We plan to extend this work to international companies and conference calls held in other languages in the future.
## Acknowledgements
We thank the anonymous reviewers for their thoughtful comments. We would also like to thank AJO Vista and FactSet for providing access to and permission to release the data. The authors are solely responsible for the content and views expressed in this publication do not reflect those of the affiliated institutions.
## References
Iz Beltagy, Matthew E Peters, and Arman Cohan.
2020. Longformer: The long-document transformer. *arXiv preprint arXiv:2004.05150*.
Stephen Brown, Stephen A Hillegeist, and Kin Lo.
2004. Conference calls and information asymmetry.
Journal of Accounting and Economics, 37(3):343–
366.
Brian J Bushee, Ian D Gow, and Daniel J Taylor. 2018.
Linguistic complexity in firm disclosures: Obfuscation or information? *Journal of Accounting Research*, 56(1):85–121.
Ilias Chalkidis, Manos Fergadiotis, Prodromos Malakasiotis, Nikolaos Aletras, and Ion Androutsopoulos. 2020. Legal-bert: The muppets straight out of law school. *arXiv preprint arXiv:2010.02559*.
Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Édouard Grave, Myle Ott, Luke Zettlemoyer, and Veselin Stoyanov. 2020. Unsupervised cross-lingual representation learning at scale. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8440–
8451.
Zihang Dai, Zhilin Yang, Yiming Yang, Jaime G Carbonell, Quoc Le, and Ruslan Salakhutdinov. 2019.
Transformer-xl: Attentive language models beyond a fixed-length context. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2978–2988.
Jacob Devlin, Ming Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language understanding. In *Proceedings of NAACL-HLT*, pages 4171–
4186.
Jeffrey T Doyle, Russell J Lundholm, and Mark T Soliman. 2006. The extreme future stock returns following i/b/e/s earnings surprises. Journal of Accounting Research, 44(5):849–887.
Richard M. Frankel, Jared N. Jennings, and Joshua A.
Lee. 2018. Using natural language processing to assess text usefulness to readers: The case of conference calls and earnings prediction. SSRN Electronic Journal.
Jiuxiang Gu, Jason Kuen, Vlad I. Morariu, Handong Zhao, Nikolaos Barmpalios, Rajiv Jain, Ani Nenkova, and Tong Sun. 2021. Unified pretraining framework for document understanding. volume 1.
Suchin Gururangan, Ana Marasovic, Swabha ´
Swayamdipta, Kyle Lo, Iz Beltagy, Doug Downey, and Noah A Smith. 2020. Don't stop pretraining:
Adapt language models to domains and tasks. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8342–8360.
Xiaochuang Han and Jacob Eisenstein. 2019. Unsupervised domain adaptation of contextualized embeddings for sequence labeling. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 4238–4248.
Allen H Huang, Hui Wang, and Yi Yang. 2022. Finbert: A large language model for extracting information from financial text. *Contemporary Accounting* Research.
Itay Kama. 2009. On the market reaction to revenue and earnings surprises. *Journal of Business Finance* & Accounting, 36(1-2):31–50.
Zheng Tracy Ke, Bryan T Kelly, and Dacheng Xiu.
2019. Predicting returns with text data. Technical report, National Bureau of Economic Research.
Nikita Kitaev, Łukasz Kaiser, and Anselm Levskaya.
2020. Reformer: The efficient transformer. *arXiv* preprint arXiv:2001.04451.
David F Larcker and Anastasia A Zakolyukina. 2012.
Detecting deceptive discussions in conference calls. Journal of Accounting Research, 50(2):495–540.
Henry A Latane and Charles P Jones. 1979. Standardized unexpected earnings–1971-77. The journal of Finance, 34(3):717–724.
Jinhyuk Lee, Wonjin Yoon, Sungdong Kim, Donghyeon Kim, Sunkyu Kim, Chan Ho So, and Jaewoo Kang. 2020. Biobert: a pre-trained biomedical language representation model for biomedical text mining. *Bioinformatics*, 36(4):1234–1240.
Feng Li. 2010. The information content of forwardlooking statements in corporate filings-a naïve bayesian machine learning approach. *Journal of Accounting Research*, 48.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019.
Roberta: A robustly optimized bert pretraining approach. *arXiv preprint arXiv:1907.11692*.
Tim Loughran and Bill Mcdonald. 2011. When is a liability not a liability? textual analysis, dictionaries, and 10-ks. *Journal of Finance*, 66.
Andrew Maas, Raymond E Daly, Peter T Pham, Dan Huang, Andrew Y Ng, and Christopher Potts. 2011. Learning word vectors for sentiment analysis. In Proceedings of the 49th annual meeting of the association for computational linguistics: Human language technologies, pages 142–150.
Andriy Mulyar, Elliot Schumacher, Masoud Rouhizadeh, and Mark Dredze. 2019. Phenotyping of clinical notes with improved document classification models using contextualized neural language models. *arXiv preprint arXiv:1910.13664*.
Raghavendra Pappagari, Piotr Zelasko, Jesus Villalba, Yishay Carmiel, and Najim Dehak. 2019. Hierarchical transformers for long document classification.
In *2019 IEEE Automatic Speech Recognition and* Understanding Workshop (ASRU), pages 838–844. IEEE.
Jeffrey Pennington, Richard Socher, and Christopher D
Manning. 2014. Glove: Global vectors for word representation. In *Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP)*, pages 1532–1543.
Jason Phang, Yao Zhao, and Peter J Liu. 2022.
Investigating efficiently extending transformers for long input summarization. arXiv preprint arXiv:2208.04347.
S McKay Price, James S Doran, David R Peterson, and Barbara A Bliss. 2012. Earnings conference calls and stock returns: The incremental informativeness of textual tone. *Journal of Banking & Finance*, 36(4):992–1011.
Yu Qin and Yi Yang. 2019. What you say and how you say it matters: Predicting stock volatility using verbal and vocal cues. In *Proceedings of the 57th Annual Meeting of the Association for Computational* Linguistics, pages 390–401.
Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. 2016. "why should i trust you?" explaining the predictions of any classifier. In *Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining*,
pages 1135–1144.
Gerard Salton and Christopher Buckley. 1988. Termweighting approaches in automatic text retrieval. *Information processing & management*, 24(5):513–
523.
Yunxin Sang and Yang Bao. 2022. Predicting corporate risk by jointly modeling company networks and dialogues in earnings conference calls. arXiv preprint arXiv:2206.06174.
Ramit Sawhney, Piyush Khanna, Arshiya Aggarwal, Taru Jain, Puneet Mathur, and Rajiv Ratn Shah.
2020a. Voltage: Volatility forecasting via text audio fusion with graph convolution networks for earnings calls. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing
(EMNLP), pages 8001–8013.
Ramit Sawhney, Puneet Mathur, Ayush Mangal, Piyush Khanna, Rajiv Ratn Shah, and Roger Zimmermann.
2020b. Multimodal multi-task financial risk forecasting. In *Proceedings of the 28th ACM international conference on multimedia*, pages 456–465.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. *Advances in neural information processing systems*, 30.
Ho Chung Wu, Robert Wing Pong Luk, Kam Fai Wong, and Kui Lam Kwok. 2008. Interpreting tf-idf term weights as making relevance decisions. ACM Transactions on Information Systems, 26.
Linyi Yang, Jiazheng Li, Ruihai Dong, Yue Zhang, and Barry Smyth. 2022. Numhtml: Numeric-oriented hierarchical transformer model for multi-task financial forecasting. *arXiv preprint arXiv:2201.01770*.
Linyi Yang, Tin Lok James Ng, Barry Smyth, and Riuhai Dong. 2020a. Html: Hierarchical transformerbased multi-task learning for volatility prediction.
In *Proceedings of The Web Conference 2020*, pages 441–451.
Liu Yang, Mingyang Zhang, Cheng Li, Michael Bendersky, and Marc Najork. 2020b. Beyond 512 tokens: Siamese multi-depth transformer-based hierarchical encoder for long-form document matching.
In *Proceedings of the 29th ACM International Conference on Information & Knowledge Management*,
pages 1725–1734.
Zichao Yang, Diyi Yang, Chris Dyer, Xiaodong He, Alex Smola, and Eduard Hovy. 2016. Hierarchical attention networks for document classification.
In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1480–1489, San Diego, California. Association for Computational Linguistics.
Manzil Zaheer, Guru Guruganesh, Avinava Dubey, Joshua Ainslie, Chris Alberti, Santiago Ontanon, Philip Pham, Anirudh Ravula, Qifan Wang, Li Yang, and Amr Ahmed. 2020. Big bird: Transformers for longer sequences. volume 2020-December.
Xingxing Zhang, Furu Wei, and Ming Zhou. 2019.
Hibert: Document level pre-training of hierarchical bidirectional transformers for document summarization. In *Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics*, pages 5059–5069.
## A Appendix
Transformer-based models in PyTorch and source all pretrained checkpoints from HuggingFace.
For the BOW models, we remove stop words, create both unigrams and bigrams from the resulting 50,000 most frequent phrases vocabulary, and apply Term Frequency-Inverse Document Frequency weighting (TF-IDF; Salton and Buckley, 1988; Wu et al., 2008) to create features. For the CNN model, we initialize the word embeddings with pretrained weights from GLOVE (100D;
Pennington et al., 2014), select the 50,000 most frequent words as our vocabulary, and truncate all transcripts after the first 12,000 words.
We perform all experiments on a single Tesla A100 GPU with 40GB in memory. We use AdamW to optimize all parameters. We tune the hypeparamters of each neural model by conducting a limited grid search over learning rates
∈ {5e − 6, 1e − 5, 5e − 5}, weight decay ∈ {1e −
4, 1e − 3, 1e − 2} and batch size ∈ {32, 64, 128},
based off validation set accuracy score. For computational constraints, we train all models using FP16 precision training, and apply gradient checkpointing to satisfy GPU memory constraints, and clip gradient norms. It takes approximately 10 minutes per epoch of supervised finetuning for the Hierarchical Transformer models and 15 minutes per epoch of training for BigBird with block size of 64.
We conduct the MLM pretraining process for BigBird on the training set for a maximum of 10 epochs or until the MLM loss on the validation set increases. This pretraining process takes multiple days of run time and indicates the difficulty of pretraining these Efficient Transformers models on domain relevant text. We tune the block size over {32, 64, 84} and the number of random blocks over {3, 4, 5}.
## A.1 Implementation Details And Training Process
We use Scikit-learn and XGBoost to develop the non-neural baseline models. We develop all
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
✓ A1. Did you describe the limitations of your work?
Yes, in the dedicated limitations section.
✓ A2. Did you discuss any potential risks of your work?
Yes, in the dedicated limitations and ethics statement sections.
✓ A3. Do the abstract and introduction summarize the paper's main claims?
Left blank.
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✓ **Did You Use Or Create Scientific Artifacts?** Left Blank.
✓ B1. Did you cite the creators of artifacts you used?
(3)
✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
(3)
✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
(3)
B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
Not applicable. The text transcripts are from publicly available earnings conference calls for US-based public companies.
✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
(3.1)
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
(3.1)
## C ✓ **Did You Run Computational Experiments?** (5)
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
(4.6) and Appendix (A)
The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance.
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
(4.6) and Appendix (A)
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
(5.1)
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
Appendix (A)
D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
Not applicable. Left blank.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? Not applicable. Left blank.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
Not applicable. Left blank.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
Not applicable. Left blank.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
Not applicable. Left blank. |
vincent-etal-2023-mtcue | {MTC}ue: Learning Zero-Shot Control of Extra-Textual Attributes by Leveraging Unstructured Context in Neural Machine Translation | https://aclanthology.org/2023.findings-acl.521 | Efficient utilisation of both intra- and extra-textual context remains one of the critical gaps between machine and human translation. Existing research has primarily focused on providing individual, well-defined types of context in translation, such as the surrounding text or discrete external variables like the speaker{'}s gender. This work introduces MTCue, a novel neural machine translation (NMT) framework that interprets all context (including discrete variables) as text. MTCue learns an abstract representation of context, enabling transferability across different data settings and leveraging similar attributes in low-resource scenarios. With a focus on a dialogue domain with access to document and metadata context, we extensively evaluate MTCue in four language pairs in both translation directions. Our framework demonstrates significant improvements in translation quality over a parameter-matched non-contextual baseline, as measured by BLEU (+0.88) and Comet (+1.58). Moreover, MTCue significantly outperforms a {``}tagging{''} baseline at translating English text. Analysis reveals that the context encoder of MTCue learns a representation space that organises context based on specific attributes, such as formality, enabling effective zero-shot control. Pre-training on context embeddings also improves MTCue{'}s few-shot performance compared to the {``}tagging{''} baseline. Finally, an ablation study conducted on model components and contextual variables further supports the robustness of MTCue for context-based NMT. |
## Mtcue**: Learning Zero-Shot Control Of Extra-Textual Attributes** By Leveraging Unstructured Context In Neural Machine Translation
Sebastian Vincent and **Robert Flynn** and **Carolina Scarton**
Department of Computer Science, University of Sheffield, UK
{stvincent1,rjflynn2,c.scarton}@sheffield.ac.uk
## Abstract
Efficient utilisation of both intra- and extratextual context remains one of the critical gaps between machine and human translation. Existing research has primarily focused on providing individual, well-defined types of context in translation, such as the surrounding text or discrete external variables like the speaker's gender. This work introduces MTCUE, a novel neural machine translation (NMT) framework that interprets all context (including discrete variables) as text. MTCUE learns an abstract representation of context, enabling transferability across different data settings and leveraging similar attributes in low-resource scenarios. With a focus on a dialogue domain with access to document and metadata context, we extensively evaluate MTCUE in four language pairs in both translation directions. Our framework demonstrates significant improvements in translation quality over a parametermatched non-contextual baseline, as measured by BLEU (+0.88) and COMET (+1.58). Moreover, MTCUE significantly outperforms a "tagging" baseline at translating English text. Analysis reveals that the context encoder of MTCUE
learns a representation space that organises context based on specific attributes, such as formality, enabling effective zero-shot control. Pretraining on context embeddings also improves MTCUE's few-shot performance compared to the "tagging" baseline. Finally, an ablation study conducted on model components and contextual variables further supports the robustness of MTCUE for context-based NMT.
github.com/st-vincent1/MTCue
## 1 Introduction
Research in neural machine translation (NMT) has advanced considerably in recent years, much owing to the release of the Transformer architecture
(Vaswani et al., 2017), subword segmentation (Sennrich et al., 2016c) and back-translation (Sennrich et al., 2016b). This resulted in claims of human parity in machine translation (Hassan et al., 2018),
![0_image_0.png](0_image_0.png)
which in turn prompted researchers to look beyond the sentence level: at how a translation still needs to be compatible with the context it arises in.
The task of contextual adaptation to more nuanced extra-textual variables like the description of the discourse situation has been largely overlooked, in spite of earlier work suggesting that conversational machine translation may benefit from such fine-grained adaptations (van der Wees et al.,
2016). Most existing work on contextual NMT has focused on document-level context instead, aiming to improve the coherence and cohesion of the translated document (e.g. Tiedemann and Scherrer, 2017). Some research has successfully adapted NMT to extra-textual context variables using supervised learning frameworks on labelled datasets, targeting aspects such as gender (Vanmassenhove et al., 2018; Moryossef et al., 2019; Vincent et al.,
2022b), formality (Sennrich et al., 2016a; Nadejde et al., 2022), translators' or speakers' style (Michel and Neubig, 2018a; Wang et al., 2021b) and translation length (Lakew et al., 2019), sometimes controlling multiple attributes simultaneously (Schioppa et al., 2021; Vincent et al., 2022b). However, to our knowledge, no prior work has attempted to model the impact of continuous extra-textual contexts in translation or combined the intra- and extratextual contexts in a robust framework. This is 8210 problematic since translating sentences without or with incomplete context is akin to a human translator working with incomplete information.
Similarly, only a handful of earlier research has contemplated the idea of controlling these extratextual attributes in a zero-shot or few-shot fashion (Moryossef et al., 2019; Anastasopoulos et al.,
2022); such approaches are essential given the difficulty of obtaining the labels required for training fully supervised models.
In some domains, extra-textual context is paramount and NMT systems oblivious to this information are expected to under-perform. For instance, for the dubbing and subtitling domain, where translated shows can span different decades, genres, countries of origin, etc., a one-size-fits-all model is limited by treating all input sentences alike. In this domain, there is an abundance of various metadata (not just document-level data) that could be used to overcome this limitation. However, such adaptation is not trivial: (i) the metadata often comes in quantities too small for training and with missing labels; (ii) it is expressed in various formats and types, being difficult to use in a standard pipeline; (iii) it is difficult to quantify its exact
(positive) effect.
In this paper, we address (i) and (ii) by proposing MTCUE (Machine Translation with Contextual universal embeddings), a novel NMT framework that bridges the gap between training on discrete control variables and intra-textual context as well as allows the user to utilise metadata of various lengths in training, easing the need for laborious data editing and manual annotation (Figure 1). During inference, when context is provided verbatim, MTCUE falls back to a code-controlled translation model; by vectorising the inputs, it exhibits competitive performance for noisy phrases and learns transferrability across contextual tasks. While (iii)
is not directly addressed, our evaluation encompasses two translation quality metrics and two external test sets of attribute control, showing the impact on both translation quality and capturing relevant contextual attributes.
MTCUE can generalise to unseen context variables, achieving 100% accuracy at a zero-shot formality controlling task; it learns to map embeddings of input contexts to discrete phenomena
(e.g. formality), increasing explainability; and it exhibits more robust few-shot performance at multiattribute control tasks than a "tagging" baseline.
The main contributions of this work are:
1. MTCUE (§2): a novel framework for **combining (un)structured intra- and extra-textual**
context in NMT that significantly improves translation quality for four language pairs in both directions: English (EN) to/from German
(DE), French (FR), Polish (PL) and Russian (RU).
2. A comprehensive evaluation, showing that MTCUE can be primed to exhibit **excellent**
zero-shot and few-shot performance at downstream contextual translation tasks (§4 and §5).
3. Pre-trained models, code, and an organised version of the OpenSubtitles18 (Lison et al., 2018)
dataset **with the annotation of six metadata**
are made available.
This paper also presents the experimental settings (§3), related work (§6) and conclusions (§7).
## 2 Proposed Architecture: Mtcue
MTCUE is an encoder-decoder Transformer architecture with two encoders: one dedicated for contextual signals and one for inputting the source text.
The signals from both encoders are combined using parallel cross-attention in the decoder. Below we describe how context inputs are treated in detail, and later in §2.2 and §2.3 we describe the context encoder and context incorporation, respectively.
## 2.1 Vectorising Contexts
Context comes in various formats: for example, the speaker's gender or the genre of a film are often supplied in corpora as belonging to sets of predetermined discrete classes, whereas plot descriptions are usually provided as plain text (and could not be treated as discrete without significant loss of information). To leverage discrete variables as well as short and long textual contexts in a unified framework, we define a **vectorisation function** that maps each context to a single meaningful vector, yielding a matrix Ec×r, where c is the number of contexts and r is the embedding dimension. The function is deterministic (the same input is always embedded in the same way) and semantically coherent (semantically similar inputs receive similar embeddings). We use a sentence embedding model
(Reimers and Gurevych, 2019) for vectorisation, which produces embeddings both deterministic and semantically coherent. Motivated by Khandelwal et al. (2018) and O'Connor and Andreas (2021)
who report that generation models mostly use general topical information from past context, ignor-
![2_image_0.png](2_image_0.png)
ing manipulations such as shuffling or removing non-noun words, we hypothesise that sentence embeddings can effectively compress the relevant context information into a set of vectors, which, when processed together within a framework, will formulate an abstract representation of the dialogue context. We select the MINILMV2 sentence embedding model (Wang et al., 2021a), which we access via the sentence-transformers library;1 a similar choice was made concurrently in Vincent et al. (2023). In the experiments, we also refer to DISTILBERT (Sanh et al., 2019) which is used by one of our baselines, and a discrete embedding function which maps unique contexts to the same embeddings but has no built-in similarity feature.
For any sample, given a set of its k textual contexts C = [c1*, ...c*k], we vectorise each one separately using the method described above. The resulting array of vectors is the input we supply to the context encoder in MTCUE.
## 2.2 Context Encoder
Processing vectorised contexts The context encoder of MTCUE is a standard self-attention encoder with a custom input initialisation. Its inputs are sentence embeddings of context (§2.1) projected to the model's dimensions with a linear layer
(384 → d*model*). In preliminary experiments, we observe that the first layer of the context encoder receives abnormally large input values, which sometimes leads to the explosion of the query (Q) and key (K) dot product QKT. We prevent this by replacing the scaled dot product attention with querykey normalisation (Henry et al., 2020): applying L2 normalisation of Q and K before the dot product, and replacing the scaling parameter √d with a learned one, initialised to a value based on training data lengths.2 Positional embeddings We use positional context embeddings to (a) indicate the distance of a past utterance to the source sentence and (b)
to distinguish metadata inputs from document information. In particular, when translating the source sentence si at position i in the document, a sentence distance positional embedding
(*P OS*) is added to the embedding representations of each past sentence si−j , with j ∈ [0, t]
where t is the maximum allowed context distance:
e′(si−j , j) = e(si−j ) + *P OS*(j). Metadata contexts (m0*, . . . , m*n) do not receive positional em2An alternative solution applies layer normalisation to the input of the first layer, but we found that this degraded performance w.r.t. QK-NORM.
1https://sbert.net/, accessed 1/5/23.
beddings since their order is irrelevant. The final vectorised input of the context encoder is:
e′(si, 0), e′(si−1, 1), . . . , e′(si−t, t), e(m0)*, . . . , e*(mn).
## 2.3 Context Incorporation
The outputs of the context and source encoders
(respectively C and S) are combined in the decoder using **parallel attention** (Libovický et al., 2018).
Let the output of the decoder self-attention be T .
Let Tout = FFN(T′) + T′, where T′is the multihead attention output; i.e. Tout is T′ with the feedforward layer and the residual connection applied.
In a non-contextual Transformer, source and target representations are combined with cross-attention:
T
′ = mAttn(kv = S, q = T )
In contrast, parallel attention computes individual cross-attention of T with S and C and then adds them together:
$$\begin{array}{l}{{S^{\prime}=\mathrm{mA}\mathrm{t}\mathrm{n}(k v=S,q=T)}}\\ {{{\mathcal{C}}^{\prime}=\mathrm{mA}\mathrm{t}\mathrm{n}(k v={\mathcal{C}},q=T)}}\\ {{T^{\prime}={\mathcal{C}}^{\prime}+{\mathcal{S}}^{\prime}}}\end{array}$$
Parallel attention is only one of many combination strategies which can be used, and in preliminary experiments we found the choice of the strategy to have a minor impact on performance.
## 3 Experimental Setup
| Data type | EN↔DE | EN↔FR | EN↔PL | EN↔RU |
|-------------------------|---------|---------|---------|---------|
| Source & target | 5.3M | 14.7M | 12.9M | 12.4M |
| metadata Genre | 45.3% | 57.8% | 60.5% | 73.4% |
| PG rating | 35.9% | 46.9% | 48.8% | 62.3% |
| Writer(s) | 45.3% | 57.1% | 58.9% | 71.7% |
| Year | 45.3% | 57.8% | 60.5% | 73.7% |
| Country | 37.7% | 42.9% | 45.7% | 42.7% |
| Plot description | 43.4% | 57.1% | 59.7% | 72.6% |
| previous dialogue n − 1 | 60.1% | 68.0% | 63.7% | 73.6% |
| n − 2 | 42.0% | 51.2% | 46.4% | 57.9% |
| n − 3 | 31.2% | 40.1% | 35.5% | 46.9% |
| n − 4 | 23.9% | 32.2% | 28.0% | 38.6% |
| n − 5 | 18.7% | 26.2% | 22.4% | 32.2% |
## 3.1 Data: The Opensubtitles18 Corpus
Table 1: Data quantities for the extracted OpenSubtitles18 corpus. An average of 81% samples has at least one context input.
The publicly available OpenSubtitles183corpus (Lison et al., 2018), hereinafter OPENSUB-TITLES, is a subtitle dataset in .xml format with 3Created from data from https://opensubtitles.org.
| Key | Value |
|-------------|-------------------------------------------------------------------------------------------------------------------------------------------|
| Source (EN) | This is the Angel of Death, big daddy reaper. |
| Target (PL) | To anioł smierci. Kosiarz przez wielkie "k". ´ |
| PG rating | PG rating: TV-14 |
| Released | Released in 2009 |
| Writers | Writers: Eric Kripke, Ben Edlund, Julie Siege |
| Plot | Dean and Sam get to know the whereabouts of Lucifer and want to hunt him down. But Lucifer is well prepared and is working his own plans. |
| Genre | Drama, Fantasy, Horror |
| Country | United States, Canada |
IMDb ID attribution and timestamps. It is a mix of original and user-submitted subtitles for movies and TV content. Focusing on four language pairs (EN↔{DE,FR,PL,RU}), we extract parallel sentence-level data with source and target document-level features (up to 5 previous sentences) using the timestamps (see Appendix A).
We also extract a range of metadata by matching the IMDb ID against the Open Movie Database
(OMDb) API.4 Table 1 shows training data quantities and portions of annotated samples per context while Table 2 shows an example of the extracted data. We select six metadata types that we hypothesise to convey useful extra-textual information:
plot description (which may contain useful topical information), *genre* (which can have an impact on the language used), *year of release* (to account for the temporal dimension of language), *country of* release (to account for regional differences in expression of English), *writers* (to consider writers' style), *PG rating* (which may be associated with e.g. the use of adult language). For validation and testing, we randomly sample 10K sentence pairs each from the corpus, based on held-out IMDb IDs.
Preprocessing The corpus is first detokenised and has punctuation normalised (using Moses scripts (Koehn et al., 2007)). Then a custom cleaning script is applied, which removes trailing dashes, unmatched brackets and quotation marks, and fixes common OCR spelling errors. Finally, we perform subword tokenisation via the BPE algorithm with Sentencepiece (Kudo and Richardson, 2018).
Film metadata (which comes from OMDb) is left intact except when the fields contain non-values such as "N/A", "Not rated", or if a particular field is not sufficiently descriptive (e.g. a PG rating field represented as a single letter "R"), in which case 4https://omdbapi.com/, accessed 1/5/23.
we enrich it with a disambiguating prefix (e.g. "R"
→ "PG rating: R"). Regardless of the trained language pair, metadata context is provided in English
(which here is either the source or target language).
Document-level context is limited to source-side context. Since for *→EN language pairs the context input comes in two languages (e.g. English metadata and French dialogue), we use multilingual models to embed the context in these pairs.
## 3.2 Evaluation
We evaluate the presented approach with the general in-domain test set as well as two external contextual tasks described in this section.
Translation quality The approaches are evaluated against an in-domain held-out test set of 10K
sentence pairs taken from OPENSUBTITLES. As metrics, we use BLEU5(Papineni et al., 2002) and COMET6(Rei et al., 2020).
## Control Of Multiple Attributes About Dialogue
participants (EAMT22) The EAMT22 task, introduced by Vincent et al. (2022b), evaluates a model's capability to control information about dialogue participants in English-to-Polish translation. The task requires generating hypotheses that align with four attributes: gender of the speaker and interlocutor(s) (masculine/feminine/mixed), number of interlocutors (one/many), and formality (formal/informal). These attributes can occur in a total of 38 unique combinations. We investigate whether MTCUE can learn this task through zero-shot learning (pre-training on other contexts) or through fewshot learning (when additionally fine-tuned on a constrained number of samples).
To prepare the dataset, we use scripts provided by Vincent et al. (2022b) to annotate OPENSUB-TITLES with the relevant attributes, resulting in a corpus of 5.1M annotated samples. To leverage the context representation in MTCUE, we transcribe the discrete attributes to natural language by creating three sentences that represent the context. For example, if the annotation indicates that the speaker is male, the interlocutor is a mixed-gender group, and the register is formal, we create the following context: (1) "I am a man", (2) "I'm talking to a group of people" and (3) "Formal".
We train seven separate instances of MTCUE
using different artificial data settings. Each set-5Computed with SacreBLEU (Post, 2018).
6Computed using the wmt20-comet-da model.
ting contains the same number of samples (5.1M)
but a varying number of **annotated** samples. To address class imbalances in the dataset (e.g. *masculine speaker* occurring more often than feminine speaker) and ensure equal representation of the 38 attribute combinations, we collect multiples of these combinations. We select sample numbers to achieve roughly equal logarithmic distances: 1, 5, 30, 300, 3K and 30K supervised samples per each of 38 combinations, yielding exactly 38, 180, 1, 127, 10, 261, 81, 953 and 510, 683 samples respectively. Including the zero-shot and full supervision (5.1M cases), this results in a total of eight settings. Each model is trained with the same hyperparameters as MTCUE, and on the same set of 5.1M samples, with only the relevant number of samples annotated (non-annotated samples are given as source-target pairs without contexts). We compare our results against our re-implementation of the TAGGING approach which achieved the best performance in the original paper (i.e. Vincent et al.,
2022b). We train the TAGGING model in replicas of the eight settings above.
## Zero-Shot Control Of Formality (Iwslt22) We
experiment with the generalisation of MTCUE
to an unseen type of context: formality. In the IWSLT22 formality control task (Anastasopoulos et al., 2022), the model's challenge is to produce hypotheses agreeing with the desired formality
(formal/informal). For the English-to-German language pair, the task provides a set of paired examples (each source sentence is paired with a formal reference and an informal one), to a total of 400 validation and 600 test examples; for the Englishto-Russian pair, only the 600 test examples are provided. We test the capacity of MTCUE to control formality zero-shot, given a textual cue as context input.7
## 3.3 Baselines
In our experiments, we compare MTCUE with three types of baselines:
1. BASE and BASE-PM. These are pre-trained translation models that match MTCUE either in the shape of the encoder-decoder architecture (BASE) or in terms of the total number of parameters (BASE-PM). For BASE-PM, the extra parameters are obtained from enhancing the source encoder, increasing the number 7We describe the process of choosing the context input for evaluation in Appendix D.
| Model | Params dmodel | Layers | h | FFN dim. | GPU Hour/Epoch | Epochs to best | | | | | |
|--------------|-----------------|----------|-----|------------|------------------|------------------|------|------|------|-------------|-------------|
| Cxt | Src | Dec | Cxt | Src | Dec | | | | | | |
| BASE | 66M | 512 | − | 6 | 6 | 8 | − | 2048 | 2048 | − | − |
| BASE-PM 107M | 512 | − | 10 | 6 | 8 | − | 4096 | 2048 | − | − | |
| TAGGING | 107M | 512 | − | 10 | 6 | 8 | − | 4096 | 2048 | 0.74 ± 0.35 | 6.13 ± 4.09 |
| NOVOTNEY-CUE | 99M | 512 | 6 | 6 | 6 | 8 | 2048 | 2048 | 2048 | 1.29 ± 0.56 | 9.13 ± 3.60 |
| MTCUE | 105M | 512 | 6 | 6 | 6 | 8 | 2048 | 2048 | 2048 | 0.81 ± 0.39 | 9.38 ± 4.57 |
![5_image_0.png](5_image_0.png)
of layers (6 → 10) and doubling the feedforward dimension (2048 → 4096).
2. T**AGGING**. Following previous work (e.g.
Schioppa et al., 2021; Vincent et al., 2022b),
we implement a model that assigns a discrete embedding to each unique context value. Architecturally, the model matches BASE-PM.
The tags are prepended to feature vectors from the source context and then together fed to the decoder.
3. NOVOTNEY-CUE. This baseline is a reimplementation of the CUE vectors architecture (Novotney et al., 2022) for NMT. It utilises DISTILBERT for vectorisation and averages the context feature vectors to obtain the decoder input. In contrast, MTCUE employs a parallel attention strategy.
In experiments on formality control, we also report results from the two submissions to the IWSLT22 task, both implementing a supervised and a zero-shot approach:
1. Vincent et al. (2022a). This (winning) submission combines the TAGGING approach with formality-aware re-ranking and data augmentation. The authors augment the original formality-labelled training samples by matching sentence pairs from larger corpora against samples of specific formality (akin to the Moore-Lewis algorithm described in Moore and Lewis, 2010). Their zero-shot approach relies on heuristically finding a suitable sample of formality-annotated data similar to the provided set and performing the same algorithm above.
2. Rippeth et al. (2022) who fine-tune large pretrained multilingual MT models with additive control (Schioppa et al., 2021) on data with synthetic formality labels obtained via rulebased parsers and classifiers.
## 3.4 Implementation And Hyperparameters
![5_Image_1.Png](5_Image_1.Png)
We implement MTCUE and all its components in FAIRSEQ, and use HuggingFace (Wolf et al., 2020)
for vectorising contexts. We use hyperparameters recommended by FAIRSEQ, plus optimise the learning rate and the batch size in a grid search. We found that a learning rate of 0.0003 and a batch size of simulated 200K tokens worked best globally. Table 3 presents the architecture details and runtimes for the models. All training is done on a single A100 80GB GPU, one run per model. We use early stopping based on validation loss with a patience of 5.
## 4 Results
Translation quality Results in Table 4 show that MTCUE beats all non-contextual baselines in translation quality, achieving an average improvement of +1.51 BLEU/+3.04 COMET over BASE and +0.88/+1.58 over BASE-PM. It is also significantly better than NOVOTNEY-CUE
(+0.46/+0.66). MTCUE achieves comparable results to the parameter-matched TAGGING model, consistently outperforming it on all language directions from English, and being outperformed by it on directions into English. Since the primary difference between the two models is that MTCUE
sacrifices more parameters to process context, and TAGGING uses these parameters for additional processing of source text, we hypothesise that the difference in scores is due to the extent to which context is a valuable signal for the given language pair:
it is less important in translation into English. This is supported by findings from literature: English is a language that does not grammatically mark phenomena such as gender (Stahlberg et al., 2007).
The largest quality improvements with MTCUE
are obtained on EN-DE (+1.66/+4.14 vs BASEPM and +1.14/+1.70 vs TAGGING) and EN-FR
(+2.23/+3.32 vs BASE-PM and +0.80/+0.62 vs TAGGING) language pairs. Contrastively, the smallest improvements against BASE-PM are obtained on the RU-EN pair. MTCUE is outperformed by
| Model | EN→DE | EN→FR | EN→PL | EN→RU | DE→EN | FR→EN | PL→EN | RU→EN | Average | | | | | | | | |
|-----------------------|------------|------------|------------|------------|------------|------------|-----------------------|---------|-----------|-------|-------|-------|-------|-------|-------|-------|-------|
| BLEU COMET | BLEU COMET | BLEU COMET | BLEU COMET | BLEU COMET | BLEU COMET | BLEU COMET | BLEU COMET BLEU COMET | | | | | | | | | | |
| Baselines *BASE 33.60 | 45.90 | 34.54 | 46.92 | 28.08 | 58.52 | 31.37 | 62.94 | 39.53 | 59.56 | 35.46 | 55.10 | 34.42 | 50.38 | 39.37 | 55.99 | 34.65 | 54.41 |
| *BASE-PM 34.36 | 46.77 | 35.31 | 48.87 | 28.66 | 60.97 | 32.40 | 64.55 | 40.32 | 60.88 | 36.16 | 56.28 | 35.03 | 51.77 | 40.04 | 56.86 | 35.28 | 55.87 |
| TAGGING 34.88 | 49.21 | 36.74 | 51.57 | 29.08 | 64.29 | 32.32 | 65.12 | 41.52 | 62.63 | 37.10 | 57.41 | 36.19 | 53.46 | 40.33 | 57.14 | 36.02 | 57.60 |
| NOVOTNEY-CUE 35.30 | 49.83 | 36.75 | 50.52 | 29.09 | 62.69 | 32.36 | 64.90 | 40.86 | 61.91 | 36.51 | 56.21 | 35.28 | 52.17 | 39.44 | 56.08 | 35.70 | 56.79 |
| Proposed MTCUE 36.02 | 50.91 | 37.54 | 52.19 | 29.36 | 63.46 | 33.21 | 65.21 | 40.95 | 61.58 | 36.57 | 56.87 | 35.68 | 52.48 | 39.97 | 56.92 | 36.16 | 57.45 |
TAGGING the most on PL-EN (−0.51/−0.98). As far as training efficiency, MTCUE trains significantly faster than NOVOTNEY-CUE, converging in a similar number of epochs but using significantly less GPU time, on par with TAGGING (Table 3).
Finally, all contextual models considered in this evaluation significantly outperform the parametermatched translation model (BASE-PM), clearly signalling that metadata and document context are an important input in machine translation within this domain, regardless of the chosen approach.
## Control Of Multiple Attributes About Dialogue Participants (Eamt22) Mtcue Achieves
80.25 zero-shot accuracy at correctly translating the speaker and interlocutor attributes, an improvement of 12.08 over the non-contextual baseline, also expressed in increased translation quality (25.22 vs 23.36 BLEU). Furthermore, it bests TAGGING at few-shot performance by 5 to 8 accuracy points, reaching above 90%
accuracy with only 190 of the 5.1M annotated samples (Figure 4). Both TAGGING and MTCUE
perform similarly with more supervised data. The TAGGING model achieves +2 to +3 accuracy points in the 1K to 100K range, while BLEU
remains comparable. We hypothesise that this happens because MTCUE relies strongly on its pre-training prior when context is scarce: this proves useful with little data, but becomes less relevant as more explicitly labelled samples are added. Finally, with full supervision, both models achieve above 99% accuracy.
Zero-shot control of formality (IWSLT22)
MTCUE appears to successfully control the formality of translations in a zero-shot fashion, achieving nearly 100% accuracy on the IWSLT22 test sets across two language pairs, beating all zero-shot models on the EN-RU pair and performing on par with the best supervised model for EN-DE. Notably, both baselines presented in Table 5 were built to
| Model | Supervision | Formal | Informal | Average |
|------------------------------|---------------|----------|------------|-----------|
| Non-context baseline − | 74.5 | 25.5 | 50.0 | |
| Rippeth et al. (2022) | Supervised | 99.4 | 96.5 | 98.0 |
| EN-DE Vincent et al. (2022a) | Supervised | 100.0 | 100.0 | 100.0 |
| MTCUE Zero-shot | 100.0 | 100.0 | 100.0 | |
| Non-context baseline − | 96.4 | 3.6 | 50.0 | |
| Rippeth et al. (2022) | Zero-shot | 100.0 | 1.1 | 50.5 |
| EN-RU Vincent et al. (2022a) | Zero-shot | 99.5 | 85.8 | 92.7 |
| MTCUE Zero-shot | 100.0 | 99.4 | 99.7 | |
target formality specifically, unlike MTCUE which is a general-purpose model.
Following MTCUE's success at controlling formality with sample contexts, we investigate the relationship between context embeddings and their corresponding formality control scores. We consider all 394 unique contexts from the OPENSUB-TITLES validation data, and another 394 document contexts (individual past sentences) at random (indomain). We also use an in-house dataset from a similar domain (dubbing of reality cooking shows with custom annotations of scene contents) and select another 394 metadata and 394 document contexts from there (out-of-domain). We run inference on the IWSLT22 test set with each context individually (1, 576 runs), and use UMAP (McInnes et al.,
2018) to visualise (i) the input embedding from MINILM-V2, (ii) the output vector of the context encoder and (iii) the corresponding formality score
(Figure 3).
We invite the reader to pay attention to the separation of dark and light points in Figure 3b that is not present in Figure 3a. There is a spatial property that arises in the context encoder and is shown by Figure 3b, namely a relationship between the feature vectors from context encoder and formality scores across both domains: contexts yielding translations of the same register tend to be clustered together. This is true for both in-domain data (cir-
(a) MINILM-V2 embeddings. (b) Output of MTCUE's context encoder.
![7_image_0.png](7_image_0.png)
![7_image_1.png](7_image_1.png)
![7_image_2.png](7_image_2.png)
cles) and out-of-domain data (crosses), suggesting that after training this effect generalises to unseen contexts.
For further investigation, we sample a few contexts at random which yield 100% zero-shot accuracy (from the "ends" of the color scale) and find that these contexts tend to have semantic relationships with the type of formality they induce in translations. For example, contexts like "What's wrong with you?", "Wh-what's he doing now?"
yield all-informal translations while "Then why are you still in my office?" or "I can see you're very interested." result in all-formal ones. This confirms our hypothesis: MTCUE's context encoder aligns the semantic representation of the input context to the most likely formality it would produce, akin to a human translator deducing such information from available data. Outside of an evaluation scenario like the present one, MTCUE may therefore be able to predict from the given context what formality style should be used: an effect only facilitated by the context encoder.
To exemplify how the zero-shot performance of MTCUE manifests in practice, we present some examples of outputs for the two tasks in Appendix E.
## 5 Ablation Study
| Ablation | COMET | ZERO-SHOT ACCURACY | | | |
|--------------------|---------|----------------------|--------|-------|-------|
| EN→DE | EN→FR | EN→PL IWSLT22 (DE) | EAMT22 | | |
| Full MTCUE | 46.89 | 54.06 | 62.67 | 100.0 | 81.35 |
| no context encoder | 46.76 | 53.73 | 63.26 | 89.10 | 77.42 |
| no pos. embeddings | 46.68 | 53.81 | 62.47 | 91.65 | 70.91 |
| no MINILM-V2 | 45.32 | 53.42 | 62.55 | 50.00 | 70.16 |
| no metadata | 45.23 | 53.64 | 62.64 | 89.70 | 83.41 |
| no doc.-level data | 46.23 | 53.49 | 61.67 | 68.80 | 74.64 |
| random context | 42.17 | 51.94 | 61.74 | 49.90 | 68.44 |
| no context* | 41.22 | 50.07 | 58.94 | 50.00 | 67.53 |
We discuss the robustness of MTCUE with an ablation study on the model components as well as a complementary ablation on types of context (metadata vs document). We evaluate three language pairs (EN→DE,FR,PL) and report results from single runs (Table 6): COMET score on the OpenSubtitles18 data and zero-shot accuracy at the two contextual tasks (on the **validation** sets in all cases).
Removing the context encoder (output of the linear layer is combined with source straight away)
or the position embeddings has only a minor effect on the COMET score; replacing MINILM-V2 with a discrete embedding function hurts performance the most. Positional embeddings seem more important to the EAMT22 task than IWSLT22 - possibly because EAMT22 focuses on sentence-level phenomena, so the order of past context matters. Replacing MINILM-V2 with a discrete embedding function removes the zero-shot effect in both tasks. An interesting finding is that between metadata and document-level data, it is the latter that brings more improvements to contextual tasks; this means that our model potentially scales to domains without metadata. Finally, using random context degrades performance w.r.t. full model implying that the gains come from signals in data rather than an increase in parameters or training time.
## 6 Related Work
Although contextual adaptation has been discussed in other tasks (e.g. Keskar et al., 2019), in this section we focus on NMT, as well as set our work side by side with research that inspired our approach.
Existing studies on incorporating context into NMT have primarily focused on document-level context. These approaches include multi-encoder models (e.g. Miculicich et al., 2018), cache models
(Kuang et al., 2018), automatic post-editing (Voita et al., 2019a), shallow fusion with a document-level language model (Sugiyama and Yoshinaga, 2021),
data engineering techniques (Lupo et al., 2022) or simple concatenation models (Tiedemann and Scherrer, 2017). Another line of research aims to restrict hypotheses based on certain pre-determined conditions, and this includes formality (Sennrich et al., 2016a), interlocutors' genders (e.g. Vanmassenhove et al., 2018; Moryossef et al., 2019),
or a combination of both (Vincent et al., 2022b). Other conditions include translation length and monotonicity (Lakew et al., 2019; Schioppa et al., 2021), vocabulary usage (Post and Vilar, 2018) or domain and genre (Matusov et al., 2020). While wider contextual adaptation in NMT has been discussed theoretically, most empirical research falls back to gender (Rabinovich et al., 2017) or formality control (Niu et al., 2017). One exception is Michel and Neubig (2018b) who adapt NMT for each of many speakers by adding a "speaker bias" vector to the decoder outputs.
Our work is motivated by the CUE vectors
(Novotney et al., 2022) and their application in personalised language models for film and TV dialogue Vincent et al. (2023). CUE vectors represent context computed by passing sentence embeddings of the input context through a dedicated encoder.
Novotney et al. show that incorporating CUE in language modelling improves perplexity, while Vincent et al. use them to personalise language models for on-screen characters. In contrast, we reformulate CUE for contextual machine translation, provide a detailed analysis of incorporating CUE into the model, emphasise the importance of vectorising the context prior to embedding it, and examine the benefits for zero-shot and few-shot performance in contextual NMT tasks.
## 7 Conclusions
We have presented MTCUE, a new NMT architecture that enables zero- and few-shot control of contextual variables, leading to superior translation quality compared to strong baselines across multiple language pairs (English to others, cf. Table 4).
We demonstrated that using sentence embeddingbased vectorisation functions over discrete embeddings and leveraging a context encoder significantly enhances zero- and few-shot performance on contextual translation tasks. MTCUE outperforms the winning submission to the IWSLT22 formality control task for two language pairs, with zero-shot accuracies of 100.0 and 99.7 accuracy respectively, without relying on any data or modelling procedures for formality specifically. It also improves by 12.08 accuracy points over the non-contextual baseline in zero-shot control of interlocutor attributes in translation at the EAMT22 English-to-Polish task.
Our ablation study and experiments on formality in English-to-German demonstrated that the context encoder is an integral part of our solution. The context embeddings produced by the context encoder of the trained MTCUE can be mapped to specific effects in translation outputs, partially explaining the model's improved translation quality. Our approach emphasises the potential of learning from diverse contexts to achieve desired effects in translation, as evidenced by successful improvements in formality and gender tasks using film metadata and document-level information in the dialogue domain.
## Limitations
While we carried out our research in four language pairs (in both directions), we recognise that these are mainly European languages and each pair is from or into English. The choice of language pairs was limited by the data and evaluation tools we had access to, however as our methods are languageindependent, this research could be expanded to other pairs in the future.
Another limitation is that the work was conducted in one domain (TV subtitles) and it remains for future work to investigate whether similar benefits can be achieved in other domains, though the findings within language modelling with CUE in Novotney et al. (2022) who used a different domain suggest so.
## Ethics Statement
We do not foresee a direct use of our work in an unethical setting. However, as with all research using or relying on LMs, our work is also prone to the same unwanted biases that these models already contain (e.g. social biases). Therefore, when controlling contextual attributes, researchers should be aware of the biases in their data in order to understand the models' behaviour.
## Acknowledgements
This work was supported by the Centre for Doctoral Training in Speech and Language Technologies (SLT) and their Applications funded by UK Research and Innovation [grant number EP/S023062/1]. We acknowledge IT Services at The University of Sheffield for the provision of the High Performance Computing Service. This work was also supported by ZOO Digital.
## References
Antonios Anastasopoulos, Loïc Barrault, Luisa Bentivogli, Marcely Zanon Boito, Ondˇrej Bojar, Roldano Cattoni, Anna Currey, Georgiana Dinu, Kevin Duh, Maha Elbayad, Clara Emmanuel, Yannick Estève, Marcello Federico, Christian Federmann, Souhir Gahbiche, Hongyu Gong, Roman Grundkiewicz, Barry Haddow, Benjamin Hsu, Dávid Javorský, Vera Kloudová, Surafel Lakew, Xutai Ma, Prashant ˘
Mathur, Paul McNamee, Kenton Murray, Maria Nadejde, Satoshi Nakamura, Matteo Negri, Jan ˇ
Niehues, Xing Niu, John Ortega, Juan Pino, Elizabeth Salesky, Jiatong Shi, Matthias Sperber, Sebastian Stüker, Katsuhito Sudoh, Marco Turchi, Yogesh Virkar, Alexander Waibel, Changhan Wang,
and Shinji Watanabe. 2022. Findings of the IWSLT
2022 evaluation campaign. In Proceedings of the 19th International Conference on Spoken Language Translation (IWSLT 2022), pages 98–157, Dublin, Ireland (in-person and online). Association for Computational Linguistics.
Hany Hassan, Anthony Aue, Chang Chen, Vishal Chowdhary, Jonathan Clark, Christian Federmann, Xuedong Huang, Marcin Junczys-Dowmunt, William Lewis, Mu Li, Shujie Liu, Tie-Yan Liu, Renqian Luo, Arul Menezes, Tao Qin, Frank Seide, Xu Tan, Fei Tian, Lijun Wu, Shuangzhi Wu, Yingce Xia, Dongdong Zhang, Zhirui Zhang, and Ming Zhou. 2018.
Achieving Human Parity on Automatic Chinese to English News Translation. *arXiv*.
Alex Henry, Prudhvi Raj Dachapally, Shubham Shantaram Pawar, and Yuxuan Chen. 2020. Query-key normalization for transformers. In *Findings of the* Association for Computational Linguistics: EMNLP
2020, pages 4246–4253, Online. Association for Computational Linguistics.
Nitish Shirish Keskar, Bryan McCann, Lav R. Varshney, Caiming Xiong, and Richard Socher. 2019. CTRL:
A conditional transformer language model for controllable generation. *arXiv*, pages 1–18.
Urvashi Khandelwal, He He, Peng Qi, and Dan Jurafsky.
2018. Sharp nearby, fuzzy far away: How neural language models use context. In *Proceedings of the 56th* Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 284–294, Melbourne, Australia. Association for Computational Linguistics.
Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, Ondˇrej Bojar, Alexandra Constantin, and Evan Herbst. 2007. Moses: Open source toolkit for statistical machine translation. In Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics Companion Volume Proceedings of the Demo and Poster Sessions, pages 177–180, Prague, Czech Republic. Association for Computational Linguistics.
Shaohui Kuang, Deyi Xiong, Weihua Luo, and Guodong Zhou. 2018. Modeling coherence for neural machine translation with dynamic and topic caches. In *Proceedings of the 27th International Conference on* Computational Linguistics, pages 596–606, Santa Fe, New Mexico, USA. Association for Computational Linguistics.
Taku Kudo and John Richardson. 2018. SentencePiece:
A simple and language independent subword tokenizer and detokenizer for neural text processing. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 66–71, Brussels, Belgium.
Association for Computational Linguistics.
Surafel Melaku Lakew, Mattia Di Gangi, and Marcello Federico. 2019. Controlling the output length of neural machine translation. In Proceedings of the 16th International Conference on Spoken Language Translation, Hong Kong. Association for Computational Linguistics.
Jindˇrich Libovický, Jindˇrich Helcl, and David Marecek. ˇ
2018. Input combination strategies for multi-source transformer decoder. In Proceedings of the Third Conference on Machine Translation: Research Papers, pages 253–260, Brussels, Belgium. Association for Computational Linguistics.
Pierre Lison, Jörg Tiedemann, and Milen Kouylekov.
2018. OpenSubtitles2018: Statistical rescoring of sentence alignments in large, noisy parallel corpora.
In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC
2018), Miyazaki, Japan. European Language Resources Association (ELRA).
António Lopes, M. Amin Farajian, Rachel Bawden, Michael Zhang, and André F. T. Martins. 2020.
Document-level neural MT: A systematic comparison. In Proceedings of the 22nd Annual Conference of the European Association for Machine Translation, pages 225–234, Lisboa, Portugal. European Association for Machine Translation.
Lorenzo Lupo, Marco Dinarelli, and Laurent Besacier.
2022. Divide and rule: Effective pre-training for context-aware multi-encoder translation models. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1:
Long Papers), pages 4557–4572, Dublin, Ireland. Association for Computational Linguistics.
Evgeny Matusov, Patrick Wilken, and Christian Herold.
2020. Flexible customization of a single neural machine translation system with multi-dimensional metadata inputs. In Proceedings of the 14th Conference of the Association for Machine Translation in the Americas (Volume 2: User Track), pages 204–
216, Virtual. Association for Machine Translation in the Americas.
Leland McInnes, John Healy, and James Melville. 2018.
Umap: Uniform manifold approximation and projection for dimension reduction.
Paul Michel and Graham Neubig. 2018a. Extreme adaptation for personalized neural machine translation.
ACL 2018 - 56th Annual Meeting of the Association for Computational Linguistics, Proceedings of the Conference (Long Papers), 2:312–318.
Paul Michel and Graham Neubig. 2018b. Extreme adaptation for personalized neural machine translation. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2:
Short Papers), pages 312–318, Melbourne, Australia.
Association for Computational Linguistics.
Lesly Miculicich, Dhananjay Ram, Nikolaos Pappas, and James Henderson. 2018. Document-level neural machine translation with hierarchical attention networks. In *Proceedings of the 2018 Conference on* Empirical Methods in Natural Language Processing, pages 2947–2954, Brussels, Belgium. Association for Computational Linguistics.
Robert C. Moore and William Lewis. 2010. Intelligent selection of language model training data. In *Proceedings of the ACL 2010 Conference Short Papers*,
pages 220–224, Uppsala, Sweden. Association for Computational Linguistics.
Amit Moryossef, Roee Aharoni, and Yoav Goldberg.
2019. Filling Gender & Number Gaps in Neural Machine Translation with Black-box Context Injection.
pages 49–54.
Mathias Müller, Annette Rios, Elena Voita, and Rico Sennrich. 2018. A large-scale test set for the evaluation of context-aware pronoun translation in neural machine translation. In *Proceedings of the Third* Conference on Machine Translation: Research Papers, pages 61–72, Brussels, Belgium. Association for Computational Linguistics.
Maria Nadejde, Anna Currey, Benjamin Hsu, Xing Niu, Marcello Federico, and Georgiana Dinu. 2022.
CoCoA-MT: A dataset and benchmark for contrastive controlled MT with application to formality. In *Findings of the Association for Computational Linguistics:*
NAACL 2022, pages 616–632, Seattle, United States.
Association for Computational Linguistics.
Xing Niu, Marianna Martindale, and Marine Carpuat.
2017. A study of style in machine translation: Controlling the formality of machine translation output.
In *Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing*, pages 2814–2819, Copenhagen, Denmark. Association for Computational Linguistics.
Scott Novotney, Sreeparna Mukherjee, Zeeshan Ahmed, and Andreas Stolcke. 2022. CUE vectors: Modular training of language models conditioned on diverse contextual signals. In *Findings of the Association for* Computational Linguistics: ACL 2022, pages 3368–
3379, Dublin, Ireland. Association for Computational Linguistics.
Joe O'Connor and Jacob Andreas. 2021. What context features can transformer language models use? In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 851–864, Online. Association for Computational Linguistics.
Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 311–318, Philadelphia, Pennsylvania, USA. Association for Computational Linguistics.
Matt Post. 2018. A call for clarity in reporting BLEU
scores. In *Proceedings of the Third Conference on* Machine Translation: Research Papers, pages 186–
191, Brussels, Belgium. Association for Computational Linguistics.
Matt Post and David Vilar. 2018. Fast lexically constrained decoding with dynamic beam allocation for neural machine translation. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1314–1324, New Orleans, Louisiana.
Association for Computational Linguistics.
Ella Rabinovich, Shachar Mirkin, Raj Nath Patel, Lucia Specia, and Shuly Wintner. 2017. Personalized machine translation: Preserving original author traits.
15th Conference of the European Chapter of the Association for Computational Linguistics, EACL 2017
- Proceedings of Conference, 1:1074–1084.
Ricardo Rei, Craig Stewart, Ana C Farinha, and Alon Lavie. 2020. COMET: A neural framework for MT
evaluation. In *Proceedings of the 2020 Conference* on Empirical Methods in Natural Language Processing (EMNLP), pages 2685–2702, Online. Association for Computational Linguistics.
Nils Reimers and Iryna Gurevych. 2019. Sentence-bert:
Sentence embeddings using siamese bert-networks.
In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics.
Elijah Rippeth, Sweta Agrawal, and Marine Carpuat.
2022. Controlling translation formality using pretrained multilingual language models. In Proceedings of the 19th International Conference on Spoken Language Translation (IWSLT 2022), pages 327–340, Dublin, Ireland (in-person and online). Association for Computational Linguistics.
Victor Sanh, Lysandre Debut, Julien Chaumond, and Thomas Wolf. 2019. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. In 5th Workshop on Energy Efficient Machine Learning and Cognitive Computing @ NeurIPS 2019.
Andrea Schioppa, David Vilar, Artem Sokolov, and Katja Filippova. 2021. Controlling machine translation for multiple attributes with additive interventions.
In *Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing*, pages 6676–6696, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Rico Sennrich, Barry Haddow, and Alexandra Birch.
2016a. Controlling politeness in neural machine translation via side constraints. 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL HLT 2016 - Proceedings of the Conference, pages 35–40.
Rico Sennrich, Barry Haddow, and Alexandra Birch.
2016b. Improving neural machine translation models with monolingual data. 54th Annual Meeting of the Association for Computational Linguistics, ACL 2016
- Long Papers, 1:86–96.
Rico Sennrich, Barry Haddow, and Alexandra Birch.
2016c. Neural machine translation of rare words with subword units. *54th Annual Meeting of the* Association for Computational Linguistics, ACL 2016
- Long Papers, 3:1715–1725.
Dagmar Stahlberg, F Braun, L Irmen, and Sabine Sczesny. 2007. Representation of the sexes in language. *Social Communication*, pages 163–187.
Amane Sugiyama and Naoki Yoshinaga. 2021. Contextaware decoder for neural machine translation using a target-side document-level language model. In *Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational* Linguistics: Human Language Technologies, pages 5781–5791, Online. Association for Computational Linguistics.
Jörg Tiedemann and Yves Scherrer. 2017. Neural machine translation with extended context. In Proceedings of the Third Workshop on Discourse in Machine Translation, pages 82–92, Copenhagen, Denmark.
Association for Computational Linguistics.
Marlies van der Wees, Arianna Bisazza, and Christof Monz. 2016. Measuring the effect of conversational aspects on machine translation quality. In *Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers*, pages 2571–2581, Osaka, Japan. The COLING
2016 Organizing Committee.
Eva Vanmassenhove, Christian Hardmeier, and Andy Way. 2018. Getting gender right in neural machine translation. *Proceedings of the 2018 Conference on* Empirical Methods in Natural Language Processing, EMNLP 2018, pages 3003–3008.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. *Advances in Neural Information Processing Systems*, pages 5999–6009.
Sebastian Vincent, Loïc Barrault, and Carolina Scarton. 2022a. Controlling formality in low-resource NMT with domain adaptation and re-ranking: SLTCDT-UoS at IWSLT2022. In *Proceedings of the* 19th International Conference on Spoken Language Translation (IWSLT 2022), pages 341–350, Dublin, Ireland (in-person and online). Association for Computational Linguistics.
Sebastian Vincent, Rowanne Sumner, Alice Dowek, Charlotte Blundell, Emily Preston, Chris Bayliss, Chris Oakley, and Carolina Scarton. 2023. Personalised language modelling of screen characters using rich metadata annotations. In *arXiv:2303.16618*. Preprint.
Sebastian T. Vincent, Loïc Barrault, and Carolina Scarton. 2022b. Controlling extra-textual attributes about dialogue participants: A case study of English-toPolish neural machine translation. In Proceedings of the 23rd Annual Conference of the European Association for Machine Translation, pages 121–130, Ghent, Belgium. European Association for Machine Translation.
Elena Voita, Rico Sennrich, and Ivan Titov. 2019a.
Context-aware monolingual repair for neural machine translation. In *Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference* on Natural Language Processing (EMNLP-IJCNLP), pages 877–886, Hong Kong, China. Association for Computational Linguistics.
Elena Voita, Rico Sennrich, and Ivan Titov. 2019b.
When a good translation is wrong in context: Contextaware machine translation improves on deixis, ellipsis, and lexical cohesion. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1198–1212, Florence, Italy. Association for Computational Linguistics.
Wenhui Wang, Hangbo Bao, Shaohan Huang, Li Dong, and Furu Wei. 2021a. MiniLMv2: Multi-head selfattention relation distillation for compressing pretrained transformers. In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, pages 2140–2151, Online. Association for Computational Linguistics.
Yue Wang, Cuong Hoang, and Marcello Federico.
2021b. Towards modeling the style of translators in neural machine translation. In *Proceedings of* the 2021 Conference of the North American Chapter of the Association for Computational Linguistics:
Human Language Technologies, pages 1193–1199, Online. Association for Computational Linguistics.
Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Transformers: State-of-the-art natural language processing.
In *Proceedings of the 2020 Conference on Empirical* Methods in Natural Language Processing: System Demonstrations, pages 38–45, Online. Association for Computational Linguistics.
## A Data Preprocessing
Parsing OPENS**UBTITLES** To prepare OPENSUBTITLES (specifically the document-level part of the corpus), we follow the setup described in Voita et al. (2019b). There are timestamps and overlap values for each source-target sample in the corpus; we only take into account pairs with overlap >= 0.9 and we use two criteria to build any continuous document: (1) no omitted pairs (due to poor overlap) and (2) no distance greater than seven seconds between any two consecutive pairs.
To generate train/validation/test splits, we use generated lists of held-out IMDB IDs based on various published test sets (Müller et al., 2018; Lopes et al., 2020; Vincent et al., 2022b) to promote reproducibility. These lists can be found within the GitHub repository associated with this paper.
Embedding contexts Since a lot of metadata is repeated, and models are trained for multiple epochs, we opt for the most efficient way of embedding and storing data which is to use a memorymapped binary file with embeddings for unique contexts, and an index which maps each sample to its embedding. This saves more than 90% space w.r.t. storing a matrix of all embeddings, and trains over 3× faster than embedding batches on-the-fly.
## B Model Details
MTCUE is trained from a pre-trained machine translation model (corresponding to the BASE
model) which is the transformer NMT architecture within FAIRSEQ. We follow model specifications and training recommendations set out by FAIRSEQ in their examples for training a translation model8. We train a model for each of the eight language directions on the source-target pairs from OPENSUBTITLES. We train the model until a patience parameter of 5 is exhausted on the validation loss.
## C Observations On Training And Hyperparameters
We shortly describe here our findings from seeking the optimal architecture for MTCUE and training settings in the hope that this helps save the time of researchers expanding on our work.
- Reducing the number of context encoder layers led to inferior performance.
- Freezing the source encoder when fine-tuning MTCUE from a translation model led to inferior performance,
- Training MTCUE from scratch − significantly increased training time while having a minor effect on performance.
8https://github.com/facebookresearch/
fairseq/tree/main/examples/translation\#
iwslt14-german-to-english-transformer, accessed 1/5/23.
- Other context combination strategies (sequential and flat attention in Libovický et al., 2018) led to similar results.
- Some alternatives to QK-NORM to combat the problem of the exploding dot-product were successful but had a negative impact on performance:
- using layer normalisation after the linear layer is applied to vectorised contexts,
- using SmallEmb9 which initialises the embedding layer (in our case, the linear proj.
layer) to tiny numbers and adds layer normalisation on top.
- Zero-shot performance at the IWSLT22 task is generally consistent (at around 98.0−100.0 accuracy) though may vary depending on the selected checkpoint. We found that training MTCUE for longer (i.e. more than 20 epochs) may improve translation quality but degrade the performance on e.g. this task.
- We found that MTCUE is generally robust to some hyperparameter manipulation on the OPENSUBTITLES dataset, and recommend performing a hyperparameter search when training the model on new data. For simplicity, in this paper we use a single set of hyperparameters for all language directions, though for some pairs the results may improve by manipulating parameters such as batch size and context dropout.
## D Formality
To evaluate the performance of any tested model on the formality task we had to come up with a fair method of choosing a context to condition on, since in a zero-shot setting the model organically learns the tested attributes from various contexts rather than specific cherry-picked sentences.
To do so, we sampled some metadata from the validation set of the OPENSUBTITLES data and picked eight contexts (four for the *formal* case and four for the *informal* case) which either used formal or informal language themselves or represented a domain where such language would be used. We also added two generic prompts: *Formal conversation* and *Informal chit-chat*. The full list of prompts was as follows:
## - Formal:
1. *Formal conversation* 2. Hannah Larsen, meet Sonia Jimenez. One of my favourite nurses.
3. In case anything goes down we need all the manpower alert, not comfortably numb.
4. *Biography, Drama,* 5. A musician travels a great distance to return an instrument to his elderly teacher
- Informal:
1. *Informal chit-chat* 2. *I'm gay for Jamie.*
3. *What else can a pathetic loser do?*
4. *Drama, Family, Romance* 5. *Animation, Adventure, Comedy*
We then ran the evaluation as normal with each context separately, and selected the highest returned score for each attribute.
## E Examples Of Model Outputs (Zero-Shot)
We include examples of translations produced zeroshot by MTCUE in Table 7. We would like to draw attention particularly to the top example for the EAMT22 task ("I just didn't want you to think you had to marry me"). The phrase *to marry someone* can be translated to Polish in several ways, indicating that the addressee is to be a wife (ozeni´c si˛e z ˙
kims´), a husband (wyjs´c za kogo ´ s [za m ˛a ´ z]˙ ) or neutral (wzi ˛a´c *slub* ´ ). While the reference in this example uses a neutral version, both the baseline model and MTCUE opted for feminine/masculine variants. However, the gender of the speaker is feminine, so the phrase "*... had to marry me*" should use either the neutral version (wzi ˛a´c *slub* ´ ) or the feminine one (*ozeni´c si˛e* ˙ ). The baseline model incorrectly picks the masculine version while MTCUE is able to pick the correct one based on the context given.
MTCUE also correctly translates the gender of the interlocutor: both in the top example (*myslał* ´ vs myslała ´ ) and the bottom one (as´ vs es´, even though a synonymous expression is used in translation, agreement remains correct). Finally, the IWSLT22 example shows how MTCUE produces correct possessive adjectives for each formality.
| EAMT22 | |
|------------------------------------------------------------------------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------|
| Source | I just didn't want you to think you had to marry me. |
| Context | I am a woman. I am talking to a man |
| Reference | Bo nie chciałam, zeby ˙ s my ´ sl´ ał, ze ci˛e zmuszam do ˙ slubu. ´ "Because I didn't wantfeminine you to thinkmasculine I am forcing you into a wedding." |
| Baseline | Po prostu nie chciałem, zeby ˙ s my ´ sl´ ała, ze musisz ˙ za mnie wyjs´c´. "I just didn't wantmasculine you to thinkfeminine you had to marryfeminine me." |
| MTCUE | Nie chciałam, zeby ˙ s my ´ sl´ ał, ze musisz ˙ si˛e ze mn ˛a ozeni ˙ c´. "I didn't wantfeminine you to thinkmasculine you had to marrymasculine me." |
| Source | So then you confronted Derek. |
| Context | I am talking to a woman |
| Reference | A wi˛ec doprowadziłas´ do konfrontacji z Derekiem. "So then you ledfeminine to a confrontation with Derek." |
| Baseline | Wi˛ec wtedy skonfrontowałes´ si˛e z Derekiem. "So then you confrontedmasculine Derek." |
| MTCUE | Wi˛ec skonfrontowałas´ si˛e z Derekiem. "So then you confrontedfeminine Derek." IWSLT22 |
| Source | I got a hundred colours in your city. |
| MTCUE (formal) | Ich habe 100 Farben in Ihrer Stadt. |
| MTCUE (informal) | Ich hab 100 Farben in deiner Stadt. |
| Table 7: Examples of MTCUE's outputs (zero-shot) versus a non-contextual Transformer baseline. | |
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
✓ A1. Did you describe the limitations of your work?
Unnumbered section after section 7 (Conclusions)
✗ A2. Did you discuss any potential risks of your work?
There are no relevant risks associated with our work
✓ A3. Do the abstract and introduction summarize the paper's main claims?
Abstract + section 1 (Introduction)
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✓ **Did You Use Or Create Scientific Artifacts?**
Section 2 (proposed architecture); section 3.1 (used the OpenSubtitles corpus); section 3.2 (used two evaluation suites); section 2.2 (used sentence embedding models); section 3.4 (used software for implementation)
✓ B1. Did you cite the creators of artifacts you used?
Yes (sections 3.1, 3.2, 2.2, 3.4)
✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
For created artifacts: section 1 For used artifacts: section 3.1
✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
section 3.1. For sentence-transformers we did not explicitly discuss this but made all the necessary steps requested by the authors, such as citing the library and relevant papers.
B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
Not applicable. Left blank.
✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
Section 1
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
Table 1, Table 2, sections 3.1, 3.2 The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance.
## C ✓ **Did You Run Computational Experiments?**
Section 3 (experimental setup), section 4 (results), section 5 (ablation study)
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
Section 3.4, Table 4
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Section 3, section 3.4
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
Section 4
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
Sections 2.2, 3.1, 3.2
## D ✗ **Did You Use Human Annotators (E.G., Crowdworkers) Or Research With Human Participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
No response.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? No response.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
No response.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
No response.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
No response. |
bommasani-2023-evaluation | Evaluation for Change | https://aclanthology.org/2023.findings-acl.522 | Evaluation is the central means for assessing, understanding, and communicating about NLP models. In this position paper, we argue evaluation should be more than that: it is a force for driving change, carrying a sociological and political character beyond its technical dimensions. As a force, evaluation{'}s power arises from its adoption: under our view, evaluation succeeds when it achieves the desired change in the field. Further, by framing evaluation as a force, we consider how it competes with other forces. Under our analysis, we conjecture that the current trajectory of NLP suggests evaluation{'}s power is waning, in spite of its potential for realizing more pluralistic ambitions in the field. We conclude by discussing the legitimacy of this power, who acquires this power and how it distributes. Ultimately, we hope the research community will more aggressively harness evaluation to drive change. | # Evaluation For Change
Rishi Bommasani Center for Research on Foundation Models Stanford University [email protected]
## Abstract
Evaluation is the central means for assessing, understanding, and communicating about NLP
models. In this position paper, we argue evaluation should be more than that: it is a force for driving change, carrying a sociological and political character beyond its technical dimensions. As a force, evaluation's power arises from its *adoption*: under our view, evaluation succeeds when it achieves the desired change in the field. Further, by framing evaluation as a force, we consider how it competes with other forces. Under our analysis, we conjecture that the current trajectory of NLP suggests evaluation's power is *waning*, in spite of its potential for realizing more *pluralistic* ambitions in the field. We conclude by discussing the legitimacy of this power, who acquires this power and how it distributes. Ultimately, we hope the research community will more aggressively harness evaluation to drive change.
## 1 Introduction
Evaluation plays a defining role in NLP research; in fact, evaluation has a very rich history. While this genealogy can be traced in many ways, since this piece (roughly) coincides with the 5th anniversary of the passing of one NLP's beloved pioneers and the first recipient of the ACL Lifetime Achievement Award, we look to Aravind Joshi's legacy.
Best known for grammar formalism and discourse
(see Webber, 2018), his research journey reflects broader field-wide trends towards evaluation. In early works (e.g. Joshi, 1969; Joshi et al., 1972; Grosz et al., 1983), evaluation went entirely unmentioned. Yet, over time, Aravind's work involved more evaluation (e.g. Joshi and Schabes, 1989), implicitly building new norms for evaluation in grammar formalism and discourse (Miltsakaki et al., 2004; Prasad et al., 2008, 2014). Liberman
(2005) cites Joshi's standards for evaluation in conveying Joshi's signature belief in multidisciplinary approaches to human language.
Joshi's life and 5 decades of scholarship teaches us evaluation is important, but how should we reason about evaluation? Here, we present two perspectives that frame evaluation in considerably different ways. Under the first account, evaluation is technical in nature, functioning as a lens to study models. The motivation for this lens may depend on the specific evaluation, stakeholder, or both:
evaluation may allow us to derive scientific insight.
Or it can transparently document technology for broader audiences (e.g. practitioners, colleagues in other fields, policymakers, the public). Regardless, to determine if an evaluation is successful, under this account, the lens must yield the desired understanding about models.
In this work, we argue for a second perspective, which we believe is partially acknowledged but considerably less salient than the first perspective.
Under our second account, evaluation is political in nature, functioning as a force to drive change. In contrast to the first account, this means evaluation pushes the research community in some direction, possibly referring to a specific social or scientific objective, with the emphasis being on future model development more so than existing models. Critically, under this account, to determine if an evaluation is successful, the force must yield the desired change in the community. By separating these two accounts, our goal is neither to suggest they are at odds nor that they are meaningfully separable, but to shed conceptual clarity on the merits of powercentric analysis.
In pushing for this position of viewing evaluation as a force, we explore what this force influences, what other forces it competes with, how it accrues power, whether its power is legitimate, and who it empowers. Motivated by the growing impact of language technology and our field, the abundant discord on the status quo, and the uncertainty on what lies ahead, we believe evaluation's potential for change presents a vital path forward.
## 2 Evaluation As A Force
If evaluation is a force, what domain does it act upon? And where does its power come from?
Domain. We will restrict our scope to how evaluation influences NLP research. Specifically, evaluation concretizes desired behavior for systems, thereby communicating an objective for model design. This allows for the community to coordinate on goals for modeling research. For this goal-setting to succeed, future research should then go on to make progress on the proposed evaluation. That is, successful evaluation requires that the evaluation be prioritized, redistributing research attention such that it is allocated towards making progress on the evaluation.
Adoption constructs power. As this suggests, the adoption of an evaluation (by others) generates its power and determines its success. It is in this sense that our account for evaluation success deviates from a purely technical/intrinsic characterization. Most evaluations are concrete instantiations of a broader agenda: for these evaluations to be effective, they must shift power, namely towards addressing this agenda and materially making progress. In spite of this, we generally find that evaluations in NLP research do not even mention how adoption will arise, and if evaluation creators will take any overt actions to accelerate adoption.
Accelerating adoption. If the power of evaluations come from adoption, and evaluation creators are incentivized to accrue such power to advance their broader agenda, are there ways to accelerate adoption? We observe at least two such approaches, though they have not been considered in this way to our knowledge. As a softer means for acquiring adoption/power, evaluations may be used as shared tasks (e.g. SemEval; see Parra Escartín et al., 2017; Nissim et al., 2017) or be built as part of workshops/conferences (e.g. BIG-bench; see WELM, 2021; Srivastava et al., 2022), which leans into the relationship between coordinating research and convening researchers. More aggressively, explicit competitions with prizes or other stronger incentives can more directly drive adoption, perhaps most famously in the Netflix Prize, which remarkably accelerated and shifted research on recommender systems (see Hallinan and Striphas, 2016).
Authority as a standard. As evaluations accrue influence, they eventually become reified as high-status standards like ImageNet, WMT, and SQuAD (Dotan and Milli, 2020; Dehghani et al.,
2021). While it is difficult to directly assess the power these evaluations have (e.g. how would research have changed counterfactually in their absence; see Liu et al., 2021), strong norms emerge for modeling work to evaluate on these standards.
And, consequently, improvements on these evaluations function as stand-ins for more fundamental progress (Porter, 1995; Liao et al., 2021; Raji et al.,
2021). In fact, their authority is made clear in how serious improvements were seen as watershed moments, ushering in new paradigms. Famous examples include the performance of AlexNet
(Krizhevsky et al., 2012) on ImageNet, which initiated the deep learning revolution, and Transformers (Vaswani et al., 2017) on WMT, which, by outperforming specialized machine translation approaches by a considerable margin, marked the dawn of the current dominance of Transformers.
Related work. This work is not the first to bring questions of power, values, reflection, and change to the fore in relation to evaluation/benchmarking
(Spärck Jones and Galliers, 1995; Welty et al.,
2019; Dotan and Milli, 2020; Ethayarajh and Jurafsky, 2020; Linzen, 2020; Scheuerman et al., 2021; Kiela et al., 2021; Dehghani et al., 2021; Bowman and Dahl, 2021; Raji et al., 2021; Koch et al., 2021; Denton et al., 2021; Paullada et al., 2021; Liu et al.,
2021; Hutchinson et al., 2021; Jacobs and Wallach, 2021; Birhane et al., 2022; Liang et al., 2022b, inter alia). Prior work establishes that evaluations embed values, carry influence, encode broader power structures, and the nature of evaluation as ranking aligns with broader themes of hierarchy. They make clear how other disciplines can provide guidance on what we see in NLP, but also how our evaluation practices are distinctive (e.g. competitive tendencies in benchmarking, differences in standards for measure validity).
While we draw significant inspiration from these works, our work also significantly diverges in its objective. Rather than trying to make visible the tacit assumptions, norms, and infrastructure that animate evaluation's power, we instead set our sights on how evaluation's power can animate change. In this regard, our work more closely mirrors the aesthetic of Abebe et al. (2020), as can be seen in the similar titles.
## 3 Competing Forces
Having argued for where evaluation draws power from, how powerful is it? While difficult to state in absolute terms, we instead consider what other forces are in play and how they interact/compete.
Coexisting forces. NLP research is a fabric stitched through myriad social interactions: conversations with colleagues, talks at conferences, academic Twitter, scholarship from adjacent disciplines, and much more. Most of these interactions are poorly conceptualized as forces: while they exert influence, they are generally diffuse rather than concentrated and lack strong directionality. For this short-form analysis, we juxtapose evaluation with the force of *resources*. By resources, we refer to assets like money, compute, and engineering support, choosing to treat them as monolithic (rather than disaggregating) for brevity.
Language models. Given the central position language models occupy in modern NLP, we consider language models as a case study to relate evaluation and resources. Our thesis is resources, to a far greater extent than evaluation, dictate research on language models, which more broadly influences NLP research given the pervasive dependence on language models. Influential language models have near-exclusively been developed by resource-rich institutions. Further, we argue a resource-allocation mindset drives decisionmaking in their development. Namely, the use of scaling laws (e.g. Kaplan et al., 2020; Hoffmann et al., 2022) indicates development is framed as an efficient resource allocation problem. Evaluation does play a small role here: scaling laws relate resources (x-axis) with evaluated model performance (y-axis). But the evaluation scope is narrow: scaling laws generally center accuracy for a single task (generally upstream language model perplexity/loss), with *predictability* of this relationship being the principal concern (Ganguli et al.,
2022, cf. Wei et al. (2022)).
In contrast, evaluation currently does not exert similar influence over language model development. Namely, while influential language models are similar in that they were developed by resourcerich institutions, they strikingly differ in the benchmarks they are evaluated on. Across all datasets evaluated for across language modeling works at the time, Liang et al. (2022b) find that RTE is the unique dataset evaluated for in more than 50% of the 32 language modeling works they consider (e.g.
GPT-3, GPT-NeoX, BLOOM, PaLM, Gopher, OPT,
GLM)1, with some works sharing no evaluation datasets in common. Given this status quo, evaluations currently fail to achieve the widespread adoption required to drive change.2 Contrasting properties. Which forces orient NLP research is consequential: different forces profile differently. Resources are distributed very unevenly, so resources orienting progress implies a small subset of the community expresses outsized influence in shaping the field's trajectory. Further, by the nature of how these resource disparities came to be, these resource-rich actors tend to have specific incentives (e.g. commercial interest)
and demographics (e.g. poor diversity), potentially causing them to advance particular agendas (see Rogaway, 2015). In contrast, we believe evaluation structurally is better equipped to enable broader participation (e.g. BIG-bench) and, critically, pluralism. Different values can be simultaneously foregrounded in evaluations (e.g. HELM (Liang et al., 2022b) highlights values/desiderata such as accuracy, robustness, fairness, uncertainty, and efficiency). For example, insofar as scaling laws drive language model development, greater pluralism would be achieved if scaling laws were studied, fit, and applied for a broader array of evaluation targets than just upstream accuracy/perplexity.
## 4 Legitimacy
Since evaluation accrues power, is this power legitimate? And who does this power distribute to?
Legitimacy. Evaluations are generally built by a small number of researchers, but could orient work across the broader research community. Consequently, in arguing for the greater use of evaluation as a means for shifting power, we should question whether this implicitly recommends *value imposition*: imposing values of the few onto the many.
However, recall that evaluation's power derives not from its creation but its adoption. Consequently, for this power to emerge requires the consensual action of the early adopters, who choose to use the evaluation. To an extent, this (voluntary) choice suggests that the power of evaluation is generally and, at least, initially legitimate.
1This is likely a direct side effect of RTE being the unique dataset in both GLUE and SuperGLUE (Wang et al., 2019a,b).
2Recent high-profile evaluation efforts (e.g. BIG-bench, the HuggingFace Evaluate library, HELM) may change this.
If the power of evaluation is legitimate, then what does this imply when evaluations are shown to have issues with respect to their validity, reliability, relevance, or appropriateness (Gururangan et al.,
2018; Kaushik and Lipton, 2018; Ethayarajh, 2020; Blodgett et al., 2021; Aribandi et al., 2021; Birhane and Prabhu, 2021, *inter alia*)? Here, we recognize that while the initial adoption of an evaluation is in most cases clearly legitimate, the subsequent sustained adoption can be more complicated.
In particular, we emphasize that evaluations tend to exhibit *inertia*: once an evaluation is widely adopted, it is hard for the evaluation to lose this status or for other evaluations to eclipse it (e.g. due to reviewing norms; Dehghani et al., 2021), even when there are strong reasons to demote or deprecate the evaluation (Peng et al., 2021). Most directly, we point to the strong norms of comparison in NLP, whereby model developers are expected to compare their models to prior models in headto-head comparisons. While generally a useful norm, this does promote a certain conservatism. Notably, when prior models (i.e. those that are to be compared to) are not public (Bommasani et al.,
2023) or laborious to re-evaluate on new datasets, developers of new models can most easily be compare to old models on the evaluations used in prior work. In this regard, paradigms where evaluations are continuously updated and refreshed (e.g. the evaluation rounds in ANLI (Nie et al., 2020) and versions in HELM (Liang et al., 2022b); inherently dynamic evaluations like DynaBench (Kiela et al., 2021)) more directly ensure the sustained power of specific evaluations is legitimate.
Distribution of power. Even if an evaluation's power is acquired legitimately, we should further question how the power distributes over different members of the community, especially as other forces (especially resources) are inequitably distributed. Koch et al. (2021) show the distribution of evaluation developers is also uneven, aligning strongly with institutional privilege (e.g. elite academic institutions like Stanford and Princeton, massive commercial organizations like Microsoft and Google). In part, this is likely a byproduct of the fact that evaluations themselves can be quite resource-intensive, especially when this scale is a virtue: ImageNet (Deng et al., 2009), especially for its time, was exceedingly costly in both money and time; large-scale model evaluation on HELM costs almost 40k USD in addition to 20k A100 GPU
hours (Liang et al., 2022b).
With that said, we have significant optimism that evaluation can realize more pluralistic visions.
Specifically, (i) the rise of foundation models in NLP has shifted the field towards few-shot evaluations (Brown et al., 2020; Bragg et al., 2021),
which means evaluations need not include largescale training subsets which constituted much of the cost for evaluations historically (e.g. 80%, or 80k+, of the examples in SQuAD (Rajpurkar et al.,
2016) were allocated for training). This suggests that their development should be more broadly accessible (Bommasani et al., 2021, §4.4), though the dynamics of their adoption are less clear. Further, (ii) the practice of community-driven evaluation design has been successfully implemented in several instances: the EleutherAI LM Harness
(Gao et al., 2021), GEM (Gehrmann et al., 2021),
GEMv2 (Gehrmann et al., 2022), BIG-Bench (Srivastava et al., 2022), the Hugging Face Evaluate library (von Werra et al., 2022), with examples like Universal Dependencies (UD; Nivre et al., 2016; de Marneffe et al., 2021) even pre-dating them for many years. In most cases, these efforts did not push a very clear directional change/agenda in research priorities (UD as a partial exception), but we believe future efforts could more explicitly exert power while learning from these prior efforts. Finally, (iii) the community has grown to more properly recognize and value evaluation-type contributions (e.g. the NeurIPS datasets and benchmarks track, cf. Rogers (2020)). That is, while we argue evaluation's power is currently waning relative to resources, suggesting a trend towards less pluralism, we simultaneously believe the conditions are ripe for renewed commitment to evaluation to reverse this trajectory.
## 5 Conclusion
Evaluation wields power: we believe the community is largely aware of this, yet we foreground this power to understand how evaluation drives change.
This perspective leads us to three conclusions: (i)
adoption imbues evaluation with its power, (ii) evaluation's power relative to other competing social forces appears to be diminishing, and yet (iii) evaluation has attractive qualities, especially under current conditions, as a force for change relative to other forces with growing power. Overall, we hope the community reflects on the mantra "evaluation for change".
## Limitations
This work puts forth a position: by the nature of a position paper, the work is deliberately intended to be evocative and opinionated, in some places not having unequivocal evidence for certain claims.
This presents a clear limitation: the analysis presented may diverge from the realities of NLP at present or in the future, namely if the assumptions/conditions presented themselves prove to be untrue in practice. Nonetheless, we believe centering power and change, and understanding evaluation as a political and sociological phenomenon, is likely to be useful under all conditions.
Further, in understanding the qualities of evaluation relative to other social forces, we directly suggest that evaluation is more readily operationalized in more pluralistic ways than other key forces (primarily resources). While initial efforts indicate the potential for such holistic approaches that reflect many different desiderata (Liang et al., 2022b) as well as participatory approaches that permit contribution from different entities (e.g. Srivastava et al.,
2022), it is still unclear how much adoption such approaches will get, and therefore how much power they will acquire. That is, the extent to which evaluation can realize this pluralistic vision still largely remains an unresolved aspiration than a readily realizable certainty. And, conversely, we do note that while current practices potentially put pluralism and resources at odds, they may be mutually compatible in other regimes (e.g. decentralized training through the pooling of shared/volunteered compute
(Yuan et al., 2022), open-source software development (Wolf et al., 2020; Gao et al., 2021; von Werra et al., 2022)).
Finally, we do not discuss other forces that we believe have not exhibited strong influence on NLP
research thus far, in favor of allocating focus to evaluation and resources, which have had clear influence. To enumerate some of these other (potential) forces, we specifically note (i) research norms, (ii) policy and regulation, and (iii) auditing/advocacy. For (i), we note that while the NLP
research community has many established norms
(e.g. reproducibility checklists, peer review guidelines, conference organization structure, policies on respectful conduct), most of these do not directly/significantly influence what research topics different researchers work on. We do note that is possible in the future that certain norms (e.g. the access to training data or model checkpoints; Liang et al., 2022a) would influence what research is conducted (e.g. we may have not seen as much work on the learning dynamics of language models and/or memorization of training data due to the relative inaccessibility of intermediary checkpoints and training data until recently). For (ii), we note that policy and regulatory efforts have had little to no salient impact on the deployment of most language technologies, let alone NLP research, to our knowledge. With that said, much as efforts like GDPR
and privacy legislation has impacted scientific research on privacy (e.g. work that operationalizes the right to be forgotten as in Ginart et al., 2019),
similar trends could occur in NLP research (e.g. in response to the EU AI Act).3 Akin to (ii), for (iii),
we also have seen fairly little impact from auditing/advocacy work on NLP research to our knowledge. But, much as work on auditing/advocacy around face recognition (Buolamwini and Gebru, 2018; Raji and Buolamwini, 2019; Raji et al., 2020, inter alia) influenced research in the computer vision community, we could see similar trends in NLP (e.g. in response to auditing/advocacy intervention around language models).
## Ethics Statement
We do not find serious risks or ethical concerns with this work. We do note this work advances a specific position, which we clearly identify. It should not be assumed there is consensus in the community (or beyond) on any account for evaluation, let alone the account on power that we espouse. In this regard, we actively solicit response and interrogation of the positions presented in this work, especially given myriad relevant analyses of evaluation/measurement/benchmarking exist in other parts of AI, computer science, linguistics, and other disciplines.
## Acknowledgements
I would like to thank Alex Hanna, Ben Recht, Chris Manning, Chris Potts, Claire Cardie, Dan Jurafsky, 3As described in Bommasani et al. (2023), we note the NIST AI Risk Management Framework (https://www.ni st.gov/itl/ai-risk-management-framework) and the mandate for NIST to develop AI testbeds under the CHIPS
and Science Act (https://www.congress.gov/bill/117t h-congress/house-bill/4346/text) could change this status quo in the United States. Similarly, the draft EU AI Act outlines requirements for benchmarking foundation models on public benchmarks: https://www.europarl.europa.eu
/news/en/press-room/20230505IPR84904/ai-act-a-s tep-closer-to-the-first-rules-on-artificial-int elligence.
Deb Raji, Emily Denton, Henrik Kugelberg, Jacob Andreas, Jacob Steinhardt, John Hewitt, Judy Shen, Kawin Ethayarajh, Nelson Liu, Rediet Abebe, Rohan Taori, Sam Bowman, Stella Biderman, Tal Linzen, Yann Dubois, and Yoav Goldberg for being specific inspirations, whose writings and thoughts helped develop my current position on evaluation, with special thanks to Percy Liang. Thanks to Jason Wei for feedback on the initial version of this work.
I would like to thank the CRFM community; the experience of designing and building HELM (Liang et al., 2022b) in particular helped sharpen my belief in this philosophy towards evaluation. I am supported by the NSF Graduate Research Fellowship Program under grant number DGE-1655618.
## References
Rediet Abebe, Solon Barocas, Jon Kleinberg, Karen Levy, Manish Raghavan, and David G. Robinson. 2020. Roles for computing in social change. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, FAT* '20, page 252–260, New York, NY, USA. Association for Computing Machinery.
Vamsi Aribandi, Yi Tay, and Donald Metzler. 2021.
How reliable are model diagnostics? In *Findings of* the Association for Computational Linguistics: ACLIJCNLP 2021, pages 1778–1785, Online. Association for Computational Linguistics.
Abeba Birhane, Pratyusha Kalluri, Dallas Card, William Agnew, Ravit Dotan, and Michelle Bao. 2022. The values encoded in machine learning research. In 2022 ACM Conference on Fairness, Accountability, and Transparency, FAccT '22, page 173–184, New York, NY, USA. Association for Computing Machinery.
Abeba Birhane and Vinay Uday Prabhu. 2021. Large image datasets: A pyrrhic win for computer vision?
In *2021 IEEE Winter Conference on Applications of* Computer Vision (WACV), pages 1536–1546.
Su Lin Blodgett, Gilsinia Lopez, Alexandra Olteanu, Robert Sim, and Hanna Wallach. 2021. Stereotyping Norwegian salmon: An inventory of pitfalls in fairness benchmark datasets. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1:
Long Papers), pages 1004–1015, Online. Association for Computational Linguistics.
Rishi Bommasani, Drew A. Hudson, Ehsan Adeli, Russ Altman, Simran Arora, Sydney von Arx, Michael S.
Bernstein, Jeannette Bohg, Antoine Bosselut, Emma Brunskill, Erik Brynjolfsson, S. Buch, Dallas Card, Rodrigo Castellon, Niladri S. Chatterji, Annie S.
Chen, Kathleen A. Creel, Jared Davis, Dora Demszky, Chris Donahue, Moussa Doumbouya, Esin Durmus, Stefano Ermon, John Etchemendy, Kawin Ethayarajh, Li Fei-Fei, Chelsea Finn, Trevor Gale, Lauren E. Gillespie, Karan Goel, Noah D. Goodman, Shelby Grossman, Neel Guha, Tatsunori Hashimoto, Peter Henderson, John Hewitt, Daniel E. Ho, Jenny Hong, Kyle Hsu, Jing Huang, Thomas F. Icard, Saahil Jain, Dan Jurafsky, Pratyusha Kalluri, Siddharth Karamcheti, Geoff Keeling, Fereshte Khani, O. Khattab, Pang Wei Koh, Mark S. Krass, Ranjay Krishna, Rohith Kuditipudi, Ananya Kumar, Faisal Ladhak, Mina Lee, Tony Lee, Jure Leskovec, Isabelle Levent, Xiang Lisa Li, Xuechen Li, Tengyu Ma, Ali Malik, Christopher D. Manning, Suvir P. Mirchandani, Eric Mitchell, Zanele Munyikwa, Suraj Nair, Avanika Narayan, Deepak Narayanan, Benjamin Newman, Allen Nie, Juan Carlos Niebles, Hamed Nilforoshan, Julian F. Nyarko, Giray Ogut, Laurel Orr, Isabel Papadimitriou, Joon Sung Park, Chris Piech, Eva Portelance, Christopher Potts, Aditi Raghunathan, Robert Reich, Hongyu Ren, Frieda Rong, Yusuf H. Roohani, Camilo Ruiz, Jack Ryan, Christopher Rè, Dorsa Sadigh, Shiori Sagawa, Keshav Santhanam, Andy Shih, Krishna Parasuram Srinivasan, Alex Tamkin, Rohan Taori, Armin W. Thomas, Florian Tramèr, Rose E. Wang, William Wang, Bohan Wu, Jiajun Wu, Yuhuai Wu, Sang Michael Xie, Michihiro Yasunaga, Jiaxuan You, Matei A. Zaharia, Michael Zhang, Tianyi Zhang, Xikun Zhang, Yuhui Zhang, Lucia Zheng, Kaitlyn Zhou, and Percy Liang. 2021.
On the opportunities and risks of foundation models.
ArXiv.
Rishi Bommasani, Daniel Zhang, Tony Lee, and Percy Liang. 2023. Improving transparency in ai language models: A holistic evaluation. Foundation Model Issue Brief Series.
Samuel R. Bowman and George Dahl. 2021. What will it take to fix benchmarking in natural language understanding? In *Proceedings of the 2021 Conference of* the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4843–4855, Online. Association for Computational Linguistics.
Jonathan Bragg, Arman Cohan, Kyle Lo, and Iz Beltagy. 2021. FLEX: Unifying evaluation for few-shot NLP. In Advances in Neural Information Processing Systems.
Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel Ziegler, Jeffrey Wu, Clemens Winter, Chris Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020.
Language models are few-shot learners. In *Advances in Neural Information Processing Systems*,
volume 33, pages 1877–1901. Curran Associates, Inc.
Joy Buolamwini and Timnit Gebru. 2018. Gender shades: Intersectional accuracy disparities in commercial gender classification. In *Proceedings of* the 1st Conference on Fairness, Accountability and Transparency, volume 81 of *Proceedings of Machine* Learning Research, pages 77–91. PMLR.
Marie-Catherine de Marneffe, Christopher D. Manning, Joakim Nivre, and Daniel Zeman. 2021. Universal Dependencies. *Computational Linguistics*,
47(2):255–308.
Mostafa Dehghani, Yi Tay, Alexey A. Gritsenko, Zhe Zhao, Neil Houlsby, Fernando Diaz, Donald Metzler, and Oriol Vinyals. 2021. The benchmark lottery.
ArXiv, abs/2107.07002.
Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. 2009. Imagenet: A large-scale hierarchical image database. In 2009 IEEE Conference on Computer Vision and Pattern Recognition, pages 248–255.
Emily Denton, Alex Hanna, Razvan Amironesei, Andrew Smart, and Hilary Nicole. 2021. On the genealogy of machine learning datasets: A critical history of imagenet. *Big Data & Society*,
8(2):20539517211035955.
Ravit Dotan and Smitha Milli. 2020. Value-laden disciplinary shifts in machine learning. Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency.
Kawin Ethayarajh. 2020. Is your classifier actually biased? measuring fairness under uncertainty with bernstein bounds. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 2914–2919, Online. Association for Computational Linguistics.
Kawin Ethayarajh and Dan Jurafsky. 2020. Utility is in the eye of the user: A critique of NLP leaderboards.
In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP),
pages 4846–4853, Online. Association for Computational Linguistics.
Deep Ganguli, Danny Hernandez, Liane Lovitt, Amanda Askell, Yuntao Bai, Anna Chen, Tom Conerly, Nova Dassarma, Dawn Drain, Nelson Elhage, Sheer El Showk, Stanislav Fort, Zac Hatfield-Dodds, Tom Henighan, Scott Johnston, Andy Jones, Nicholas Joseph, Jackson Kernian, Shauna Kravec, Ben Mann, Neel Nanda, Kamal Ndousse, Catherine Olsson, Daniela Amodei, Tom Brown, Jared Kaplan, Sam McCandlish, Christopher Olah, Dario Amodei, and Jack Clark. 2022. Predictability and surprise in large generative models. In *2022 ACM Conference on Fairness, Accountability, and Transparency*, FAccT '22, page 1747–1764, New York, NY, USA. Association for Computing Machinery.
Leo Gao, Jonathan Tow, Stella Biderman, Sid Black, Anthony DiPofi, Charles Foster, Laurence Golding, Jeffrey Hsu, Kyle McDonell, Niklas Muennighoff, Jason Phang, Laria Reynolds, Eric Tang, Anish Thite, Ben Wang, Kevin Wang, and Andy Zou. 2021. A framework for few-shot language model evaluation.
Version v0. 0.1. Sept.
Sebastian Gehrmann, Tosin Adewumi, Karmanya Aggarwal, Pawan Sasanka Ammanamanchi, Anuoluwapo Aremu, Antoine Bosselut, Khyathi Raghavi Chandu, Miruna-Adriana Clinciu, Dipanjan Das, Kaustubh Dhole, Wanyu Du, Esin Durmus, Ondˇrej Dušek, Chris Chinenye Emezue, Varun Gangal, Cristina Garbacea, Tatsunori Hashimoto, Yufang Hou, Yacine Jernite, Harsh Jhamtani, Yangfeng Ji, Shailza Jolly, Mihir Kale, Dhruv Kumar, Faisal Ladhak, Aman Madaan, Mounica Maddela, Khyati Mahajan, Saad Mahamood, Bodhisattwa Prasad Majumder, Pedro Henrique Martins, Angelina McMillan-Major, Simon Mille, Emiel van Miltenburg, Moin Nadeem, Shashi Narayan, Vitaly Nikolaev, Andre Niyongabo Rubungo, Salomey Osei, Ankur Parikh, Laura Perez-Beltrachini, Niranjan Ramesh Rao, Vikas Raunak, Juan Diego Rodriguez, Sashank Santhanam, João Sedoc, Thibault Sellam, Samira Shaikh, Anastasia Shimorina, Marco Antonio Sobrevilla Cabezudo, Hendrik Strobelt, Nishant Subramani, Wei Xu, Diyi Yang, Akhila Yerukola, and Jiawei Zhou. 2021. The GEM benchmark: Natural language generation, its evaluation and metrics. In *Proceedings of the* 1st Workshop on Natural Language Generation, Evaluation, and Metrics (GEM 2021), pages 96–120, Online. Association for Computational Linguistics.
Sebastian Gehrmann, Abhik Bhattacharjee, Abinaya Mahendiran, Alex Wang, Alexandros Papangelis, Aman Madaan, Angelina McMillan-Major, Anna V.
Shvets, Ashish Upadhyay, Bingsheng Yao, Bryan Wilie, Chandra Bhagavatula, Chaobin You, Craig Thomson, Cristina Garbacea, Dakuo Wang, Daniel Deutsch, Deyi Xiong, Di Jin, Dimitra Gkatzia, Dragomir Radev, Elizabeth Clark, Esin Durmus, Faisal Ladhak, Filip Ginter, Genta Indra Winata, Hendrik Strobelt, Hiroaki Hayashi, Jekaterina Novikova, Jenna Kanerva, Jenny Chim, Jiawei Zhou, Jordan Clive, Joshua Maynez, João Sedoc, Juraj Juraska, Kaustubh D. Dhole, Khyathi Raghavi Chandu, Leonardo F. R. Ribeiro, Lewis Tunstall, Li Zhang, Mahima Pushkarna, Mathias Creutz, Michael White, Mihir Kale, Moussa Kamal Eddine, Nico Daheim, Nishant Subramani, Ondrej Dusek, Paul Pu Liang, Pawan Sasanka Ammanamanchi, Qinqin Zhu, Ratish Puduppully, Reno Kriz, Rifat Shahriyar, Ronald Cardenas, Saad Mahamood, Salomey Osei, Samuel Cahyawijaya, Sanja vStajner, Sébastien Montella, Shailza, Shailza Jolly, Simon Mille, Tahmid Hasan, Tianhao Shen, Tosin P. Adewumi, Vikas Raunak, Vipul Raheja, Vitaly Nikolaev, Vivian Tsai, Yacine Jernite, Yi Xu, Yisi Sang, Yixin Liu, and Yufang Hou.
2022. Gemv2: Multilingual nlg benchmarking in a single line of code. *ArXiv*, abs/2206.11249.
Antonio Ginart, Melody Guan, Gregory Valiant, and James Y Zou. 2019. Making ai forget you: Data deletion in machine learning. In Advances in Neural Information Processing Systems, volume 32. Curran Associates, Inc.
Barbara J. Grosz, Aravind K. Joshi, and Scott Weinstein.
1983. Providing a Unified Account of Definite Noun Phrases in Discourse. In 21st Annual Meeting of the Association for Computational Linguistics, pages 44–
50, Cambridge, Massachusetts, USA. Association for Computational Linguistics.
Suchin Gururangan, Swabha Swayamdipta, Omer Levy, Roy Schwartz, Samuel Bowman, and Noah A. Smith.
2018. Annotation artifacts in natural language inference data. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 107–112, New Orleans, Louisiana. Association for Computational Linguistics.
Blake Hallinan and Ted Striphas. 2016. Recommended for you: The netflix prize and the production of algorithmic culture. *New Media & Society*, 18(1):117–
137.
Jordan Hoffmann, Sebastian Borgeaud, Arthur Mensch, Elena Buchatskaya, Trevor Cai, Eliza Rutherford, Diego de las Casas, Lisa Anne Hendricks, Johannes Welbl, Aidan Clark, Tom Hennigan, Eric Noland, Katherine Millican, George van den Driessche, Bogdan Damoc, Aurelia Guy, Simon Osindero, Karen Simonyan, Erich Elsen, Oriol Vinyals, Jack William Rae, and Laurent Sifre. 2022. An empirical analysis of compute-optimal large language model training.
In *Advances in Neural Information Processing Systems*.
Ben Hutchinson, Andrew Smart, Alex Hanna, Emily Denton, Christina Greer, Oddur Kjartansson, Parker Barnes, and Margaret Mitchell. 2021. Towards accountability for machine learning datasets: Practices from software engineering and infrastructure. In *Proceedings of the 2021 ACM Conference on Fairness,*
Accountability, and Transparency, FAccT '21, page 560–575, New York, NY, USA. Association for Computing Machinery.
Abigail Z. Jacobs and Hanna Wallach. 2021. Measurement and fairness. In *Proceedings of the 2021 ACM*
Conference on Fairness, Accountability, and Transparency, FAccT '21, page 375–385, New York, NY,
USA. Association for Computing Machinery.
Aravind K. Joshi. 1969. Properties of Formal Grammars with Mixed Type of Rules and their Linguistic Relevance. In *International Conference on Computational Linguistics COLING 1969: Preprint No. 47*,
Sånga Säby, Sweden.
Aravind K. Joshi, S. Rao Kosaraju, and H.M. Yamada. 1972. String adjunct grammars: I. local and distributed adjunction. *Information and Control*,
21(2):93–116.
Aravind K. Joshi and Yves Schabes. 1989. An evaluation of lexicalization in parsing. In *Speech and* Natural Language: Proceedings of a Workshop Held at Cape Cod, Massachusetts, October 15-18, 1989.
Jared Kaplan, Sam McCandlish, T. J. Henighan, Tom B.
Brown, Benjamin Chess, Rewon Child, Scott Gray, Alec Radford, Jeff Wu, and Dario Amodei. 2020.
Scaling laws for neural language models. *ArXiv*,
abs/2001.08361.
Divyansh Kaushik and Zachary C. Lipton. 2018. How much reading does reading comprehension require? a critical investigation of popular benchmarks. In *Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing*, pages 5010–
5015, Brussels, Belgium. Association for Computational Linguistics.
Douwe Kiela, Max Bartolo, Yixin Nie, Divyansh Kaushik, Atticus Geiger, Zhengxuan Wu, Bertie Vidgen, Grusha Prasad, Amanpreet Singh, Pratik Ringshia, Zhiyi Ma, Tristan Thrush, Sebastian Riedel, Zeerak Waseem, Pontus Stenetorp, Robin Jia, Mohit Bansal, Christopher Potts, and Adina Williams. 2021.
Dynabench: Rethinking benchmarking in NLP. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4110–4124, Online. Association for Computational Linguistics.
Bernard Koch, Emily Denton, Alex Hanna, and Jacob Gates Foster. 2021. Reduced, reused and recycled: The life of a dataset in machine learning research. In Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2).
Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. 2012. Imagenet classification with deep convolutional neural networks. In *Advances in Neural* Information Processing Systems, volume 25. Curran Associates, Inc.
Percy Liang, Rishi Bommasani, Kathleen A. Creel, and Rob Reich. 2022a. The time is now to develop community norms for the release of foundation models.
Percy Liang, Rishi Bommasani, Tony Lee, Dimitris Tsipras, Dilara Soylu, Michihiro Yasunaga, Yian Zhang, Deepak Narayanan, Yuhuai Wu, Ananya Kumar, Benjamin Newman, Binhang Yuan, Bobby Yan, Ce Zhang, Christian Cosgrove, Christopher D. Manning, Christopher R'e, Diana Acosta-Navas, Drew A.
Hudson, E. Zelikman, Esin Durmus, Faisal Ladhak, Frieda Rong, Hongyu Ren, Huaxiu Yao, Jue Wang, Keshav Santhanam, Laurel J. Orr, Lucia Zheng, Mert Yuksekgonul, Mirac Suzgun, Nathan S.
Kim, Neel Guha, Niladri S. Chatterji, O. Khattab, Peter Henderson, Qian Huang, Ryan Chi, Sang Michael Xie, Shibani Santurkar, Surya Ganguli, Tatsunori Hashimoto, Thomas F. Icard, Tianyi Zhang, Vishrav Chaudhary, William Wang, Xuechen Li, Yifan Mai, Yuhui Zhang, and Yuta Koreeda.
2022b. Holistic evaluation of language models.
ArXiv, abs/2211.09110.
Thomas Liao, Rohan Taori, Inioluwa Deborah Raji, and Ludwig Schmidt. 2021. Are we learning yet? a meta review of evaluation failures across machine learning. In Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track
(Round 2).
Mark Liberman. 2005. Franklin Medal to Aravind Joshi.
Language Log.
Tal Linzen. 2020. How can we accelerate progress towards human-like linguistic generalization? In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5210–
5217, Online. Association for Computational Linguistics.
Nelson F. Liu, Tony Lee, Robin Jia, and Percy Liang. 2021. Can small and synthetic benchmarks drive modeling innovation? a retrospective study of question answering modeling approaches.
ArXiv:2102.01065.
Eleni Miltsakaki, Rashmi Prasad, Aravind Joshi, and Bonnie Webber. 2004. The Penn Discourse Treebank. In *Proceedings of the Fourth International* Conference on Language Resources and Evaluation
(LREC'04), Lisbon, Portugal. European Language Resources Association (ELRA).
Yixin Nie, Adina Williams, Emily Dinan, Mohit Bansal, Jason Weston, and Douwe Kiela. 2020. Adversarial NLI: A new benchmark for natural language understanding. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*,
pages 4885–4901, Online. Association for Computational Linguistics.
Malvina Nissim, Lasha Abzianidze, Kilian Evang, Rob van der Goot, Hessel Haagsma, Barbara Plank, and Martijn Wieling. 2017. Sharing is caring: The future of shared tasks. *Computational Linguistics*,
43(4):897–904.
Joakim Nivre, Marie-Catherine de Marneffe, Filip Ginter, Yoav Goldberg, Jan Hajic, Christopher D. Man- ˇ
ning, Ryan McDonald, Slav Petrov, Sampo Pyysalo, Natalia Silveira, Reut Tsarfaty, and Daniel Zeman.
2016. Universal Dependencies v1: A multilingual treebank collection. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16), pages 1659–1666, Portorož, Slovenia. European Language Resources Association
(ELRA).
Carla Parra Escartín, Wessel Reijers, Teresa Lynn, Joss Moorkens, Andy Way, and Chao-Hong Liu. 2017.
Ethical considerations in NLP shared tasks. In *Proceedings of the First ACL Workshop on Ethics in Natural Language Processing*, pages 66–73, Valencia, Spain. Association for Computational Linguistics.
Amandalynne Paullada, Inioluwa Deborah Raji, Emily M. Bender, Emily Denton, and Alex Hanna.
2021. Data and its (dis)contents: A survey of dataset development and use in machine learning research.
Patterns, 2(11):100336.
Kenneth L Peng, Arunesh Mathur, and Arvind Narayanan. 2021. Mitigating dataset harms requires stewardship: Lessons from 1000 papers. In *Thirtyfifth Conference on Neural Information Processing* Systems Datasets and Benchmarks Track (Round 2).
Theodore M. Porter. 1995. *Trust in Numbers*. Princeton University Press, Princeton.
Rashmi Prasad, Nikhil Dinesh, Alan Lee, Eleni Miltsakaki, Livio Robaldo, Aravind Joshi, and Bonnie Webber. 2008. The Penn Discourse TreeBank 2.0.
In Proceedings of the Sixth International Conference on Language Resources and Evaluation (LREC'08),
Marrakech, Morocco. European Language Resources Association (ELRA).
Rashmi Prasad, Bonnie Webber, and Aravind Joshi.
2014. Reflections on the Penn Discourse TreeBank, comparable corpora, and complementary annotation.
Computational Linguistics, 40(4):921–950.
Inioluwa Deborah Raji and Joy Buolamwini. 2019. Actionable auditing: Investigating the impact of publicly naming biased performance results of commercial ai products. In Proceedings of the 2019 AAAI/ACM
Conference on AI, Ethics, and Society, AIES '19, page 429–435, New York, NY, USA. Association for Computing Machinery.
Inioluwa Deborah Raji, Emily Denton, Emily M. Bender, Alex Hanna, and Amandalynne Paullada. 2021.
AI and the everything in the whole wide world benchmark. In *Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks* Track (Round 2).
Inioluwa Deborah Raji, Timnit Gebru, Margaret Mitchell, Joy Buolamwini, Joonseok Lee, and Emily Denton. 2020. Saving face: Investigating the ethical concerns of facial recognition auditing. In Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, AIES '20, page 145–151, New York, NY,
USA. Association for Computing Machinery.
Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100,000+ questions for machine comprehension of text. In *Proceedings of* the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2383–2392, Austin, Texas. Association for Computational Linguistics.
Phillip Rogaway. 2015. The moral character of cryptographic work. Cryptology ePrint Archive, Paper 2015/1162. https://eprint.iacr.org/2015/1 162.
Anna Rogers. 2020. Peer review in nlp: resource papers.
Morgan Klaus Scheuerman, Alex Hanna, and Emily Denton. 2021. Do datasets have politics? disciplinary values in computer vision dataset development. *Proc.*
ACM Hum.-Comput. Interact., 5(CSCW2).
Karen Spärck Jones and Julia R. Galliers. 1995. *Evaluating Natural Language Processing Systems: An* Analysis and Review. Number 1083 in Lecture Notes in Computer Science. Springer Verlag, Berlin.
Aarohi Srivastava, Abhinav Rastogi, Abhishek B
Rao, Abu Awal Md Shoeb, Abubakar Abid, Adam Fisch, Adam R. Brown, Adam Santoro, Aditya Gupta, Adrià Garriga-Alonso, Agnieszka Kluska, Aitor Lewkowycz, Akshat Agarwal, Alethea Power, Alex Ray, Alex Warstadt, Alexander W. Kocurek, Ali Safaya, Ali Tazarv, Alice Xiang, Alicia Parrish, Allen Nie, Aman Hussain, Amanda Askell, Amanda Dsouza, Ameet Annasaheb Rahane, Anantharaman S. Iyer, Anders Andreassen, Andrea Santilli, Andreas Stuhlmuller, Andrew M. Dai, Andrew D. La, Andrew Kyle Lampinen, Andy Zou, Angela Jiang, Angelica Chen, Anh Vuong, Animesh Gupta, Anna Gottardi, Antonio Norelli, Anu Venkatesh, Arash Gholamidavoodi, Arfa Tabassum, Arul Menezes, Arun Kirubarajan, Asher Mullokandov, Ashish Sabharwal, Austin Herrick, Avia Efrat, Aykut Erdem, Ayla Karakacs, Bridget R.
Roberts, Bao Sheng Loe, Barret Zoph, Bartlomiej Bojanowski, Batuhan Ozyurt, Behnam Hedayatnia, Behnam Neyshabur, Benjamin Inden, Benno Stein, Berk Ekmekci, Bill Yuchen Lin, Blake Stephen Howald, Cameron Diao, Cameron Dour, Catherine Stinson, Cedrick Argueta, C'esar Ferri Ram'irez, Chandan Singh, Charles Rathkopf, Chenlin Meng, Chitta Baral, Chiyu Wu, Chris Callison-Burch, Chris Waites, Christian Voigt, Christopher D. Manning, Christopher Potts, Cindy Tatiana Ramirez, Clara Rivera, Clemencia Siro, Colin Raffel, Courtney Ashcraft, Cristina Garbacea, Damien Sileo, Daniel H Garrette, Dan Hendrycks, Dan Kilman, Dan Roth, Daniel Freeman, Daniel Khashabi, Daniel Levy, Daniel Gonz'alez, Danny Hernandez, Danqi Chen, Daphne Ippolito, Dar Gilboa, David Dohan, D. Drakard, David Jurgens, Debajyoti Datta, Deep Ganguli, Denis Emelin, Denis Kleyko, Deniz Yuret, Derek Chen, Derek Tam, Dieuwke Hupkes, Diganta Misra, Dilyar Buzan, Dimitri Coelho Mollo, Diyi Yang, Dong-Ho Lee, Ekaterina Shutova, Ekin Dogus Cubuk, Elad Segal, Eleanor Hagerman, Elizabeth Barnes, Elizabeth P. Donoway, Ellie Pavlick, Emanuele Rodolà, Emma FC Lam, Eric Chu, Eric Tang, Erkut Erdem, Ernie Chang, Ethan A. Chi, Ethan Dyer, Ethan Jerzak, Ethan Kim, Eunice Engefu Manyasi, Evgenii Zheltonozhskii, Fan Xia, Fatemeh Siar, Fernando Mart'inez-Plumed, Francesca Happ'e, François Chollet, Frieda Rong, Gaurav Mishra, Genta Indra Winata, Gerard de Melo, Germán Kruszewski, Giambattista Parascandolo, Giorgio Mariani, Gloria Wang, Gonzalo JaimovitchL'opez, Gregor Betz, Guy Gur-Ari, Hana Galijasevic, Han Sol Kim, Hannah Rashkin, Hanna Hajishirzi, Harsh Mehta, Hayden Bogar, Henry Shevlin, Hinrich Schütze, Hiromu Yakura, Hongming Zhang, Hubert Wong, Ian Aik-Soon Ng, Isaac Noble, Jaap Jumelet, Jack Geissinger, John Kernion, Jacob Hilton, Jaehoon Lee, Jaime Fernández Fisac, J. Brooker Simon, James Koppel, James Zheng, James Zou, Jan Koco'n, Jana Thompson, Jared Kaplan, Jarema Radom, Jascha Narain Sohl-Dickstein, Jason Phang, Jason Wei, Jason Yosinski, Jekaterina Novikova, Jelle Bosscher, Jenni Marsh, Jeremy Kim, Jeroen Taal, Jesse Engel, Jesujoba Oluwadara Alabi, Jiacheng Xu, Jiaming Song, Jillian Tang, Jane W
Waweru, John Burden, John Miller, John U. Balis, Jonathan Berant, Jorg Frohberg, Jos Rozen, José Hernández-Orallo, Joseph Boudeman, Joseph Jones, Joshua B. Tenenbaum, Joshua S. Rule, Joyce Chua, Kamil Kanclerz, Karen Livescu, Karl Krauth, Karthik Gopalakrishnan, Katerina Ignatyeva, Katja Markert, Kaustubh D. Dhole, Kevin Gimpel, Kevin Ochieng' Omondi, Kory Wallace Mathewson, Kristen Chiafullo, Ksenia Shkaruta, Kumar Shridhar, Kyle McDonell, Kyle Richardson, Laria Reynolds, Leo Gao, Li Zhang, Liam Dugan, Lianhui Qin, Lidia ContrerasOchando, Louis-Philippe Morency, Luca Moschella, Luca Lam, Lucy Noble, Ludwig Schmidt, Luheng He, Luis Oliveros Col'on, Luke Metz, Lutfi Kerem cSenel, Maarten Bosma, Maarten Sap, Maartje ter Hoeve, Madotto Andrea, Maheen Saleem Farooqi, Manaal Faruqui, Mantas Mazeika, Marco Baturan, Marco Marelli, Marco Maru, M Quintana, Marie Tolkiehn, Mario Giulianelli, Martha Lewis, Martin Potthast, Matthew Leavitt, Matthias Hagen, M'aty'as Schubert, Medina Baitemirova, Melissa Arnaud, Melvin Andrew McElrath, Michael A. Yee, Michael Cohen, Mi Gu, Michael I. Ivanitskiy, Michael Starritt, Michael Strube, Michal Swkedrowski, Michele Bevilacqua, Michihiro Yasunaga, Mihir Kale, Mike Cain, Mimee Xu, Mirac Suzgun, Monica Tiwari, Mohit Bansal, Moin Aminnaseri, Mor Geva, Mozhdeh Gheini, T MukundVarma, Nanyun Peng, Nathan Chi, Nayeon Lee, Neta Gur-Ari Krakover, Nicholas Cameron, Nicholas S. Roberts, Nicholas Doiron, Nikita Nangia, Niklas Deckers, Niklas Muennighoff, Nitish Shirish Keskar, Niveditha Iyer, Noah Constant, Noah Fiedel, Nuan Wen, Oliver Zhang, Omar Agha, Omar Elbaghdadi, Omer Levy, Owain Evans, Pablo Antonio Moreno Casares, Parth Doshi, Pascale Fung, Paul Pu Liang, Paul Vicol, Pegah Alipoormolabashi, Peiyuan Liao, Percy Liang, Peter W. Chang, Peter Eckersley, Phu Mon Htut, PiBei Hwang, P. Milkowski, Piyush S. Patil, Pouya Pezeshkpour, Priti Oli, Qiaozhu Mei, QING LYU,
Qinlang Chen, Rabin Banjade, Rachel Etta Rudolph, Raefer Gabriel, Rahel Habacker, Ram'on Risco Delgado, Raphaël Millière, Rhythm Garg, Richard Barnes, Rif A. Saurous, Riku Arakawa, Robbe Raymaekers, Robert Frank, Rohan Sikand, Roman Novak, Roman Sitelew, Ronan Le Bras, Rosanne Liu, Rowan Jacobs, Rui Zhang, Ruslan Salakhutdinov, Ryan Chi, Ryan Lee, Ryan Stovall, Ryan Teehan, Rylan Yang, Sahib J. Singh, Saif M. Mohammad, Sajant Anand, Sam Dillavou, Sam Shleifer, Sam Wiseman, Samuel Gruetter, Sam Bowman, Samuel S. Schoenholz, Sanghyun Han, Sanjeev Kwatra, Sarah A. Rous, Sarik Ghazarian, Sayan Ghosh, Sean Casey, Sebastian Bischoff, Sebastian Gehrmann, Sebastian Schuster, Sepideh Sadeghi, Shadi S. Hamdan, Sharon Zhou, Shashank Srivastava, Sherry Shi, Shikhar Singh, Shima Asaadi, Shixiang Shane Gu, Shubh Pachchigar, Shubham Toshniwal, Shyam Upadhyay, Shyamolima Debnath, Siamak Shakeri, Simon Thormeyer, Simone Melzi, Siva Reddy, Sneha Priscilla Makini, Soo hwan Lee, Spencer Bradley Torene, Sriharsha Hatwar, Stanislas Dehaene, Stefan Divic, Stefano Ermon, Stella Rose Biderman, Stephanie C. Lin, Stephen Prasad, Steven T. Piantadosi, Stuart M.
Shieber, Summer Misherghi, Svetlana Kiritchenko, Swaroop Mishra, Tal Linzen, Tal Schuster, Tao Li, Tao Yu, Tariq A. Ali, Tatsuo Hashimoto, Te-Lin Wu, Theo Desbordes, Theodore Rothschild, Thomas Phan, Tianle Wang, Tiberius Nkinyili, Timo Schick, T. N. Kornev, Timothy Telleen-Lawton, Titus Tunduny, Tobias Gerstenberg, Trenton Chang, Trishala Neeraj, Tushar Khot, Tyler O'Brien Shultz, Uri Shaham, Vedant Misra, Vera Demberg, Victoria Nyamai, Vikas Raunak, Vinay Venkatesh Ramasesh, Vinay Uday Prabhu, Vishakh Padmakumar, Vivek Srikumar, William Fedus, William Saunders, William Zhang, W Vossen, Xiang Ren, Xiaoyu F Tong, Xinyi Wu, Xudong Shen, Yadollah Yaghoobzadeh, Yair Lakretz, Yang Song, Yasaman Bahri, Ye Ji Choi, Yichi Yang, Yiding Hao, Yifu Chen, Yonatan Belinkov, Yu Hou, Yu Hou, Yushi Bai, Zachary Seid, Zhao Xinran, Zhuoye Zhao, Zi Fu Wang, Zijie J.
Wang, Zirui Wang, Ziyi Wu, Sahib Singh, and Uri Shaham. 2022. Beyond the imitation game: Quantifying and extrapolating the capabilities of language models. *ArXiv*, abs/2206.04615.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Ł ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In *Advances in Neural Information Processing Systems*, volume 30. Curran Associates, Inc.
Leandro von Werra, Lewis Tunstall, Abhishek Thakur, Alexandra Sasha Luccioni, Tristan Thrush, Aleksandra Piktus, Felix Marty, Nazneen Rajani, Victor Mustar, Helen Ngo, Omar Sanseviero, Mario vSavsko, Albert Villanova, Quentin Lhoest, Julien Chaumond, Margaret Mitchell, Alexander M. Rush, Thomas Wolf, and Douwe Kiela. 2022. Evaluate&evaluation on the hub: Better best practices for data and model measurements.
Alex Wang, Yada Pruksachatkun, Nikita Nangia, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel Bowman. 2019a. Superglue: A stickier benchmark for general-purpose language understanding systems. In *Advances in Neural Information* Processing Systems, volume 32. Curran Associates, Inc.
Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R. Bowman. 2019b.
GLUE: A multi-task benchmark and analysis platform for natural language understanding. In *International Conference on Learning Representations*.
Bonnie Webber. 2018. Obituary: Aravind K. Joshi.
Computational Linguistics, 44(3):387–392.
Jason Wei, Yi Tay, Rishi Bommasani, Colin Raffel, Barret Zoph, Sebastian Borgeaud, Dani Yogatama, Maarten Bosma, Denny Zhou, Donald Metzler, Ed H.
Chi, Tatsunori Hashimoto, Oriol Vinyals, Percy Liang, Jeff Dean, and William Fedus. 2022. Emergent abilities of large language models. *Transactions* on Machine Learning Research. Survey Certification.
WELM. 2021. Workshop on Enormous Language Models (WELM).
Chris Welty, Praveen K. Paritosh, and Lora Aroyo. 2019.
Metrology for ai: From benchmarks to instruments.
ArXiv, abs/1911.01875.
Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Transformers: State-of-the-art natural language processing.
In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38–45, Online. Association for Computational Linguistics.
Binhang Yuan, Yongjun He, Jared Quincy Davis, Tianyi Zhang, Tri Dao, Beidi Chen, Percy Liang, Christopher Re, and Ce Zhang. 2022. Decentralized training of foundation models in heterogeneous environments.
In *Advances in Neural Information Processing Systems*.
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
✓ A1. Did you describe the limitations of your work?
Limitations (pg 5)
✓ A2. Did you discuss any potential risks of your work?
Ethics (pg 5)
✓ A3. Do the abstract and introduction summarize the paper's main claims?
Yes (Abstract, Intro)
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✗ **Did You Use Or Create Scientific Artifacts?**
Left blank.
B1. Did you cite the creators of artifacts you used?
Not applicable. Left blank.
B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
Not applicable. Left blank.
B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
Not applicable. Left blank.
B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
Not applicable. Left blank.
B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
Not applicable. Left blank.
B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
Not applicable. Left blank.
## C ✗ **Did You Run Computational Experiments?**
Left blank.
C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used? No response.
The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance.
C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
No response.
C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
No response.
C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
No response.
D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
No response.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? No response.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
No response.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
No response.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
No response. |
kim-etal-2023-reconstruction | Reconstruction Probing | https://aclanthology.org/2023.findings-acl.523 | We propose reconstruction probing, a new analysis method for contextualized representations based on reconstruction probabilities in masked language models (MLMs). This method relies on comparing the reconstruction probabilities of tokens in a given sequence when conditioned on the representation of a single token that has been fully contextualized and when conditioned on only the decontextualized lexical prior of the model. This comparison can be understood as quantifying the contribution of contextualization towards reconstruction{---}the difference in the reconstruction probabilities can only be attributed to the representational change of the single token induced by contextualization. We apply this analysis to three MLMs and find that contextualization boosts reconstructability of tokens that are close to the token being reconstructed in terms of linear and syntactic distance. Furthermore, we extend our analysis to finer-grained decomposition of contextualized representations, and we find that these boosts are largely attributable to static and positional embeddings at the input layer. | # Reconstruction Probing
Najoung Kim,† Jatin Khilnani,∆ Alex Warstadt,δ **and Abed Qaddoumi**ρ
†Boston University ∆University of Pittsburgh δETH Zurich ρIndependent [email protected] [email protected] [email protected] [email protected]
## Abstract
We propose *reconstruction probing*, a new analysis method for contextualized representations based on reconstruction probabilities in masked language models (MLMs). This method relies on comparing the reconstruction probabilities of tokens in a given sequence when conditioned on the representation of a single token that has been fully contextualized and when conditioned on only the decontextualized lexical prior of the model. This comparison can be understood as quantifying the contribution of contextualization towards reconstruction—the difference in the reconstruction probabilities can only be attributed to the representational change of the single token induced by contextualization. We apply this analysis to three MLMs and find that contextualization boosts reconstructability of tokens that are close to the token being reconstructed in terms of linear and syntactic distance. Furthermore, we extend our analysis to finer-grained decomposition of contextualized representations, and we find that these boosts are largely attributable to static and positional embeddings at the input layer.
## 1 Introduction
Model building in contemporary Natural Language Processing usually starts with a neural network pretrained on the objective of context reconstruction
("language modeling"). Contextualized representations of complex linguistic expressions from such models have been shown to encode rich lexical and structural information (Tenney et al., 2019b; Rogers et al., 2020), making these models an effective starting point for downstream applications.
Probing pretrained language models aims to understand the linguistic information they encode, and how well it aligns with our understanding of human language (see Belinkov 2022 for a review). The methodologies employed include supervised classifiers targeting specific linguistic properties of interest (Ettinger et al. 2016; Giulianelli et al. 2018; Tenney et al. 2019a; Conia and Navigli 2022), similarity-based analyses (Garí Soler and Apidianaki, 2021; Lepori and McCoy, 2020), clozetype tests (Goldberg, 2019; Pandit and Hou, 2021),
and causal intervention-based methods (Vig et al.,
2020; Elazar et al., 2021; Geiger et al., 2021). This methodological diversity is beneficial given the high variability of conclusions that can be drawn from a study using a single method (Warstadt et al.,
2019)—converging evidence is necessary for a more general picture.
We contribute to this line of research with a new analysis method that we name *reconstruction probing*, which relies on token probabilities obtained from context reconstruction, applicable to models pretrained on objectives of this kind.1 Our method is characterized by two core properties. First, it is causal: rather than asking "what features can we extract from the contextualized representations?", we ask "what effect does contextual information have on the model predictions?" through intervention at the input level. Second, our method is behavioral:
it relies on the context reconstruction objective that the model was trained on. This obviates the need to train specialized probes, which can be difficult to interpret due to the added confound of task-specific supervision.
Our method aims to probe how much information the contextualized representation of a **single**
token contains about the other tokens that co-occur with it in a given sequence in masked language models. Our approach is to measure the difference between the reconstruction probability of a co-occurring token in the sequence given the full contextualized representation being probed, and the reconstruction probability of the same co-occurring token only from the lexical priors of the model.
This method can be generalized to compare two arbitrary representations where one representation 1Code and data available at https://github.com/
najoungkim/mlm-reconstruction.
Reconstructing: **Buddy chased the cat**
![1_image_0.png](1_image_0.png)
Buddy0,C=(chased the cat)_OUT [MASK]1_OUT, [MASK]2_OUT, [MASK]3_OUT
[MASK]2
[MASK]3
is expected to contain strictly more features than the other (e.g., a static embedding of a token vs. an embedding of the same token created by summing the static embedding and its positional embedding in context). Any difference between the reconstruction probabilities can be attributed to the presence/absence of those features.
Using this method, we find that the contextualized representation of a token contains more information about tokens that are closer in terms of linear and syntactic distance, but do not necessarily encode identities of those tokens. A follow-up analysis that decomposes contextualized representations furthermore shows that the gains in reconstructability we find are largely attributable to static and positional embeddings at the input layer.
## 2 Proposed Approach
Pretrained Transformer models such as BERT (Devlin et al., 2019) learn to construct contextual representations through context reconstruction objectives like masked language modeling (MLM; e.g.,
predicting the token in place of [MASK] in The [MASK] *sat on the mat*). Often, the models are also trained to reconstruct a randomly substituted token (e.g., predicting the token in place of *door* in The cat sat door the mat, created by randomly substituting a word in *The cat sat on the mat*). The classifier that makes these predictions can only make use of a single token representation from the final layer, meaning these representations are optimized to contain information about other tokens of the sequence and the position of the token itself insofar as this information can help to resolve the identity of the token. Our approach aims to quantify how much the contextualization of these tokens contributes to changing the MLM predictions.
## 2.1 Metric
We operationalize *contextual informativeness* of a token representation as its contribution to predicting other tokens in the same sequence—i.e.,
the contribution to the MLM probability, or *reconstruction probability*. We quantify the contribution of a more informative token representation j
++
towards reconstructing a different token i, by comparing the reconstruction probability P(i|j
++) to the reconstruction probability of i given a less informative token representation j, P(i|j).
For example, you can obtain the contextualized representation of *Buddy* in the input sequence *Buddy chased Cookie* by passing this through a model. If Buddy*contextual* encodes information helpful for predicting *chased*,
the masked language modeling probability P(*chased*|C[[MASK]1,Buddy*contextual*]) would be higher than P(*chased*|C[[MASK], ∅])—the lexical prior of the model for *chased*.
2 The difference between these probabilities is measured 2We use C[[MASK]pos, SOURCE] to refer to the contextualized representation of the [MASK] token at position pos at the output layer of the model, which is the input to the final classifier that produces the probability distribution for masked token prediction. See Section 2.2 for a full description of how we compute reconstruction probabilities.
SOURCE MODEL **RECONSTRUCTION MODEL**
[MASK]OUT
[MASK]OUT
[MASK]OUT BuddyOUT
chasedOUT
theOUT
catOUT
Layer N
![2_image_0.png](2_image_0.png)
theL2 catL2
[MASK]L2
[MASK]L2
[MASK]L2 BuddyL2
…
…
Layer 2 theL1 catL1
[MASK]L1
[MASK]L1
[MASK]L1 BuddyL1 Layer 1 BuddyOUT
![2_image_1.png](2_image_1.png)
Layer N
![2_image_2.png](2_image_2.png)
Layer 2 Layer 1 Buddy [MASK] [MASK] [MASK]
in terms of the log odds ratio given the base reconstruction probability q (predicting from less context) and the contextualized reconstruction probability p (predicting from more context):
$$\mathbb{L}\mathbb{O R}(p,q)=\ln\left({\frac{p/(1-p)}{q/(1-q)}}\right)$$
(1)
The probabilities p and q are defined with respect to SOURCE and RECONSTRUCTION (shortened as RE-CON) tokens. SOURCE tokens refer to tokens that are revealed to the model at prediction time (e.g.,
Buddy in the running example). RECON tokens are tokens in the original sequence the model is asked to predict (e.g., *chased* in the running example). In obtaining probabilities p and q, the RECON tokens are replaced with [MASK] tokens, only leaving the SOURCE token revealed to the model (more detailed description is given in Section 2.2). MLM
probability of the token in the original sequence is computed for each [MASK] token in the probe input—for instance, for Buddy*contextual* [MASK]
[MASK], we compute the probability of *chased* at position 1 given this sequence, and *Cookie* at position 2 given this sequence. We compute Eq. 1 for every pair of tokens (ti, tj ) in a given sequence, where tiis SOURCE and tj is RECON. This value represents the degree of change in the probability of the reconstruction token tj induced by the contextualization of the source token ti.
## 2.2 Obtaining The Reconstruction Probabilities
We use the metric proposed above to gauge the contribution of a contextualized representation of a single token in reconstructing its context, over and above the lexical prior (i.e., completely contextindependent) of the model as illustrated in Figure 1.
We describe below how the reconstruction probabilities from a fully contextualized representation and from the lexical prior of the model are obtained.
$$\mathrm{(1)}$$
Fully Contextualized To obtain a fully contextualized representation of a token in a particular sequence (e.g., *Buddy chased Cookie*), we first pass the original, unmasked sequence of tokens through a masked language model. Here, we save each contextualized token representation at each layer of the model (e.g., BuddyL1, *Buddy*L2, . . . , *Buddy*Lm where m is the number of layers). Then, we create n (n = |seq|) versions of the input sequence where only a single token is revealed (*Buddy* [MASK] [MASK], [MASK] *chased* [MASK], [MASK]
[MASK] *Cookie*). We pass each sequence through the same masked language model, but at each layer, we replace the representation of the unmasked token with the stored contextualized representation of that token (see Figure 2 for an illustration).
Then, in order for the masked language modeling head to predict each [MASK] token in the sequence, it can only rely on the information from the representation of the single unmasked token
(SOURCE), where the SOURCE token representation is contextualized with respect to the original,
![3_image_0.png](3_image_0.png)
fully unmasked sequence. For each [MASK] token in the sequence, we take the probability of the token in the same position in the original sequence as the reconstruction probability. For example, P(*chased*|C[[MASK]1,Buddy*contextual*]) and P(*Cookie*|C[[MASK]2,Buddy*contextual*]) are the reconstruction probabilities of *chased* and *Cookie*,
respectively, given the representation of fully contextualized *Buddy*.
Lexical Prior Only Baseline We pass through a fully masked version of the input sequence as above, but do not add the positional embeddings at the input layer. The reconstruction probability that we obtain here corresponds to the probability of predicting the token in the original sequence in the absence of any lexical information and positional information. We expect this probability to reflect a general prior of the model over the vocabulary, for instance based on frequency in the training corpus.
## 3 Experiment Setup 3.1 Models
We analyzed three Transformer-based masked language models widely used for obtaining contextualized representations: BERT (Devlin et al., 2019),
RoBERTa (Liu et al., 2019) and DistilBERT (Sanh et al., 2019). BERT and RoBERTa were both pretrained using the masked language modeling objective (BERT also on Next Sentence Prediction), and DistilBERT is a more compact version of BERT obtained through knowledge distillation. DistilBERT
has been claimed to retain much of the downstream task performance of BERT despite being substantially smaller (Sanh et al., 2019), and has been shown to be highly similar to BERT in terms of constituency trees that can be reconstructed from linear probes (Arps et al., 2022).
## 3.2 Data
We used sentences from the Multi-Genre Natural Language Inference (MNLI; Williams et al. 2018)
dataset for this analysis. We selected MNLI because it contains sentences of varying lengths from a range of domains, and is not a part of the pretraining data of the models we are probing. We then sampled 10K premise sentences from the nonspoken genres of the dataset (i.e., excluding TELE-PHONE and FACE-TO-FACE). We excluded spoken data as it is less typical of the data domain the models were trained on, and we excluded hypothesis sentences because they were generated by crowdworkers given the naturally-occurring premises.
## 3.3 Procedure
For each of the 10K sentences, we created two different sets of probe inputs as illustrated in Figure 1.
We passed the probe inputs to the models to obtain the two different reconstruction probabilities (from lexical prior only vs. from a fully contextualized source token) of each of the tokens in the input, as described in Section 2.2. Finally, we computed the log odds ratios between the two reconstruction probabilities using Eq. 1 to quantify the contribution of contextualization for all possible (SOURCE,
RECON) token pairs in the original sentence.
## 4 Analyses 4.1 Is Token Identity Exactly Recoverable From Contextualized Representations?
The RECON token is among the top 10 MLM predictions of the model only a small percent of the time (BERT: 22.1%. RoBERTa: 7.9%, DistilBERT: 8.2%), even though the SOURCE token provided to the model has been contextualized with all co-occurring tokens revealed. This observation suggests that the information encoded in the
![4_image_0.png](4_image_0.png)
contextualized representations is a degree more abstract than directly encoding the identities of cooccurring tokens in the same sequence. This is in line with Klafka and Ettinger's (2020) finding that the features of co-occurring tokens rather than their identities are often more recoverable from the contextual representations.
## 4.2 Is Reconstructability Greater When Tokens Are In A Syntactic Relation?
We hypothesize that the contextual information in an embedding should disproportionally reflect the syntactic neighbors of the word. To test this hypothesis, we partition reconstructability scores based on the syntactic relation between the SOURCE and RE-CON tokens as follows:3(1) SOURCE/**RECON** is head: Cases where there is a single dependency arc between two tokens, the closest dependency relation possible with the exception of subword tokens.
Reconstructing cat from *chased* in Figure 5 would be a case of SOURCE is head, and *chased* from cat would be RECON is head. (2) SOURCE/**RECON**
is ancestor: Cases where there is more than one dependency arc connecting the two tokens. Reconstructing the from *chased* would be a case of SOURCE is ancestor, and *chased* from the would be RECON is ancestor. (3) **subword**: SOURCE/RECON
tokens are subwords of the same lexical item. Bud and *\#\#dy* is an example. (4) **No relation**: None of the above relations holds. For example, tokens Bud and the are not in a dependency relation.
$$\overbrace{\mathrm{{Bud}}}^{\mathrm{{\small{~\begin{array}{l l l}{\mathrm{~\#dyd}}}&{\mathrm{~chased~}}}&{\mathrm{~the~\quadcat~}}}\end{array}}^{\mathrm{{\small{~\begin{array}{l l l}{\mathrm{~\#dyd}}}&{\mathrm{~chased~}}}&{\mathrm{~the~\quadcat~}}}$$
Figure 5: The dependency parse of the sentence Buddy chased the cat.
Our results in Figure 3 confirm our hypothesis.
In general, we find that the degree to which contextual information improves reconstruction depends on the existence of a syntactic relation between the SOURCE and RECON as expected. In all models, tokens in a subword or head-dependent relation are more reconstructable from each other compared to tokens with no relation. Furthermore, among tokens that are in a dependency relation, the closer the relation, the higher the reconstruction boost: reconstruction boost is the greatest for tokens in a subword relation, then for tokens in a head-dependent relation, and then for tokens in ancestor-descendant relation. These trends were consistent across all models we evaluate, with the exception of DistilBERT where reconstruction boost when SOURCE
is head was greater than tokens in a subword relation. The models showed more variation in whether ancestor relations boosted reconstructability significantly. While tokens in an ancestor-descendant relation (excluding direct dependents) were more reconstructable than tokens not in a dependency relation in BERT, this was not the case for RoBERTa and DistilBERT. We also did not find a large or consistent effect of whether the SOURCE token or the RECON token is the ancestor (including direct headdependent relations). Thus we cannot conclude that ancestors tend to contain more information about descendants than vice-versa.
## 4.3 Finer-Grained Syntactic Properties
In the next set of analyses, we study how finegrained syntactic properties of the words affect reconstructability, focusing on cases where there is a syntactic relation between SOURCE and RECON.
Dependency Relations One natural way to break down the results is by the label of the dependency relation that holds between SOURCE and RECON
when such a relation exists. However, we did not find overarching trends; results were generally idiosyncratic, although boost for token pairs in ROOT
and PRT (particle) relations was high across all models. See Appendix A for full results.
Functional Relations Next, we zoom in on relations between functional heads and their contentword dependents (Figure 4). Table 1 lists all the dependency arcs we use to identify functional heads.4 First, we find that reconstructability is generally high for these pairs. Second, auxiliary-verb relations are associated with particularly high reconstructability for all models. One possible explanation for this finding is the fact that there is always morphological agreement between auxiliaries and verbs, unlike most other functional relations. Third, among functional relations, reconstructability is always lowest for complementizer-verb relations
(labeled *mark*). We speculate that the complementizer might encode contextual information about the entire complement clause, which often includes many more content words than just the head verb.
We hypothesized that functional heads encode more information about their dependents in context than vice-versa, due to function words carrying less information than content words, but their contextual representations are equal in size, leaving more space for information about the rest of the sentence. Results from BERT support the hypothesis for all relations. On the other hand, no consistent asymmetry was observed for RoBERTa, and for DistilBERT, the observed pattern mostly contradicts our hypothesis. The large difference between BERT and DistilBERT results goes against prior 4While function words are typically considered heads of content words in linguistic theory, the opposite is often true in dependency labeling schemes.
| Relation | FW is ... | Example |
|------------|-------------|-----------------------------|
| aux | Dependent | The dog is sleeping. |
| auxpass | Dependent | The dog was taken out. |
| case | Dependent | The dog 's bone is gone. |
| det | Dependent | The dog barked. |
| mark | Dependent | I think that the dog ate. |
| pcomp | Head | I dream about dogs playing. |
| pobj | Head | I played with the dog. |
results that suggest that the syntactic trees recoverable from these two models are highly similar
(Arps et al., 2022).
## 4.4 Linear And Structural Distance
We also hypothesized that the distance between two tokens (both in linear and structural terms) would affect reconstruction. Linear distance is the difference between the linear indices of SOURCE and RECON: if they are the i th and j th tokens respectively, their linear distance is |i − j|. Structural distance is the number of arcs in the directed path between SOURCE and RECON tokens (if there is a path). For example, in Figure 5 the structural distance between the and *chased* is 2.
![5_image_0.png](5_image_0.png)
Linear Distance Predictably, we find that information encoded in contextualized representations is biased towards nearby tokens in linear space
(Figure 6, row 1). In other words, we find that reconstructability generally decreases with increase
Sequence to reconstruct Buddy chased Cookie -
| Ablated sequence | Probe input (SOURCE == 'Buddy') |
|--------------------|-----------------------------------|
Fully contextualized Buddy*contextual* chased*contextual* Cookie*contextual* Buddy*contextual* [MASK] [MASK]
Static lexical embedding (+position) Buddy*static* chased*static* Cookie*static* Buddy*static* [MASK] [MASK]
Static lexical embedding (-position) {Buddy*static*, chased*static*, Cookie*static*} {Buddy*static*, [MASK], [MASK]}
All mask (+position) [MASK] [MASK] [MASK] [MASK] [MASK] [MASK] All mask (-position) (Lexical prior only) {[MASK], [MASK], [MASK]} {[MASK], [MASK], [MASK]}
in linear distance. For all models, the sharpest decrease is observed between 1- and 2-token distances. Beyond this, reconstructability decreases approximately linearly in BERT, and more gradually in RoBERTa and DistilBERT.
Structural Distance The second row of Figure 6 shows the decline in reconstructability as the number of intervening nodes in the dependency path between the tokens increases when comparing reconstruction. This trend is strictly monotonic in BERT, but there is an small increase starting from dependency depth 7 in RoBERTa and DistilBERT.
Due to the high variance in the deeper depth cases, it is unclear whether this is a genuine effect of contextualization.
## 5 Decomposing Contextualization
While we examined the effect of contextualization compared to the lexical prior only baseline, our method allows for a finer-grained decomposition of the components of contextualization. In pretrained Transformer models, the input representation of a token is a function of the static lexical embedding and a (context-specific) positional embedding.
Using our method, we can study the individual influence of the lexical embedding, positional embedding, and remaining sequence-specific contextualization (i.e., everything that happens beyond the input layer, *full contextualization* henceforth).
We create various ablated versions of a fully contextualized sequence, as shown in the **Ablated**
sequence column of Table 2. The reconstruction probabilities from these ablated sequences allow us to probe the contribution of the various components of contextualized language models. **Fully**
contextualized and **All mask (-position)** in Table 2 correspond to the reconstruction probabilities described and compared in Section 2.2, and the rest are intermediate ablations.
## 5.1 Results
Surprisingly, we find that there is often no clear benefit to reconstruction of providing the model with the contextualized embeddings at each layer, over just providing the input embedding (lexical + positional embeddings) of the source token (Figure 7, bottom). While BERT does gain reconstructability from full contextualization for subwords and when SOURCE is a head/ancestor, contextualization is generally harmful or at least not helpful to reconstruction for RoBERTa and DistilBERT. This indicates that the positive reconstruction boost observed in Figure 3 must be driven by static lexical and positional embeddings. Indeed, there are generally positive gains in reconstructability in models provided with the lexical embeddings of the SOURCE tokens compared to models given only
[MASK] tokens (Figure 7, top), and also in models provided with positional embeddings on top of lexical embeddings (Figure 9, middle column; Appendix B.3). We provide full comparisons between ablations and their interpretation in Appendix B.
## When Is Full Contextualization Helpful/Harmful?
To better understand the effect of full contextualization, we manually examined token pairs where the greatest differences in reconstruction probabilities with the static lexical + positional and fully contextual SOURCE tokens. In BERT and DistilBERT, the majority (52% and 80%) of the 100 most helpful scenarios of full contextualization involved reconstruction of an apostrophe in a contraction from single-character or bi-character tokens (e.g., m, t, re). As the source token is highly ambiguous on its own, contextualization seems to provide additional information that these (bi)character tokens are a part of a contraction (e.g., I'm, wasn't, *we're*. In RoBERTa, we found no interpretable pattern.
Cases where full contextualization negatively affected reconstruction were often when SOURCE
and RECON formed a common bigram (e.g., (*prix*,
![7_image_0.png](7_image_0.png)
grand), (*according*, to), (*\#\#ritan*, pu), (*United*,
States)). Since the RECON token is predictable from SOURCE alone, full contextualization seems to only dilute the signal.
Although we found that reconstruction is often better given only input embeddings (i.e., static +
positional embeddings) than fully contextualized embeddings, we take caution with the interpretation that full layerwise contextualization is in general harmful to the models, especially given prior evidence (Tenney et al., 2019a) that transformations across layers yield meaningful changes. One possible interpretation is that the idiosyncrasy of the procedure for transferring the contextualized source token falls outside the setting in which these models were trained, adding noise to the process.
## 6 Related Work
Our research question is similar to Klafka and Ettinger (2020) which use supervised classifiers to investigate how much information about other tokens in context is contained in the contextualized representation(s) of a token. Our approach addressed a similar question through reconstruction probability given more/less informative token representations. Our findings about better reconstructability between tokens in a syntactic dependency relation echo prior work that show sensitivity of MLMs to part-of-speech and other syntactic relations (Tenney et al., 2019b; Goldberg, 2019; Htut et al., 2019; Kim and Smolensky, 2021). A novel finding is that some of the syntactic dependency between tokens can be traced back to information in the input embeddings, complementing the dynamic layerwise analysis in work such as Tenney et al. (2019a)
and Jawahar et al. (2019). This result aligns with Futrell et al. (2019)'s observation that syntactic dependency is reflected in the corpus distribution as encoded in static embeddings. Existing work that analyzes static embeddings from contextualized models (Bommasani et al., 2020; Chronis and Erk, 2020; Sajjad et al., 2022) mostly concerns the *distillation* of static embeddings rather than isolating the contribution of static embeddings in contextualized prediction as in our work. More broadly, our work shares goals with intervention-based methods such as Geiger et al. (2021) and Wu et al. (2020), but we examine what the effect of our intervention is on masked language modeling probabilities rather than on separate downstream tasks. Karidi et al.
(2021) employs the most similar methodology to ours, in their use of predictions from the masked language modeling objective directly for probing.
However, their primary analysis concerns the role of contextualization in word sense disambiguation.
## 7 Conclusion
We proposed *reconstruction probing*, a novel method that compares reconstruction probabilities of tokens in the original sequence given different amounts of contextual information. Overall, reconstruction probing yields many intuitive results. We find that the information encoded in these representations tend to be a degree more abstract than token identities of the neighboring tokens—often, the exact identities of co-occurring tokens are not recoverable from the contexutalized representations.
Instead, reconstructability is correlated with the closeness of the syntactic relation, the linear distance, and the type of syntactic relation between the SOURCE and RECON tokens. These findings add converging evidence to previous probing studies about the implicit syntactic information of contextual embeddings (Tenney et al. 2019b). Furthermore, our method is generalizable to comparing reconstruction probabilities from any pair of representations that differ in the degree of informativeness. We extended our analysis to finer-grained decomposition of the components that constitute contextualized representations using this method, finding that most of the reconstruction gains we saw were attributable to information contained in static lexical and positional embeddings at the input layer. This calls for deeper investigations into the role of token representations at the input layer, complementing a large body of existing work on layerwise analysis of contextualized language models.
## Limitations
As we discussed in Section 5.1, further work is needed to investigate whether the negative effect of full contextualization beyond static + positional embeddings at the input layer is an idiosyncrasy of the embedding transfer procedure, or if this is a true effect. In future work, an experimental setup that is closer to the training setup, such as masking only the RECON token instead of all tokens and transferring the SOURCE could be adopted, in order to reduce the noise potentially introduced by the distributional change in the inputs. Regardless, we believe that findings regarding the information content of the representation at the input layer (static
+ positional embeddings) are novel and meaningful, and the quantification method we propose for comparing two representations in terms of their predictive utility is a generalizable methodological contribution.
We furthermore note that our attempts to conduct evaluation on newer masked language models were made challenging due to several technical issues in the library (e.g., masked language modeling being unavailable in DeBERTa (He et al., 2021):
https://github.com/huggingface/
transformers/pull/18674).
## Acknowledgments
We thank Sebastian Schuster, Grusha Prasad, Sophie Hao, and the members of the NYU Computation and Psycholinguistics lab for helpful discussions. This research was conducted through the NYU IT High Performance Computing resources, services, and staff expertise.
## References
David Arps, Younes Samih, Laura Kallmeyer, and Hassan Sajjad. 2022. Probing for constituency structure in neural language models. In *Findings of the Association for Computational Linguistics: EMNLP 2022*,
pages 6738–6757, Abu Dhabi, United Arab Emirates.
Association for Computational Linguistics.
Yonatan Belinkov. 2022. Probing classifiers: Promises, shortcomings, and advances. *Computational Linguistics*, 48(1):207–219.
Rishi Bommasani, Kelly Davis, and Claire Cardie. 2020.
Interpreting Pretrained Contextualized Representations via Reductions to Static Embeddings. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*, pages 4758–
4781, Online. Association for Computational Linguistics.
Gabriella Chronis and Katrin Erk. 2020. When is a bishop not like a rook? when it's like a rabbi! multiprototype BERT embeddings for estimating semantic relationships. In *Proceedings of the 24th Conference on Computational Natural Language Learning*, pages 227–244, Online. Association for Computational Linguistics.
Simone Conia and Roberto Navigli. 2022. Probing for predicate argument structures in pretrained language models. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics
(Volume 1: Long Papers), pages 4622–4632, Dublin, Ireland. Association for Computational Linguistics.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for
Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics.
Yanai Elazar, Shauli Ravfogel, Alon Jacovi, and Yoav Goldberg. 2021. Amnesic probing: Behavioral explanation with amnesic counterfactuals. Transactions of the Association for Computational Linguistics, 9:160–
175.
Allyson Ettinger, Ahmed Elgohary, and Philip Resnik.
2016. Probing for semantic evidence of composition by means of simple classification tasks. In Proceedings of the 1st Workshop on Evaluating Vector-Space Representations for NLP, pages 134–139, Berlin, Germany. Association for Computational Linguistics.
Richard Futrell, Peng Qian, Edward Gibson, Evelina Fedorenko, and Idan Blank. 2019. Syntactic dependencies correspond to word pairs with high mutual information. In *Proceedings of the Fifth International* Conference on Dependency Linguistics (Depling, SyntaxFest 2019), pages 3–13, Paris, France. Association for Computational Linguistics.
Aina Garí Soler and Marianna Apidianaki. 2021. Let's play mono-poly: BERT can reveal words' polysemy level and partitionability into senses. Transactions of the Association for Computational Linguistics, 9:825– 844.
Atticus Geiger, Hanson Lu, Thomas Icard, and Christopher Potts. 2021. Causal abstractions of neural networks. In *Advances in Neural Information Processing Systems*, volume 34, pages 9574–9586. Curran Associates, Inc.
Mario Giulianelli, Jack Harding, Florian Mohnert, Dieuwke Hupkes, and Willem Zuidema. 2018. Under the hood: Using diagnostic classifiers to investigate and improve how language models track agreement information. In *Proceedings of the 2018 EMNLP*
Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 240–248, Brussels, Belgium. Association for Computational Linguistics.
Yoav Goldberg. 2019. Assessing BERT's syntactic abilities. *arXiv:1901.05287*.
Pengcheng He, Xiaodong Liu, Jianfeng Gao, and Weizhu Chen. 2021. DeBERTa: Decoding-enhanced BERT with disentangled attention. In International Conference on Learning Representations.
Phu Mon Htut, Jason Phang, Shikha Bordia, and Samuel R. Bowman. 2019. Do attention heads in BERT track syntactic dependencies?
arXiv:1911.12246.
Ganesh Jawahar, Benoît Sagot, and Djamé Seddah.
2019. What does BERT learn about the structure of language? In *Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics*,
pages 3651–3657, Florence, Italy. Association for Computational Linguistics.
Taelin Karidi, Yichu Zhou, Nathan Schneider, Omri Abend, and Vivek Srikumar. 2021. Putting words in BERT's mouth: Navigating contextualized vector spaces with pseudowords. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 10300–10313, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Najoung Kim and Paul Smolensky. 2021. Testing for grammatical category abstraction in neural language models. In *Proceedings of the Society for Computation in Linguistics 2021*, pages 467–470, Online.
Association for Computational Linguistics.
Josef Klafka and Allyson Ettinger. 2020. Spying on your neighbors: Fine-grained probing of contextual embeddings for information about surrounding words. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4801–4811, Online. Association for Computational Linguistics.
Michael Lepori and R. Thomas McCoy. 2020. Picking BERT's brain: Probing for linguistic dependencies in contextualized embeddings using representational similarity analysis. In *Proceedings of the 28th International Conference on Computational Linguistics*, pages 3637–3651, Barcelona, Spain (Online). International Committee on Computational Linguistics.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019.
RoBERTa: A robustly optimized BERT pretraining approach. *arXiv:1907.11692*.
Onkar Pandit and Yufang Hou. 2021. Probing for bridging inference in transformer language models. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4153–4163, Online. Association for Computational Linguistics.
Anna Rogers, Olga Kovaleva, and Anna Rumshisky.
2020. A primer in BERTology: What we know about how BERT works. Transactions of the Association for Computational Linguistics, 8:842–866.
Hassan Sajjad, Firoj Alam, Fahim Dalvi, and Nadir Durrani. 2022. Effect of post-processing on contextualized word representations. In *Proceedings of the* 29th International Conference on Computational Linguistics, pages 3127–3142, Gyeongju, Republic of Korea. International Committee on Computational Linguistics.
Victor Sanh, Lysandre Debut, Julien Chaumond, and Thomas Wolf. 2019. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. In The 5th Workshop on Energy Efficient Machine Learning and Cognitive Computing, NeurIPS 2019.
Ian Tenney, Dipanjan Das, and Ellie Pavlick. 2019a.
BERT rediscovers the classical NLP pipeline. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4593–
4601, Florence, Italy. Association for Computational Linguistics.
Ian Tenney, Patrick Xia, Berlin Chen, Alex Wang, Adam Poliak, R Thomas McCoy, Najoung Kim, Benjamin Van Durme, Samuel R. Bowman, Dipanjan Das, and Ellie Pavlick. 2019b. What do you learn from context? Probing for sentence structure in contextualized word representations. In International Conference on Learning Representations.
Jesse Vig, Sebastian Gehrmann, Yonatan Belinkov, Sharon Qian, Daniel Nevo, Yaron Singer, and Stuart Shieber. 2020. Investigating gender bias in language models using causal mediation analysis. In Advances in Neural Information Processing Systems, volume 33, pages 12388–12401. Curran Associates, Inc.
Alex Warstadt, Yu Cao, Ioana Grosu, Wei Peng, Hagen Blix, Yining Nie, Anna Alsop, Shikha Bordia, Haokun Liu, Alicia Parrish, Sheng-Fu Wang, Jason Phang, Anhad Mohananey, Phu Mon Htut, Paloma Jeretic, and Samuel R. Bowman. 2019. Investigating BERT's knowledge of language: Five analysis methods with NPIs. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP),
pages 2877–2887, Hong Kong, China. Association for Computational Linguistics.
Adina Williams, Nikita Nangia, and Samuel Bowman.
2018. A broad-coverage challenge corpus for sentence understanding through inference. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1112–1122, New Orleans, Louisiana. Association for Computational Linguistics.
Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Transformers: State-of-the-art natural language processing.
In *Proceedings of the 2020 Conference on Empirical* Methods in Natural Language Processing: System Demonstrations, pages 38–45, Online. Association for Computational Linguistics.
Zhiyong Wu, Yun Chen, Ben Kao, and Qun Liu. 2020.
Perturbed masking: Parameter-free probing for analyzing and interpreting BERT. In *Proceedings of the* 58th Annual Meeting of the Association for Computational Linguistics, pages 4166–4176, Online. Association for Computational Linguistics.
## A Dependency Relations B Detailed Decomposition Analysis B.1 Creating Ablated Sequences Fully Contextualized See Section 2.2. B.2 Representations Compared
Figure 8 shows the full reconstructability boost results for all dependency arc labels in our dataset.
Static embedding (+position) We pass through the masked language model the n versions of the input sequence described above, each of which has a single token revealed, at the input layer only.
Again, for each [MASK] token in the input sequence, we take the probability of the token in the same position in the original sequence as the reconstruction probability. This value corresponds to the probability of predicting the token in the original sequence given only the static lexical information of the source token and the positional information of the source and recon tokens.
Static embedding (-position) We pass through the n single token-revealed versions of the input sequence as described above, but at the input layer, we do not add the positional embeddings. The reconstruction probability obtained, then, corresponds to the probability of predicting the token in the original sequence given only the static lexical information of the source token and no positional information of any of the tokens.
All mask (+position) We pass through a fully masked version of the input sequence that consists of the same number of [MASK] tokens and obtain the reconstruction probability of the tokens in the original sequence. Hence, in this scenario, there is no source. The value obtained through this input corresponds to the probability of predicting the token in the original sequence in the absence of any lexical information. Note that the model still has access to the positional embeddings of the recon token, which may still be weakly informative for token prediction.
All mask (-position) See Section 2.2, 'Lexical prior only baseline'.
By comparing the reconstruction probabilities described above using Eq. 1, we can gauge the effect of the additional contextual information on performing masked language modeling. For example,
![11_image_0.png](11_image_0.png)
if we compare **Fully contextualized** and **Static**
embedding (+position), we can quantify the benefit of having the contextualization that happens through applying the model weights to the static representation of the input. If we compare **Static**
embedding (+position) and **Static embedding (-**
position), we can quantify the benefit of positional embeddings (when given the same static lexical information). We make six different comparisons illustrated in Table 3, each comparison serving a different analytic role.
## B.3 Further Discussion
We furthermore hypothesized the reconstruction boost from the availability of positional embeddings to be sensitive to the presence of a syntactic relation between SOURCE and RECON. This hypothesis is borne out in BERT and RoBERTA, but not in DistilBERT, suggesting that positional embeddings in DistilBERT are qualitatively different
(Figure 9, left column).
## C License And Terms For Use
License information for scientific artifacts used in this paper is as follows: MNLI (MIT License),
BERT (Apache-2.0 License), RoBERTa (MIT License), and DistilBERT (Apache-2.0 License). Our own code follows the GPL-3.0 License. All of the publicly available artifacts are used in ways that comply with their licenses.
## D Model And Implementation Details
The models that we used in this paper are all pretrained checkpoints from HuggingFace
(Wolf et al., 2020). Specifically, they are:
bert-large-uncased (340M parameters), roberta-large (355M parameters), and
Base Augmented What Base vs. Augmented can tell us All mask (-position) All mask (+position) Reflects the effect of positional information in the absence of any lexical information
other than the most general lexical priors of the model.
All mask (-position) Static (-position) Reflects the effect of static lexical information in the absence of positional information.
All mask (+position) Static (+position) Reflects the effect of static lexical information in the presence of positional information. Static (-position) Static (+position) Reflects the effect of positional information in the presence of full lexical information. Static (+position) Fully contextualized Reflects the effect of the contextualization through the layers of the model, beyond the input layer.
All mask (-position) Fully contextualized Comparison between the least and most contextualized reconstruction scenarios. Reflects
the overall change induced by contextualization over the lexical priors of the model.
distilbert-base-uncased (66M parameters). We inherited tokenization and any applicable hyperparameter settings from the specifications of the pretrained checkpoints. Computing reconstruction probabilities took around 3 CPU days for each model.
![13_image_0.png](13_image_0.png)
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
✓ A1. Did you describe the limitations of your work?
Section 5.1 and a separate limitations section
A2. Did you discuss any potential risks of your work?
Not applicable. Primarily evaluation work on syntactic relations between token representations—no particular risk scenario envisioned.
✓ A3. Do the abstract and introduction summarize the paper's main claims?
Abstract and Section 1
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✓ **Did You Use Or Create Scientific Artifacts?**
Our own code for analysis described in Section 2, pretrained models (Section 3.1), the MNLI dataset
(Section 3.2).
✓ B1. Did you cite the creators of artifacts you used?
Analysis code is our own. Citations are provided in the relevant sections: pretrained models (Section 3.1), the MNLI dataset (Section 3.2).
✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
Appendix C
✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
Appendix C
B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
Not applicable. We did not collect our own data.
B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
Not applicable. Left blank.
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
Sections 3.2 and 3.3 The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance.
## C ✓ **Did You Run Computational Experiments?** Sections 3–5
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
Appendix D
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Experimental setup is discussed throughout Sections 3–5 and in Appendix D.
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
Sections 4–5, Appendix A–B
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
Appendix D
## D ✗ **Did You Use Human Annotators (E.G., Crowdworkers) Or Research With Human Participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
No response.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? No response.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
No response.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
No response.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
No response. |
bulla-etal-2023-towards | Towards Distribution-shift Robust Text Classification of Emotional Content | https://aclanthology.org/2023.findings-acl.524 | Supervised models based on Transformers have been shown to achieve impressive performances in many natural language processing tasks. However, besides requiring a large amount of costly manually annotated data, supervised models tend to adapt to the characteristics of the training dataset, which are usually created ad-hoc and whose data distribution often differs from the one in real applications, showing significant performance degradation in real-world scenarios. We perform an extensive assessment of the out-of-distribution performances of supervised models for classification in the emotion and hate-speech detection tasks and show that NLI-based zero-shot models often outperform them, making task-specific annotation useless when the characteristics of final-user data are not known in advance. To benefit from both supervised and zero-shot approaches, we propose to fine-tune an NLI-based model on the task-specific dataset. The resulting model often outperforms all available supervised models both in distribution and out of distribution, with only a few thousand training samples. | # Towards Distribution-Shift Robust Text Classification Of Emotional Content
Luana Bulla Institute of Science and Technology of Cognition, National Research Council [email protected] Aldo Gangemi University of Bologna [email protected]
## Misael Mongiovì
Institute of Science and Technology of Cognition, National Research Council [email protected]
## Abstract
Supervised models based on Transformers have been shown to achieve impressive performances in many natural language processing tasks. However, besides requiring a large amount of costly manually annotated data, supervised models tend to adapt to the characteristics of the training dataset, which are usually created ad-hoc and whose data distribution often differs from the one in real applications, showing significant performance degradation in real-world scenarios. We perform an extensive assessment of the out-of-distribution performances of supervised models for classification in the emotion and hate-speech detection tasks and show that NLI-based zeroshot models often outperform them, making task-specific annotation useless when the characteristics of final-user data are not known in advance. To benefit from both supervised and zero-shot approaches, we propose to finetune an NLI-based model on the task-specific dataset. The resulting model often outperforms all available supervised models both in distribution and out of distribution, with only a few thousand training samples.
## 1 Introduction
Supervised text classification based on Transformers has recently achieved considerable performances, benefiting many applications in social (Mozafari et al., 2020) technological (Callaghan et al., 2021) and biomedical (Jin and Szolovits, 2020) domains, just to mention a few. However, these systems rely on large amounts of manually annotated data that are often expensive to obtain. Furthermore, to guarantee reasonable performances, supervised systems need to be trained on data that have the same distribution as the one in the deployed scenario (Koh et al., 2021). This requires a careful choice of data to annotate that is sometimes impossible to achieve because of the difficulty to infer in advance the characteristics of runtime data, and considering the potential evolution of data features during the system's lifetime (D'Amour et al., 2020). Recent work has shown that Transformers are more robust than other machine learning models to change in domain and distribution (Hendrycks et al., 2020). However, the decrease in performance due to the distribution shift is still a major issue of supervised models (Yang et al., 2022b).
Figure 1 shows the degradation in performances of models when applied to a different distribution. We consider three emotion classification tasks
(with different taxonomies) and a hate speech detection task, and report in-distribution (ID) performances, when the model is validated on the same dataset (light blue bars), in comparison with outof-distribution (OOD) performances, i.e. validated on different datasets (dark red bars). The drop in performance is significant, often overcoming 30% and sometimes reaching almost 50%. This makes models trained on certain data barely generalizable to other data, limiting drastically their scope.
Recent zero-shot models (Yin et al., 2019; Liu et al., 2021) have gained popularity thanks to their ability to reduce the dependency on task-specific annotated data by enabling models to predict previously unseen labels. For instance, models trained for Next Sentence Prediction (NLP) or Natural Language Inference (NLI) tasks can be applied to infer 8256 whether a certain textual label is associated with a sentence (Yin et al., 2019). Although supervised task-specific trained models typically outperform zero-shot approaches in the training dataset, it is reasonable to question how they compare when the supervised approach is trained on a different dataset.
In this work, we make a comprehensive assessment of the OOD performances of emotion detection and hate speech detection models in comparison with a NLI-based zero-shot model. Surprisingly, our results show that the zero-shot approach almost always outperforms the supervised models, suggesting that labeling a large amount of data is not beneficial when the data distribution is not a-priori known. To take advantage of both approaches we propose to adapt and fine-tune a NLI model with task-specific data. We show that a small amount of training data is sufficient to achieve performances that are often superior to the top-performing supervised models available, either ID or OOD.
Our contribution can be summarized as follows:
(1) we perform a comprehensive assessment of the OOD performance of supervised models for classification (multi-class, multi-label, binary) of emotive content in comparison to an NLI-based approach that does not require specific training, and we show that the latter often achieves higher performance; (2) we propose fine-tuning an NLI model on task-specific data and show experimentally that this solution achieves competitive performances both ID and OOD with only a few thousand samples; (3) we extensively discuss our results and give useful indications for achieving significant ID and OOD performances with a small annotation cost.
## 2 Related Works
Developing models that are robust to domain and distribution shift is one of the most intriguing yet challenging tasks in various machine learning applications (Koh et al., 2021) including computer vision (Ibrahim et al., 2022; Yang et al., 2022a; Larson et al., 2022) and NLP (Csordás et al., 2021; Malinin et al., 2021; Hendrycks et al., 2020). We refer to Zhou et al. (Zhou et al., 2022) for an extensive survey on domain generalization. While some works offer a more theoretical perspective on the topic (Arora et al., 2021a; Ren et al., 2019), general work in the NLP field has been focused mainly on developing benchmarks for evaluating the outof-distribution robustness of models. Hendrycks
![1_image_0.png](1_image_0.png)
et al. (Hendrycks et al., 2020) studies OOD generalization for seven NLP datasets in the tasks of sentiment classification, semantic similarity, reading comprehension, and textual implication and show that pre-trained Transformers adapt better to OOD data. Yang et al. (Yang et al., 2022b) propose a unified benchmark called GLUE-X to evaluate OOD robustness in NLP systems. They collect 13 datasets covering tasks such as sentiment analysis, natural language inference, sentence pair similarity, textual similarity, and linguistic acceptability. For each task, they select a dataset for training and other datasets for OOD evaluation. The study shows that better OOD accuracy is needed for NLP tasks, due to the noticeable loss of performance with respect to the ID settings. Both works do not compare the performances to zero-shot approaches and do not propose specific methods for increasing OOD robustness. Furthermore, they do not consider the tasks of emotion detection and hate-speech detection.
Approaches to deal with the distribution shift problem include OOD detection (Arora et al.,
2021b) and Mixture of Experts (MoE) models (Guo et al., 2018). OOD detection aims at recognizing OOD text to give awareness of the potential degradation in performances, while MoE models tend to combine domain-specific models to improve performances in multi-domain contexts. Both these approaches are out of the scope of our work since they do not specifically focus on assessing and improving the performances of models over unseen domains and data distribution changes.
Specific studies on text classification related to ours usually focus on domain generalization by training on text in one domain and testing on a different domain within the same dataset. Although related, these approaches do not consider domainindependent differences that occur across datasets concerning e.g. text features (e.g. length), linguistics features (e.g. use of slang) and annotation processes. PADA (Ben-David et al., 2022)
generates domain-related features and adds them to the text to enable the model to adapt to different domains. Other studies refer to specific tasks such as moral value classification (Liscio et al., 2022) and sentiment analysis (Fu and Liu, 2022; Zhang et al., 2022a; Li et al., 2022; Luo et al.,
2022; Liu and Zhao, 2022). Despite not considering the model generalization across datasets, and being often application-specific, these methods do not make any assessment with zero-shot learning nor consider building upon them to improve OOD performance.
To the best of our knowledge, the only studies on distribution shift that consider the emotion and hate-speech detection tasks are the work of Toraman et al. (Toraman et al., 2022), which evaluates how BERT generalizes across abusive language detection datasets, and Zeng et al. (Zeng et al., 2022)
that propose a CNN-based broad learning model for cross-domain emotion classification. The first study on abusive language detection does not compare with zero-shot models nor proposes a method for OOD generalization. The Zeng et al. work considers a multi-domain dataset obtained by collecting data from Chinese E-commerce platforms and performs the assessment across domains. Again they do not perform a comparison with zero-shot models nor evaluation across datasets.
Another line of research related to our work concerns zero-shot and prompt-based models. Pushp et al. (Pushp and Srivastava, 2017) propose and evaluate three LSTM architectures for zero-shot classification that combine text embedding with label embedding to determine whether the label is related to the input text. Yin et al. (Yin et al., 2019) provide datasets, a standard for evaluation and state-of-theart baselines for zero-shot classification. Barker et al. (Barker et al., 2021) propose performing supervised classification on known labels, then applying NLI for cases that do not qualify for previously known labels. Zhang et al. (Zhang et al., 2022b)
propose a meta-learning framework to learn to calibrate the class and sample representations from both seen and virtual unseen classes. Other studies focus on the impact of different prompts on performances (Liu et al., 2021). In particular, we highlight the work of Plaza-del-Arco et al. (Plazadel Arco et al., 2022) which compares different ways to build the hypothesis prompt for NLI-based emotion detection. Although we adopt an NLIbased zero-shot model in our work, taking inspiration from the work of Yin et al. (Yin et al., 2019)
and Plaza-del-Arco et al. (Plaza-del Arco et al.,
2022), no other work that we are aware of makes an extensive comparison of OOD performances of supervised models with zero-shot models and fine-tuning of the latter, finding the sweet spot between no-specific training and a fully supervised approach.
To the best of our knowledge, there are no extensive state-of-the-art studies focusing on the OOD
robustness (across datasets) of supervised models in emotion classification tasks. In general, we are not aware of any study that compares OOD performances of supervised models for text classification with zero-shot models and assesses the best way to fine-tune a zero-shot model.
## 3 Materials And Methods
We describe in detail the benchmark data and models of our work. In Section 3.1 we discuss datasets.
Section 3.2 focuses on the analysis of the stateof-the-art supervised models and the NLI-based systems we employ. Table 1 summarizes the material of our study, including classification tasks, taxonomies, datasets and supervised models.
## 3.1 Datasets
We conduct our experimental study on ten datasets for multi-class, multi-label and binary classification. Specifically, we focus on datasets for emotion and hate speech detection.
For the multi-class emotion classification task, we apply five distinct benchmarks and study two different taxonomies. The first set includes the range of the Primary Emotions of Parrott theory (Parrott, 2001) (i.e. "love", "joy", "sadness",
"anger", " fear", " and "surprise") and is covered by Emotion corpus (Saravia et al., 2018) and a scaleddown version of the GoEmotion dataset (Demszky et al., 2020) (GoE-Parrott). We consider GoEmotion for its wide range of content, labels and data qualities, which make it suitable for fitting emotion taxonomies of other datasets used in our study. For this reason, we generate two additional customized versions of this benchmark. The first one (GoEEkman) is designed for multi-class detection based
| Task | Typology | Taxonomy | Datasets | Models |
|------------------------------------------------------------------|--------------------------------|------------------------------------|------------------------------------|------------------------------------|
| Parrott Emotion | Multi-class | Love, joy, sadness, | E-T5 | |
| GoE-Parrott | | | | |
| anger, fear, surprise | Emotion (Saravia et al., 2018) | E-Bert GoE-Bert | | |
| Disgust, joy, sadness, | | | | |
| Ekman Emotion | Multi-class | anger, fear, surprise and neutral | E-BERTweet (Pérez et al., 2021) | |
| E-DistilRoBERTa (Hartmann, 2022) Emo-Bert | | | | |
| GoE-Ekman | | | | |
| EmoEvent (Plaza-del Arco et al., 2020) XED ("Ohman et al., 2020) | | | | |
| Disgust, joy, sadness, | | | | |
| Multi Emotion | Multi-label | anger, fear, love, | M-GoE | M-Bert |
| M-Emotion (Mohammad et al., 2018) | M-GoE Bert | | | |
| optimism and surprise | Din-Gen (Vidgen et al., 2020) | LFTW-RoBERTa (Vidgen et al., 2020) | | |
| Binary-HS | Binary | Hate, Not Hate | YouTube (Ljubešic et al. ´ , 2021) | YT-Bert (Ljubešic et al. ´ , 2021) |
| WSF-HS (De Gibert et al., 2018) | | | | |
on Ekman's theory of emotions (Ekman, 1992)
("disgust", "joy", "sadness", "anger", "fear" and
"surprise", plus an additional "neutral" label) and adapts to the XED dataset ("Ohman et al., 2020)
and the tweet-based EmoEvent corpus (Plaza-del Arco et al., 2020). The second one (M-GoE), allows us to fit the M-Emotion corpus (Mohammad et al., 2018), a tweet-based restricted dataset for multi-label classification. By taking all emotions that overlap between GoEmotion and M-Emotion, we obtain a third taxonomy based on eight labels
(i.e. "disgust", "joy", "sadness", "anger", "fear",
"love", "optimism" and "surprise"). In the second stage, we focus on the binary hate-speech detection task. In this scenario, we employ the Dynamically Generated dataset (Vidgen et al., 2020) (Din-Gen),
the YouTube HS corpus (Ljubešic et al. ´ , 2021)
(YouTube), and the WSF-HS dataset (De Gibert et al., 2018). The former is built by an iterative annotation process, starting from a collection of previously released hate speech datasets, the second is composed of YouTube comments captured during the time of the COVID-19 pandemic, while the third focuses on a random collection of offensive forum posts. Further details on the employed datasets are given in the supplemental material.
## 3.2 Reference Models
We employ a group of supervised models designed to address multi-class, multi-label, and binary classification, in order to evaluate their OOD performances, i.e. their performances on a different dataset than the one used for training. As a comparison, we examine the results of three alternative NLI-based system configurations, seeing how unsupervised models perform in this context. The following paragraphs provide further information on the first and second groups (Sect. 3.2.1, 3.2.2).
## 3.2.1 Supervised Models
We use eight models for emotion detection, six of which are focused on a multi-class classification scenario and the other two ones on multi-label classification. To perform an ODD evaluation on all datasets available and since no trained model is suited to some of them, we trained four standard BERT classifiers on the missing datasets (i.e.
GoE-Parrott, M-GoE, and M-Emotion) obtaining checkpoints that we name GoE-Bert, M-GoE Bert, M-Bert. The classifiers employ the pre-trained BERT-base checkpoint and apply a dropout layer, a linear layer and then a softmax on the pooled output embedding of the CLS token. We also train the same BERT-based architecture on the EmoEvent dataset (Plaza-del Arco et al., 2020) (Emo-Bert)
to compare it to BERTweet, which has been pretrained on tweet data. The tune of hyperparameters was conducted on the validation set through grid search taking into consideration a range of parameters ranging from 0.1 to 0.4 for the dropout, among 10−5, 3·10−5and 5·10−5for the learning rate, and between 32 and 64 for the batch size. The number of epochs was set to 10. For each configuration, we performed a single run. For the multi-class classification task, we also employ the BERT-based E-Bert model1and the T5-based (E-T5) system2. Both of them are trained on the Emotion dataset (Saravia et al., 2018) and explore the Parrott theory perspective. From Ekman's taxonomy, we consider the RoBERTa-based E-BERTweet (Pérez et al., 2021),
and E-DistilRoBERTa (Hartmann, 2022) models, which are trained on the EmoEvent corpus (Plazadel Arco et al., 2020) and on a combination of six emotional datasets (Hartmann, 2022), respectively.
For binary hate-speech detection, we employ YT-Bert (Ljubešic et al. ´ , 2021) and LFTW-
RoBERTa (Vidgen et al., 2020). The former has been trained on the YouTube corpus (Ljubešic´
et al., 2021) while the latter refers to the Din-Gen dataset (Vidgen et al., 2020).
## 3.2.2 Nli-Based Classifiers
Inspired by the work of Yin et al. (Yin et al., 2019),
we employ pre-trained NLI models as ready-made zero-shot sequence classifiers. We create a hypothesis for each potential label and use the input text as an NLI premise. For the hypothesis construction, we use the prompt "This sentence expresses
<label>". We use "discrimination and hate" as label for hate speech. Different prompts are also employed for the prompt analysis in Section 4.4. To determine which emotion is prevalent in the input text from an NLI perspective we take the emotion that corresponds to the highest-scoring entailment output. To manage neutrality and for binary classification, we apply a 0.5 cut-off on the normalized entailment score. For multi-label classification, we take all emotions that correspond to a normalized entailment score above or attained to 0.5. Since there is no specific training phase, the approach is particularly useful when there are no high-quality task-specific annotated samples. Furthermore, the method is applicable to a variety of document types in different domains.
We consider three checkpoints as NLI models: MNLI-Bart-large3, MNLI-RoBERTa-large 4 and MNLI-DeBERTa-large5, all trained on the MultiNLI (MNLI) dataset (Williams et al., 2017).
For more details on the different configurations of the NLI models on the taxonomies and datasets examined (Sect. 3.1), we refer to Sect. 4.
## 3.2.3 Fine-Tuning Nli-Based Classifiers
We propose optimizing NLI models (we take MNLI-RoBERTa-large as reference) on taskspecific datasets to take advantage of both zero-shot and supervised methods. We replace the last linear layer of the NLI model to fit the classification taxonomy. The resulting architecture is fine-tuned on the target dataset. During fine-tuning the parameters of the last linear classification layer are learned from scratch, while the remaining parameters are tuned. The tune of hyperparameters was conducted on the validation set through grid search taking into consideration a range of parameters ranging from 0.1 to 0.4 for the dropout, among 10−5, 3·10−5and 3huggingface.co/facebook/Mnli-Bart-large 4huggingface.co/roberta-large-mnli 5huggingface.co/deberta-large-mnli 5 · 10−5for the learning rate, and between 32 and 64 for the batch size. The number of epochs was set to 10. For each configuration, we performed a single run.
## 4 Results And Evaluation
We assess the OOD performances of supervised models in comparison with NLI-based classification. We group our evidence by looking at three main classification problems: multi-class (where an item can be associated with only one label), multilabel (where an item can be associated with more than one label) and binary. We set out to investigate performances on emotion-domain-specific detection tasks as a unifying framework across all experiments. We present the details of our experimental settings and the results in Sections 4.1, 4.2 and 4.3.
All output data are available on GitHub6. We also evaluate the performances of different prompts in Section 4.4. In the last parts of our experimental analysis we evaluate the NLI-with-fine-tuning method discussed in Section 3.2.3 at varying the number of training samples and in comparison with fully supervised systems (Sect. 4.5). Unless differently specified, all performances reported refer to the F1-score. For multi-class and multi-label classification, we consider the weighted F1. All experiments have been run on a server with 2 CPU Intel Xeon Gold 6238R 2.20GHz with 640GB RAM and two GPU A100 40GB. As a rough estimation, our experiments took in total about 30 GPU days.
## 4.1 Multi-Class Classification
We examine the models' performances in multiclass emotion detection considering two different taxonomies. The former, based on the Parrott theory, considers six different emotions (joy, love, sadness, surprise, anger and fear), while the latter focuses on Ekman's theory with the addition of a seventh category for expressions devoid of emotional connotations. By expanding the taxonomic coverage, we intend to test the models' ability to discriminate between more or less semantically complex labels in uncorrelated datasets.
In the first scenario, we compare OOD performances of supervised models E-T57, E-Bert8, and GoE-Bert9, discussed in Sect. 3.2.1, with 6https://github.com/LuanaBulla/
Text-Classification-of-Emotional-Content 7huggingface.co/t5-base-finetuned-emotion 8huggingface.co/bert-base-uncased-emotion 9link hidden for blind review
| Models | GoE-Parrott | Emotion | | | |
|-----------------------------------------------------|---------------|-----------|--------|-----------|-------|
| E-T5 | 0.51 | - | | | |
| E-Bert | 0.47 | - | | | |
| GoE-Bert | - | 0.27 | | | |
| MNLI-BART-large | 0.63 | 0.51 | | | |
| MNLI-RoBERTa | 0.72 | 0.52 | | | |
| MNLI-DeBERTa | 0.74 | 0.54 | Models | M-Emotion | M-GoE |
| Multi-E Bert | - | 0.48 | | | |
| M-GoE Bert | 0.44 | - | | | |
| MNLI-BART-large | 0.45 | 0.53 | | | |
| MNLI-RoBERTa | 0.46 | 0.58 | | | |
| MNLI-DeBERTa | 0.49 | 0.63 | | | |
| Table 4: F1-score for each supervised and NLI-based | | | | | |
| Models | GoE-Ekman | EmoEvent | XED |
|-----------------|-------------|------------|-------|
| E-BERTweet | 0.68 | - | 0.47 |
| Emo-Bert | 0.65 | - | 0.42 |
| E-DistilRoBERTa | - | 0.18 | 0.47 |
| MNLI-BART-large | 0.64 | 0.44 | 0.39 |
| MNLI-RoBERTa | 0.74 | 0.49 | 0.42 |
| MNLI-DeBERTa | 0.66 | 0.53 | 0.45 |
the NLI-based systems discussed in Sect. 3.2.2
(i.e. MNLI-Bart-large, MNLI-RoBERTa-large and MNLI-DeBERTa-large) on GoE-Parrot and Emotion datasets discussed in Sect. 3.1. The results (Table 2) show that MNLI-DeBERTa performs better in both cases, with an F1-score of 0.74 on GoEParrott and 0.54 on Emotions. Moreover, NLIbased systems always outperform supervised systems by a wide margin.
In the second scenario, we consider the EmoEvent corpus, the GoE-Ekman dataset and the XED dataset (discussed in Sect. 3.1) as benchmarks. The supervised models are E-BERTweet, E-DistilRoBERTa, Emo-Bert (all discussed in 3.2).
Table 3, shows that the top-performing system is NLI-based on two over three cases. On EmoEvent every NLI-based system outperforms the supervised model by a wide margin. On XED, results are comparable (0.45 for NLI-based vs. 0.47 for supervised models). In this dataset, all models show suboptimal performances, which might indicate a lower quality of the data.
## 4.2 Multi-Label Classification
To evaluate the performance of models in a multilabel emotion scenario, we adopt a seven-base taxonomy - joy, disgust, love, optimism, sadness, surprise, anger and fear - and test on the M-Emotion and M-GoE datasets (Sect. 3.1). We train M-Bert and M-GoE Bert (Sect. 3.2) on the above corpora to compare supervised vs. NLI-based models. As shown in Table 4, MNLI-DeBERTa achieves the best performances in both M-Emotion and M-GoE
datasets, with an F1-score of 49% and 63%, respectively. Again all NLI-based models always outperform supervised models.
## 4.3 Binary Classification
In order to assess how models react to the datashift problem in a binary classification context, we test their ability to detect hate speech from datasets that are not included in their training phase. Results are reported in Table 5. We use as a benchmark the datasets Din-Gen, YouTubeand WSF-HS,
discussed in Sect. 3.1. The supervised models were trained on Din-Gen and YouTube, respectively. We compare the performances of the supervised models (LFTW-RoBERTa and YT-Bert)
with NLI-based architectures (i.e. MNLI-Bartlarge, MNLI-RoBERTa, MNLI-DeBERTa), presenting the outcome again in terms of weighted F1-score for each model. As shown in Table 5, on Din-Gen and YouTube all NLI-based classifiers outperform supervised models, with the top performances achieved by MNLI-DeBERTa and MNLI-RoBERTa, respectively, with an F1-score of 0.72 and 0.62. On the WSF-HS dataset, LFTWRoBERTa achieves the best performances with a 67% F1-score. The good performances of LFTWRoBERTa suggest that WSF-HS data have characteristics in common with Din-Gen (training dataset for LFTW-RoBERTa). This explanation is supported by results reported in Sect. 4.5, where the NLI model fine-tuned on Din-Gen is shown to sig-
| Models | Din-Gen | YouTube | WSF-HS |
|-----------------|-----------|-----------|----------|
| LFTW-RoBERTa | - | 0.35 | 0.67 |
| YT-Bert | 0.62 | - | 0.45 |
| MNLI-BART-large | 0.70 | 0.59 | 0.39 |
| MNLI-RoBERTa | 0.68 | 0.62 | 0.38 |
| MNLI-DeBERTa | 0.72 | 0.54 | 0.40 |
nificantly outperform LFTW-RoBERTa. Moreover, in this dataset, NLI-based methods perform worse than in other datasets, probably due to a more prominent imbalance among hate and non-hate content (only 12% of hate content), with respect to the other datasets.
## 4.4 Prompt Analysis
To assess the variability of NLI-based classifiers with different prompts, we performed a comprehensive prompt configuration analysis on all NLIbased systems taken into consideration in our study.
We consider the prompt as a hypothesis for NLI
that is given to evaluate its degree of entailment by the text item, considered as a premise. We consider three different settings, each linked to a separate prompt. In the first case, we emphasize a factual point of view that explicitly uses the content of the sentence as the subject (Prompt 1: "This sentence expresses <label>" - we use "discrimination and hate" as a label for hate speech). In the second instance, we use the label provided by the taxonomy as is (Prompt 2: "<label>"). As a third option, we assume a more individualized track, which speaks directly to emotionality and subjectivity (Prompt 3: "I feel <label>" for emotion, "This is hateful content" for hate speech). In most cases, the performances of different prompts are similar (within 5%, with a few exceptions). Prompt 1 usually outperforms the other prompts. This is expected since Prompt 1 puts the focus on the content, while Prompt 2 is not well semantically connected with the text and Prompt 3 puts the focus outside of the content (the subject is "I"). Detailed results in terms of F1-score are given in the supplemental material.
## 4.5 Fine-Tuned Nli Analysis
Following the methodology detailed in Section 3.2.3, we fine-tune checkpoint MNLI-RoBertalarge on eight different dataset configurations to solve multi-class, multi-label and binary classification. Our analysis (details and tables in the supplemental material) shows that the NLI-fine-tuned system always outperforms supervised models on the same dataset (ID) when the whole dataset is employed for training, with only two exceptions, probably due to the presence of insufficient data
(i.e. on M-Emotion) and the comparison to a model specifically pre-trained on the same kind of data
(i.e. BERTweet, pre-trained on tweets). However, this adaptability has a disadvantage in terms of ODD results, which do not always show performances achieved by zero-shot architectures.
To find a good trade-off between ID and OOD
performances, we scale down the training set and study how the model behaves as the training sample size increases. We start with a random sample of 100 items and expand it exponentially by doubling its size at each step until we reach the full size10. Figure 2 reports the results on the multiclass setting with the Parrott taxonomy. The model is trained on GoE-Parrot. The figure shows the trend of NLI fine-tuned at varying the number of training samples both ID (on GoE-Parrot itself)
and OOD (on Emotion). We also report the ID
and OOD performances of the native supervised model (GoE-Bert) and of NLI without fine-tuning
(dashed lines). For small training samples, both ID
and OOD performances degrade w.r.t. NLI without fine-tuning, since the last classification layer has not had enough data to adapt. When the training data increase, both ID and OOD performances rapidly rise. While ID performances always increase, OOD performances reach a plateau and then start to decrease. This suggests that the model is over-adapting to the specific dataset and hence became less generalizable to other datasets.
Not all cases show the same behavior: sometimes both ID and OOD performances always increase (for small datasets), and sometimes both reach a plateau. Table 6 gives a general performance overview of all datasets with a training size of 3200 items. The fine-tuned NLI-based classifier
(MNLI-RoB-FineTuned) outperforms all native supervised systems (we report the top performer) in an OOD setting in 7 over 10 cases, with comparable results in the remaining cases, while achieving top ID performances on four over eight datasets. We note that in two over three cases of slightly worse OOD performance of MNLI-RoB-FineTuned, NLI
10we provide specific configuration details for each classification task in the supplement
| Models | GoE Parrott | Emotion | GoE Ekman | EmoEvent | XED | M-Emotion | M-GoE | Din-Gen | YouTube | WSF HS |
|--------------------------|---------------|-----------|-------------|------------|-------|-------------|---------|-----------|-----------|----------|
| Top-Supervised (ID) | 0.80 | 0.94 | 0.66 | 0.76 | - | 0.72 | 0.82 | 0.83 | 0.71 | - |
| MNLI-RoB-FineTuned (ID) | 0.90 | 0.91 | 0.85 | 0.60 | - | 0.49 | 0.82 | 0.79 | 0.82 | - |
| Top-Supervised (ODD) | 0.51 | 0.27 | 0.68 | 0.18 | 0.47 | 0.44 | 0.48 | 0.62 | 0.35 | 0.67 |
| MNLI-RoB-FineTuned (ODD) | 0.57 | 0.56 | 0.66 | 0.48 | 0.46 | 0.45 | 0.61 | 0.60 | 0.65 | 0.84 |
| MNLI-RoBERTa | 0.72 | 0.52 | 0.74 | 0.49 | 0.42 | 0.46 | 0.58 | 0.68 | 0.62 | 0.38 |
![7_image_0.png](7_image_0.png)
without fine-tuning (MNLI-RoBERTa) achieves the best performance. In general, the zero-shot MNLI-RoBERTa model outperforms the refined NLI approach in the OOD scenario on five over ten datasets. However, in an ID setting, the former does not reach the performance of the latter.
The fine-tuned approach with limited training data represents a good trade-off between ID and OOD
performances. The complete plots for all datasets are available in the supplemental material.
## 4.6 Discussion
Our investigation compares supervised and NLIzero-shot models with a focus on different typologies of emotion detection and hate speech recognition both in and out-of-distribution. This provides a comprehensive explanation of the limitations and advantages of both methodologies for the two scenarios. According to our results, the supervised models show good ID performance at the price of a significant drop in OOD performance. In contrast, unsupervised zero-shot systems excel in OOD settings but do not outperform supervised models in ID contexts. A reasonable compromise between the two methodologies is the NLI-fine-tuned method, which improves ODD results compared to supervised systems and achieves good performance compared to the zero-shot approach in an ID setting.
In a situation where limited training data are available, the fine-tuned NLI system has the advantage of achieving a good trade-off between ID
and OOD performance, with less training data than supervised models. Using a zero-shot NLI-based system is preferable in situations where the final data distribution is unknown. Furthermore, it requires less implementation time and no training dataset.
Our experimental analysis is not without limitations. We focused on emotive content (emotion classification and hate speech detection), therefore our results might not be extendable to other domains. Emotions have a certain degree of subjectivity that can affect the annotation process by making data annotator-dependent. In other fields, this might not be the case. Moreover, our analysis is limited to ten datasets that we believe are representative of the work in this field. However, many other datasets are available, especially in the hate speech domain, and a wider evaluation might lead to a more definitive conclusion. Another limitation of our work is that we only considered NLI-based approaches as zero-shot models. Other zero-shot approaches might perform better, as pointed out by Ma et al. (Ma et al., 2021). In our experimental analysis, NLI-based approaches perform better than as reported in the Ma et al. work, and the difference might depend on implementation details. In any case, supposed better performances of other zero-shot approaches can only strengthen our conclusion, i.e. that it is possible to improve OOD
performances by limiting or completely avoiding task-specific training, which often requires a considerable annotation cost.
## 5 Conclusion
We made an extensive experimental analysis of the OOD performances of supervised Transformerbased classifiers trained on task-specific data with emotive content, in comparison with zero-shot approaches based on NLI that do not require specific training. Our results show that, although nospecific-training approaches are not able to perform as well as supervised models in the same dataset, they often achieve the best performance w.r.t. supervised models evaluated on a different dataset. We found that a mixed approach consisting in fine-tuning NLI-based classifiers with limited data reaches a good trade-off between ID and OOD
performances.
## Acknowledgement
We acknowledge financial support from the H2020 projects TAILOR: Foundations of Trustworthy AI
- Integrating Reasoning, Learning and Optimization - EC Grant Agreement number 952215 - and SPICE: Social Cohesion, Participation and Inclusion through Cultural Engagement - EC Grant Agreement number 870811, as well as from the Italian PNRR MUR project PE0000013-FAIR.
## References
Udit Arora, William Huang, and He He. 2021a. Types of out-of-distribution texts and how to detect them.
In *Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing*, pages 10687–10701, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Udit Arora, William Huang, and He He. 2021b. Types of out-of-distribution texts and how to detect them.
arXiv preprint arXiv:2109.06827.
Ken Barker, Parul Awasthy, Jian Ni, and Radu Florian.
2021. Ibm mnlp ie at case 2021 task 2: Nli reranking for zero-shot text classification. In Proceedings of the 4th Workshop on Challenges and Applications of Automated Extraction of Socio-political Events from Text (CASE 2021), pages 193–202.
Eyal Ben-David, Nadav Oved, and Roi Reichart. 2022.
Pada: Example-based prompt learning for on-the-fly adaptation to unseen domains. *Transactions of the* Association for Computational Linguistics, 10:414–
433.
Max Callaghan, Carl-Friedrich Schleussner, Shruti Nath, Quentin Lejeune, Thomas R Knutson, Markus Reichstein, Gerrit Hansen, Emily Theokritoff, Marina Andrijevic, Robert J Brecha, et al. 2021. Machinelearning-based evidence and attribution mapping of 100,000 climate impact studies. Nature climate change, 11(11):966–972.
Róbert Csordás, Kazuki Irie, and Jürgen Schmidhuber. 2021. The devil is in the detail: Simple tricks improve systematic generalization of transformers.
arXiv preprint arXiv:2108.12284.
Ona De Gibert, Naiara Perez, Aitor García-Pablos, and Montse Cuadros. 2018. Hate speech dataset from a white supremacy forum. *arXiv preprint* arXiv:1809.04444.
Dorottya Demszky, Dana Movshovitz-Attias, Jeongwoo Ko, Alan Cowen, Gaurav Nemade, and Sujith Ravi.
2020. Goemotions: A dataset of fine-grained emotions. *arXiv preprint arXiv:2005.00547*.
Alexander D'Amour, Katherine Heller, Dan Moldovan, Ben Adlam, Babak Alipanahi, Alex Beutel, Christina Chen, Jonathan Deaton, Jacob Eisenstein, Matthew D
Hoffman, et al. 2020. Underspecification presents challenges for credibility in modern machine learning.
Journal of Machine Learning Research.
Paul Ekman. 1992. *Are there basic emotions?* American Psychological Association.
Yanping Fu and Yun Liu. 2022. Contrastive transformer based domain adaptation for multi-source cross-domain sentiment classification. *KnowledgeBased Systems*, 245:108649.
Jiang Guo, Darsh Shah, and Regina Barzilay. 2018.
Multi-source domain adaptation with mixture of experts. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4694–4703.
Jochen Hartmann. 2022. Emotion english distilrobertabase. https://huggingface.co/j-hartmann/emotionenglish-distilroberta-base/.
Dan Hendrycks, Xiaoyuan Liu, Eric Wallace, Adam Dziedzic, Rishabh Krishnan, and Dawn Song. 2020.
Pretrained transformers improve out-of-distribution robustness. *arXiv preprint arXiv:2004.06100*.
Mark Ibrahim, Quentin Garrido, Ari Morcos, and Diane Bouchacourt. 2022. The robustness limits of sota vision models to natural variation. *arXiv preprint* arXiv:2210.13604.
Di Jin and Peter Szolovits. 2020. Advancing pico element detection in biomedical text via deep neural networks. *Bioinformatics (Oxford, England)*,
36(12):3856–3862.
Pang Wei Koh, Shiori Sagawa, Henrik Marklund, Sang Michael Xie, Marvin Zhang, Akshay Balsubramani, Weihua Hu, Michihiro Yasunaga,
Richard Lanas Phillips, Irena Gao, et al. 2021. Wilds:
A benchmark of in-the-wild distribution shifts. In *International Conference on Machine Learning*, pages 5637–5664. PMLR.
Stefan Larson, Gordon Lim, Yutong Ai, David Kuang, and Kevin Leach. 2022. Evaluating out-ofdistribution performance on document image classifiers. In Thirty-sixth Conference on Neural Information Processing Systems Datasets and Benchmarks Track.
Tian Li, Xiang Chen, Zhen Dong, Weijiang Yu, Yijun Yan, Kurt Keutzer, and Shanghang Zhang. 2022.
Domain-adaptive text classification with structured knowledge from unlabeled data. *arXiv preprint* arXiv:2206.09591.
Enrico Liscio, Alin Dondera, Andrei Geadau, Catholijn Jonker, and Pradeep Murukannaiah. 2022. Crossdomain classification of moral values. In *Findings* of the Association for Computational Linguistics:
NAACL 2022, pages 2727–2745.
Ning Liu and Jianhua Zhao. 2022. A bert-based aspectlevel sentiment analysis algorithm for cross-domain text. *Computational Intelligence and Neuroscience*,
2022.
Pengfei Liu, Weizhe Yuan, Jinlan Fu, Zhengbao Jiang, Hiroaki Hayashi, and Graham Neubig. 2021. Pretrain, prompt, and predict: A systematic survey of prompting methods in natural language processing.
arXiv preprint arXiv:2107.13586.
Nikola Ljubešic, Igor Mozeti ´ c, Matteo Cinelli, and ˇ
Petra Kralj Novak. 2021. English YouTube hate speech corpus. Slovenian language resource repository CLARIN.SI.
Yun Luo, Fang Guo, Zihan Liu, and Yue Zhang. 2022.
Mere contrastive learning for cross-domain sentiment analysis. *arXiv preprint arXiv:2208.08678*.
Tingting Ma, Jin-Ge Yao, Chin-Yew Lin, and Tiejun Zhao. 2021. Issues with entailment-based zero-shot text classification. In *Proceedings of the 59th Annual* Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 786–796.
Andrey Malinin, Neil Band, German Chesnokov, Yarin Gal, Mark JF Gales, Alexey Noskov, Andrey Ploskonosov, Liudmila Prokhorenkova, Ivan Provilkov, Vatsal Raina, et al. 2021. Shifts: A dataset of real distributional shift across multiple large-scale tasks. *arXiv preprint arXiv:2107.07455*.
Saif M. Mohammad, Felipe Bravo-Marquez, Mohammad Salameh, and Svetlana Kiritchenko. 2018.
Semeval-2018 Task 1: Affect in tweets. In *Proceedings of International Workshop on Semantic Evaluation (SemEval-2018)*, New Orleans, LA, USA.
Marzieh Mozafari, Reza Farahbakhsh, and Noël Crespi.
2020. Hate speech detection and racial bias mitigation in social media based on bert model. *PloS one*,
15(8):e0237861.
Emily "Ohman, Marc P'amies, Kaisla Kajava, and J"org Tiedemann. 2020. Xed: A multilingual dataset for sentiment analysis and emotion detection. In The 28th International Conference on Computational Linguistics (COLING 2020).
W Gerrod Parrott. 2001. *Emotions in social psychology:*
Essential readings. psychology press.
Flor Miriam Plaza-del Arco, María-Teresa MartínValdivia, and Roman Klinger. 2022. Natural language inference prompts for zero-shot emotion classification in text across corpora. In Proceedings of the 29th International Conference on Computational Linguistics, pages 6805–6817.
Flor Miriam Plaza-del Arco, Carlo Strapparava, L Alfonso Urena Lopez, and M Teresa Martín-Valdivia.
2020. Emoevent: A multilingual emotion corpus based on different events. In *Proceedings of the* 12th Language Resources and Evaluation Conference, pages 1492–1498.
Pushpankar Kumar Pushp and Muktabh Mayank Srivastava. 2017. Train once, test anywhere: Zeroshot learning for text classification. arXiv preprint arXiv:1712.05972.
Juan Manuel Pérez, Juan Carlos Giudici, and Franco Luque. 2021. pysentimiento: A python toolkit for sentiment analysis and socialnlp tasks.
Jie Ren, Peter J. Liu, Emily Fertig, Jasper Snoek, Ryan Poplin, Mark Depristo, Joshua Dillon, and Balaji Lakshminarayanan. 2019. Likelihood ratios for outof-distribution detection. In *Advances in Neural Information Processing Systems*, volume 32. Curran Associates, Inc.
Elvis Saravia, Hsien-Chi Toby Liu, Yen-Hao Huang, Junlin Wu, and Yi-Shin Chen. 2018. CARER: Contextualized affect representations for emotion recognition. In *Proceedings of the 2018 Conference on* Empirical Methods in Natural Language Processing, pages 3687–3697, Brussels, Belgium. Association for Computational Linguistics.
Cagri Toraman, Furkan ¸Sahinuç, and Eyup Halit Yılmaz.
2022. Large-scale hate speech detection with crossdomain transfer. *arXiv preprint arXiv:2203.01111*.
Bertie Vidgen, Tristan Thrush, Zeerak Waseem, and Douwe Kiela. 2020. Learning from the worst: Dynamically generated datasets to improve online hate detection. *arXiv preprint arXiv:2012.15761*.
Adina Williams, Nikita Nangia, and Samuel R Bowman. 2017. A broad-coverage challenge corpus for sentence understanding through inference. arXiv preprint arXiv:1704.05426.
Jingkang Yang, Pengyun Wang, Dejian Zou, Zitang Zhou, Kunyuan Ding, Wenxuan Peng, Haoqi Wang, Guangyao Chen, Bo Li, Yiyou Sun, et al. 2022a. Openood: Benchmarking generalized out-of-distribution detection. *arXiv preprint* arXiv:2210.07242.
Linyi Yang, Shuibai Zhang, Libo Qin, Yafu Li, Yidong Wang, Hanmeng Liu, Jindong Wang, Xing Xie, and Yue Zhang. 2022b. Glue-x: Evaluating natural language understanding models from an outof-distribution generalization perspective. *arXiv* preprint arXiv:2211.08073.
Wenpeng Yin, Jamaal Hay, and Dan Roth. 2019. Benchmarking zero-shot text classification: Datasets, evaluation and entailment approach. arXiv preprint arXiv:1909.00161.
Rong Zeng, Hongzhan Liu, Sancheng Peng, Lihong Cao, Aimin Yang, Chengqing Zong, and Guodong Zhou. 2022. Cnn-based broad learning for crossdomain emotion classification. Tsinghua Science and Technology, 28(2):360–369.
Kai Zhang, Qi Liu, Zhenya Huang, Mingyue Cheng, Kun Zhang, Mengdi Zhang, Wei Wu, and Enhong Chen. 2022a. Graph adaptive semantic transfer for cross-domain sentiment classification. arXiv preprint arXiv:2205.08772.
Yiwen Zhang, Caixia Yuan, Xiaojie Wang, Ziwei Bai, and Yongbin Liu. 2022b. Learn to adapt for generalized zero-shot text classification. In *Proceedings* of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers),
pages 517–527.
Kaiyang Zhou, Ziwei Liu, Yu Qiao, Tao Xiang, and Chen Change Loy. 2022. Domain generalization: A
survey. IEEE Transactions on Pattern Analysis and Machine Intelligence.
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
✓ A1. Did you describe the limitations of your work?
In Sect. 4.6 Discussion we discuss the limitation of our work
A2. Did you discuss any potential risks of your work?
Not applicable. Our work mostly a comparison and improvement of text classification methods. It does not present potential risks
✓ A3. Do the abstract and introduction summarize the paper's main claims?
Abstract and Sect.1 summarize the paper's main claims. At the end of the introduction we describe the contribution of our work
✗ A4. Have you used AI writing assistants when working on this paper?
no, excluding the spell-checker integrated on overleaf
## B ✗ **Did You Use Or Create Scientific Artifacts?**
Left blank.
B1. Did you cite the creators of artifacts you used?
No response.
B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
No response.
B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
No response.
B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
No response.
B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
No response.
B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
No response.
## C ✓ **Did You Run Computational Experiments?** Section 4 Presents The Experimental Results
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
We used previously proposed models and we refer to them for the parameters. We report in sect. 4 comutational budget and computing infrastructure The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance.
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
We extensively discuss the experimental setup in sect. 3 and 4
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
We specify in sect. 3 that all results refer to a single run
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
In Sect. 3.2 we give all details of our implementation including comprehensive references to all employed models, datasets and software
## D ✗ **Did You Use Human Annotators (E.G., Crowdworkers) Or Research With Human Participants?** Left Blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
No response.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)?
No response.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
No response.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
No response.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
No response. |
kabra-etal-2023-multi | Multi-lingual and Multi-cultural Figurative Language Understanding | https://aclanthology.org/2023.findings-acl.525 | Figurative language permeates human communication, but at the same time is relatively understudied in NLP. Datasets have been created in English to accelerate progress towards measuring and improving figurative language processing in language models (LMs). However, the use of figurative language is an expression of our cultural and societal experiences, making it difficult for these phrases to be universally applicable. In this work, we create a figurative language inference dataset, {pasted macro {`}DATASETNAME{'}}, for seven diverse languages associated with a variety of cultures: Hindi, Indonesian, Javanese, Kannada, Sundanese, Swahili and Yoruba. Our dataset reveals that each language relies on cultural and regional concepts for figurative expressions, with the highest overlap between languages originating from the same region. We assess multilingual LMs{'} abilities to interpret figurative language in zero-shot and few-shot settings. All languages exhibit a significant deficiency compared to English, with variations in performance reflecting the availability of pre-training and fine-tuning data, emphasizing the need for LMs to be exposed to a broader range of linguistic and cultural variation during training. Data and code is released at \url{https://anonymous.4open.science/r/Multilingual-Fig-QA-7B03/} | # Multi-Lingual And Multi-Cultural Figurative Language Understanding
Anubha Kabra1∗, Emmy Liu1∗, Simran Khanuja1∗, Alham Fikri Aji2, Genta Indra Winata3, Samuel Cahyawijaya4, Anuoluwapo Aremu5, Perez Ogayo1**, Graham Neubig**1 1Carnegie Mellon University 2MBZUAI 3Bloomberg 4HKUST 5Masakhane
## Abstract
Figurative language permeates human communication, but at the same time is relatively understudied in NLP. Datasets have been created in English to accelerate progress towards measuring and improving figurative language processing in language models (LMs). However, the use of figurative language is an expression of our cultural and societal experiences, making it difficult for these phrases to be universally applicable. In this work, we create a figurative language inference dataset, MABL, for seven diverse languages associated with a variety of cultures: Hindi, Indonesian, Javanese, Kannada, Sundanese, Swahili and Yoruba. Our dataset reveals that each language relies on cultural and regional concepts for figurative expressions, with the highest overlap between languages originating from the same region. We assess multilingual LMs' abilities to interpret figurative language in zeroshot and few-shot settings. All languages exhibit a significant deficiency compared to English, with variations in performance reflecting the availability of pre-training and fine-tuning data, emphasizing the need for LMs to be exposed to a broader range of linguistic and cultural variation during training. 1
## 1 Introduction
When you are feeling happy, do you think that you are "warm" or "cold"? If you are a monolingual English speaker, you will likely answer "warm", and use expressions like "this really warmed my heart". However, if you are a native Hindi speaker, you may answer "cold", and use expressions like ɞदल को ठंडक पढ़ना ("coldness spreads in one's heart" ) (Sharma, 2017). Linguistic communication often involves figurative (i.e., non-literal) language (Shutova, 2011; Fussell and Moss, 2008;
| Figurative Expression | Inference Omah iku apik banget. | |
|----------------------------------|---------------------------------------------------------------------|----------------------|
| Omah iku kaya istana | (The house is very nice.) | |
| yo (The house is like a palace.) | Omah iku elek banget. (The house is very ugly.) Rambutnya keriting. | |
| id | Rambutnya seperti bihun. | (Her hair is curly.) |
| (Her hair is like vermicelli.) | Rambutnya lurus. (Her hair is straight.) जीवन अǵा है। | |
| hi | जीवन मीठा गुलकन्द है। | (Life is good.) |
| (Life is sweet Gulkand. ) | जीवन बुरा है। (Life is bad.) ಅದು ಗರಿಗರಿಯಾಗಿದೆ | |
| kn ಅದು ದೋಸೆಯಂತೆ ಗರಿಗರಿಯಾಗಿತ್ತು. | (It is crisp.) | |
| (It was crispy like a dosa.) | ಅದು ಗರಿಗರಿಯಾಗಿರಲಿಲ್ಲ (It was not crisp.) Maneno yake yanaponya. | |
| sw Maneno yake ni sumu. | (His words heal.) | |
| (His words are like poison.) | Maneno yake yanaangamiza. (His words are devastating.) | |
Table 1: Examples of figurative expressions and respective inferences from the collected data. Correct answers are highlighted in green.
Lakoff and Johnson, 1981), which is laden with implicit cultural references and judgements that vary cross-culturally. Differences in figurative expressions used in different languages may be due to cultural values, history, or any number of other factors that vary across where the languages are spoken.2 Understanding figurative language therefore relies on understanding what concepts or objects are considered culturally significant, as well as their sentiment in that culture.
Better understanding of figurative language would benefit tasks such as hate speech detection or sentiment classification (ElSherief et al.,
2021; van Aken et al., 2018). However, state-ofthe-art language models have been shown to frequently misinterpret both novel figurative expressions and conventionalized idioms, indicating the need for improved methods (Dankers et al., 2022; 2The Hindi example is most likely attributable to climatic conditions, as cold may be seen as comparatively more positive in an area where extreme heat is more common (Sharma, 2017)
Liu et al., 2022). Most empirical results probing language models' abilities with respect to figurative language have been based on data in English, meaning there is a comparative lack of resources and study in other languages (Chakrabarty et al.,
2022; Liu et al., 2022; Pedinotti et al., 2021a).
We find English figurative language datasets may not have cultural relevance for other languages (§2). This is a general challenge in NLP,
as assumptions of common knowledge and important topics to talk about vary from culture to culture (Hershcovich et al., 2022). In order to better train multilingual models to interpret figurative language, as well as to understand linguistic variation in figurative expressions, we construct a multilingual dataset, MABL (Metaphors Across Borders and Languages), of 6,366 figurative language expressions in seven languages (§3). Examples are shown in Table 1.
We use the dataset to conduct a systematic analysis of figurative language patterns across languages and how well they are captured by current multilingual models (§4). We find that figurative language is often very culturally-specific, and makes reference to important entities within a culture, such as food, mythology, famous people, or plants and animals native to specific regions.
We benchmark multilingual model performance
(§5) and analyze model failures (§6), finding that zero-shot performance of multilingual models is relatively poor, especially for lower-resource languages. According to (Liu et al., 2021), main factors which poses challenges on the performance in such cases are cross-lingual transfer and concept shift across languages. However, we observe that concept shift seems to play a larger role due to culturally specific examples. Adding a few examples in the target language can improve performance of larger models, but this is more beneficial for lower-resource languages. This highlights the importance of including culturally relevant training data, particularly data that highlights not just the existence of a concept, but also how people view that concept within that culture.
## 2 Linguistic And Cultural Biases Of Existing Figurative Language Datasets
To confirm the importance of building a multilingual, multi-cultural figurative language dataset, we first performed a pilot study to examine the feasibility of instead translating an existing figurative language dataset, Fig-QA (Liu et al., 2022), from
| Lang. | fr | hi | ja |
|-----------------------|------|------|------|
| Incorrect | 13% | 40% | 21% |
| Culturally irrelevant | 17% | 20% | 17% |
English into other languages. While there are wellknown problems with using translation to create multilingual datasets for tasks such as QA (Clark et al., 2020), it is still worth examining these issues in the context of figurative language in particular. We used the Google Translate Python API to translate the development set into languages that the authors of this paper understood.3 These were French, Japanese, and Hindi. Each annotator annotated 100 examples for both correctness (whether or not the translation was accurate), and cultural relevance (whether or not the expression was one that would make sense to a native speaker from the culture where the language is predominant).
As seen in Table 2, the number of incorrect examples is large, particularly for Hindi and Japanese. This is mainly due to expressions that don't translate directly (such as a "sharp" conversation in English). Culturally irrelevant examples are due to implicitly assumed knowledge. For instance, a crowdworker from the US generated the example "it's as classic as pancakes for breakfast" with the meaning "it's very classic". However, most people from Japan would not see pancakes as a traditional breakfast, and the meaning "it's not classic" would be more appropriate.
The shift in topics discussed in cultures associated with different languages can be captured by native speakers familiar with that culture, motivating our collection of natural figurative language examples from native speakers.
## 3 The Mabl Dataset 3.1 Language Selection
We choose the following seven languages: Hindi
(hi), Yoruba (yo), Kannada (kn), Sundanese (su),
Swahili (sw), Indonesian (id), and Javanese (jv).
The factors we considered while choosing these languages are as follows :
i) We aimed to include a range of languages representing the different classes in the resourcebased taxonomy of languages, proposed by Joshi et al. (2020), subject to annotator availability.
3https://pypi.org/project/googletrans/
| Language | #Samples |
|------------|------------|
| id | 1140 |
| sw | 1090 |
| su | 600 |
| jv | 600 |
| hi | 1000 |
| kn | 1190 |
| yo | 730 |
ii) We chose languages with a sizeable speaker population as shown in Table 5.
iii) Our languages come from 5 typologically diverse language families spoken in 4 different countries, which allows us to include a wide range of linguistic and cultural diversity in our data.
Details about the characteristics of each language in terms of available training data and number of speakers can be found in Table 5. Additional information on linguistic properties of these languages can be found in Appendix A.
## 3.2 Dataset Collection
To create culturally relevant examples, we crowdsourced sample collection to two or more native speakers in the seven languages. The workers were asked to generate paired metaphors that began with the same words, but had different meanings, as well as the literal interpretations of both phrases.
Workers were not discouraged from generating novel metaphors, but with the caveat that any examples should be easily understood by native speakers of that language, e.g., "it's as classic as pancakes for breakfast" would not be valid if pancakes are not a breakfast food in the country in which that language is spoken.
Instructions given to annotators can be found in Appendix B. After collection, each sample was validated by a separate set of workers who were fluent in that language. Any examples that were incoherent, offensive, or did not follow the format were rejected. The number of samples collected per language can be seen in Table 3. Examples of collected data can be seen in Table 1. We note that because of the limited number of samples in each language, we view the samples collected as a test set for each language, meaning there is no explicit training set included with this release.
## 4 Dataset Analysis 4.1 Concepts Expressed
In the structure mapping theory of metaphor, figurative language involves a **source** and **target** concept, and a comparison is made linking some features of the two (Gentner, 1983). Following Liu et al. (2022), we refer to the source as the "subject" and target as "object" .4 We expect objects referenced to be quite differently cross-culturally. We confirm this by translating sentences from our dataset into English, then parsing to find objects. The number of unique concepts per language, including examples, is listed in Appendix C. This may overestimate the number of unique concepts, as some concepts may be closely related (e.g., "seasonal rain" vs. "rainy season").
Despite this, we are able to identify many culturally specific concepts in these sentences, such as specific foods (hi: samosa, hi: sweet gulkand, id: durian, id: rambutan), religious figures (kn: buddha's smile, sw: king soloman), or references to popular culture (id: shinchan, yo: aníkúlápó movie, en: washington post reporter).
We observe that, excluding pronouns, only 6 objects are present in all languages. These are {"sky",
"ant", "ocean", "fire", "sun", "day"}. Of course, variations of all these concepts and other generic concepts may exist, since we only deduplicated objects up to lemmatization, but this small set may indicate that languages tend to vary widely in figurative expressions. Appendix D indicates the Jaccard similarity between objects in each language, which is an intuitive measure of set similarity. The equation is also given below for sets of objects from language A () and langugage B ().
$$J(L_{A},L_{B})=\frac{|L_{A}\cap L_{B}|}{|L_{A}\cup L_{B}|}$$
$\mathbf{M}$
|(1)
The most similar language based on concepts present is highlighted in Table 4. Languages from the same region tend to group together. The set of concepts in English is actually most similar to Swahili.5 Upon inspection, there were many general terms related to nature, as well as many references to Christianity in the Swahili data, which may explain the similarity to English.6 4This terminology may be confusable with subject and object in linguistics, but was used because the source and target tend to appear in these linguistic positions in a sentence.
5There are no particularly closely related languages to English in our dataset 6Authors of this paper examined unique concepts expressed in English, Swahili, and Kannada. Swahili sentences had
| Lang. | Speakers | Training Data (in GB) | Class | |
|---------|------------|-------------------------|---------|----|
| (M) | XLM-R | mBERT | | |
| en | 400 | 300.8 | 15.7 | 5 |
| hi | 322 | 20.2 | 0.14 | 4 |
| id | 198 | 148.3 | 0.52 | 3 |
| jv | 84 | 0.2 | 0.04 | 1 |
| kn | 44 | 3.3 | 0.07 | 1 |
| su | 34 | 0.1 | 0.02 | 1 |
| sw | 20 | 1.6 | 0.03 | 2 |
| yo | 50 | - | 0.012 | 2 |
## 4.2 Commonsense Categories
We follow the commonsense categories defined in Liu et al. (2022) to categorize knowledge needed to understand each sentence: physical object knowledge (obj), knowledge about visual scenes (vis), social knowledge about how humans generally behave (soc), or more specific cultural knowledge
(cul). The same sentence can require multiple types of knowledge. Table 6 shows the prevalence of each type of commonsense knowledge as documented by annotators. Social and object knowledge are the most dominant types required, with Yoruba having an especially high prevalence of social examples. Not many examples were marked as cultural. This may be due to differences in what annotators viewed as cultural knowledge: some knowledge may be considered to fall under the object or social category by annotators, but these same examples may seem culturally specific to people residing in the United States because the objects referenced are not necessarily relevant to English speakers in the US.
18/481 Christianity related concepts, while English had 13/954. Kannada did not have any Christianity related concepts but rather concepts related to Hinduism.
Lang. hi id jv kn su sw yo en Most similar kn jv sw hi jv hi sw sw Table 6: Proportion of common-sense categories.
| Lang. | Object | Visual | Social | Cultural |
|---------|----------|----------|----------|------------|
| hi | 52.4 | 16.4 | 42.0 | 9.2 |
| id | 45.8 | 5.7 | 45.6 | 7.5 |
| jv | 34.0 | 15.0 | 43.3 | 10.0 |
| kn | 63.3 | 17.1 | 20.3 | 15.2 |
| su | 34.3 | 8.6 | 33.3 | 24.0 |
| sw | 48.0 | 20.2 | 32.2 | 5.6 |
| yo | 37.3 | 6.1 | 81.0 | 10.7 |
## 4.3 Cross-Lingual Concept Distribution
To better understand the linguistic and cultural distribution of examples, we extract sentence-level representations from two models: i) XLM-Rlarge
(Conneau et al., 2019), our best performing baseline model; and ii) LaBSE (Feng et al., 2020), a language-agnostic sentence embedding model, optimized for cross-lingual retrieval. We observed that XLM-R clusters by language, whereas LaBSE
clusters sentences from multiple languages together, based on conceptual similarity (as shown in Figure 2). Since LaBSE is optimized for crosslingual sentence similarity, we chose the latter to conduct further analysis.
First, we probe different edges of the cluster and observe concepts along each edge, as visualized in Figure 1. For each concept, we observe sentences from various languages clustering together. Further, these sentences portray cultural traits pertaining to each language. For example, *rice* is commonly mentioned in languages from Indonesia, given that it is a staple food product there.8 Other examples include sentences in Hindi such as This house is as old as a diamond (*diamonds* have a significant historical background in India)
or Your house is worth lakhs (*lakh* is an Indian English term).9 To qualitatively study cultural references, we further analyse metaphors belonging to universal concepts such as food, *weather/season*, and *friendship*, searching for sentences containing these keywords.10 We obtain 230 sentences containing *food*,
111 sentences containing *weather/season* and 307 sentences containing *friend*. A few examples are as shown in Table 7. We observe multiple regional and cultural references, which may not be under-
![4_image_0.png](4_image_0.png)
![4_image_1.png](4_image_1.png)
standable by non-native speakers. For example, annotators make references to the *weather/season* with *Peacock* and *frying fish on asphalt* which are innate comparisons in su. With reference to *food*,
Indian food commonly uses *Neem* and *Tamarind* as referenced by metaphors in kn and hi. *Neem* is a bitter medicinal herb and *Tamarind* is used to add sourness to food. Finally, we see references to mythological and fictional characters across*friendship* metaphors, where annotators draw from their attributes to describe friendships.
## 5 Evaluation And Results 5.1 Zero-Shot 5.1.1 Zero-Shot Evaluation
Here, we simply fine-tune the Multilingual Pretrained Language Models (MPLMs) on the English labelled data and evaluate on all target languages.
This was performed in the standard format of inputting each example as [CLS] [sentence] [SEP]
[meaning1] [SEP] [meaning2] and using a linear layer on the [CLS] token to classify the answer.
## 5.1.2 Zero-Shot Transfer Results
We present zero-shot evaluation results in Table 8, noting that there can be two contributors to the gap in performance in these seven languages as compared to English. First, since our fine-tuning language is English, there can be a drop in performance simply due to cross-lingual transfer. Second, there is a concept shift in these metaphors, as evidenced by our analysis in Section 4. To discern the contribution of both, we machine-translate the target test sets to en (we refer to this as translatetest). The difference between translate-test and zero-shot, can be thought of as the cross-lingual transfer gap, while the rest of the difference between translate-test and en test performance can be attributed to the concept shift. Due to possible MT errors, the results here represent upper bounds for concept shift and cross-lingual shift, which is
| weather/season | References to | | | | |
|---------------------|---------------------|--------------------------|-----------------|----|------------|
| References to | food | References to friendship | | | |
| The Indian Ocean | That food is | My friend's father | | | |
| is sparkling like a | as sweet as Neem | jv | is like a raden | | |
| su | Christmas season. | kn | | | |
| Peacock this | werkudara. | | | | |
| The weather is | Hotel food | He guided his | | | |
| the rainy season. | hi | | | | |
| also warm like | tamarind. | hi | | | |
| was like | friend like | | | | |
| kn | Krishna. | | | | |
| The weather | His waist is | | | | |
| looks like you can | His friend is | | | | |
| fry fish on the | the width of | | | | |
| su | asphalt. | sw | a baobab. | sw | abunuwasi. |
| The taste of | He asks the help of | | | | |
| Tina and Ravi's | this food is | his friends just like | | | |
| monsoon season. | jv | | | | |
| love is like | id | the king of Tanah | | | |
| hi | like boiled tempeh. | Djawo Kingdom. | | | |
## Further Discussed In Section 6.1. The Concept Shift Gap Is Generally Greater
than the cross-lingual gap. As reported in Table 8, the concept shift gap is greater than the cross-lingual transfer gap for all languages except Swahili, across all models. This result for sw corroborates our findings in Section 4, where we observe that en shares the greatest proportion of object concepts with sw. Given Swahili's extremely low-representation in MPLMs (Table 5), and its high concept overlap with English, we cover most of the gap by simply translating sw to en. For Indonesian (id), we observe that zero-shot performance itself is close to en performance (83.6%) for XLM-R, since id is well-represented in this model
(Table 5). Hence, translating to en does not help, and the model needs to be competent in better understanding the cultural references specific to id.
In mBERT however, id is poorly represented, and translating to en does help improve performance.
## Performance Increases As Model And Training Data Size Increase, But Moreso For Higher
resource languages. The smallest model examined, mBERT, has relatively poor performance for all languages, as all languages have < 60% accuracy. Hindi and Indonesian, the two highestresource languages in our dataset, show a high gain in performance when using a larger model, increasing to 67.58% and 78.09% accuracy respectively.
This is especially true for Indonesian, which has a relatively high amount of training data as shown in Table 5. However, lower resource languages tend to show a more modest gain in performance.
## 5.2 Few-Shot
5.2.1 Few-shot evaluation While it is common to fine-tune MPLMs on English, given its widespread use and availability, several past works have shown how this is suboptimal (Lin et al., 2019; Debnath et al., 2021)
and choosing optimal transfer languages is an important research question in itself (Dhamecha et al., 2021). While the design of an ideal allocation of annotation resources is still unknown, Lauscher et al. (2020) demonstrate the effectiveness of investing in few-shot (5-10) in-language task-specific examples, which provides vast improvements over the zero-shot setup. We include between 2-50 labelled pairs of sentences from each target language, in addition to the English labelled data, for fine-tuning the model.
Training details for all models can be found in Appendix E.
## 5.2.2 Few-Shot Results
Figure 3 presents the effects of few-shot transfer for each language. Generally, the performance gain is modest. This aligns with results from Lauscher et al. (2020), who found that performance gains were quite small on XNLI. As our task is also an NLI task, we may expect similar improvements. However, we find collecting some cultural examples could disproportionately help low-resource languages.
Augmenting with few examples usually does not help much We observed that with a few exceptions, the increase in accuracy on the test set gained was small (< 1%). This is likely because of the diversity of facts needed in order to improve performance. As noted in Section 4.1 and Table 1, this dataset contains many unique cultural references that do not repeat, limiting the utility of seeing a few examples.
Lower-resource languages benefit more greatly from augmentation However, there are a few exceptions to this trend. In particular, adding 50 paired Kannada examples to XLM-Rlarge improved performance by 3.83%. Swahili also improves by 1.10% with 50 additional examples for XLM-Rbase, and Sundanese improves by 2.33% with 50 examples for mBERTbase.
## 5.3 Evaluation Of Large Language Models
In addition to the three MPLMs we examine in detail, we also examine the zero-shot performance of large pretrained language models. We choose to
| Model | Language | Zero-shot | Translate-test | Cross-Lingual | Concept Shift |
|------------------|-------------|--------------|------------------|-----------------|-----------------|
| Performance | (to EN) | Transfer Gap | Gap | | |
| en𝑑𝑒𝑣 | 81.50 ±2.41 | 81.50 ±2.41 | 0.00 | 0.00 | |
| hi | 67.58 ±1.38 | 67.82 ±1.52 | 0.24 | 13.68 | |
| id | 78.09 ±1.14 | 77.51 ±0.91 | -0.58 | 3.99 | |
| jv | 60.93 ±1.95 | 68.13 ±1.66 | 7.20 | 13.37 | |
| kn | 58.08 ±2.10 | 63.67 ±0.98 | 5.59 | 17.83 | |
| su | 60.40 ±1.98 | 70.07 ±0.92 | 9.67 | 11.43 | |
| sw | 58.16 ±0.73 | 75.29 ±2.05 | 17.13 | 6.21 | |
| yo | - | - | - | - | |
| XLM-Rlarge | en𝑑𝑒𝑣 | 75.26 ±0.95 | 75.26 ±0.95 | 0.00 | 0.00 |
| hi | 62.48 ±0.31 | 63.29 ±0.84 | 0.81 | 11.97 | |
| id | 68.88 ±0.71 | 66.54 ±1.22 | -2.34 | 9.26 | |
| jv | 53.67 ±0.54 | 58.17 ±0.82 | 4.50 | 17.09 | |
| kn | 54.67 ±1.31 | 57.86 ±1.10 | 3.20 | 17.40 | |
| su | 52.41 ±1.79 | 61.33 ±0.68 | 8.93 | 13.93 | |
| sw | 52.73 ±1.38 | 65.77 ±1.82 | 13.04 | 7.31 | |
| yo | - | - | - | - | |
| XLM-Rbase | en𝑑𝑒𝑣 | 70.88 ±2.46 | 70.88 ±2.46 | 0.00 | 0.00 |
| hi | 51.32 ±0.94 | 59.45 ±1.77 | 8.13 | 11.43 | |
| id | 56.56 ±1.66 | 63.30 ±1.12 | 6.74 | 7.58 | |
| jv | 55.06 ±1.70 | 60.76 ±2.31 | 5.70 | 10.12 | |
| kn | 52.63 ±1.15 | 56.70 ±0.77 | 4.07 | 14.18 | |
| su | 52.87 ±1.67 | 59.37 ±2.37 | 6.51 | 11.51 | |
| sw | 52.12 ±1.09 | 63.57 ±0.78 | 11.45 | 7.31 | |
| yo | 50.52 ±1.04 | 50.60 ±1.28 | 0.08 | 20.28 | |
| mBERTbase | en𝑑𝑒𝑣 | 74.86 | 74.86 | 0.00 | 0.00 |
| hi | 50.60 | 59.62 | 9.02 | 15.24 | |
| id | 64.21 | 66.93 | 2.72 | 7.93 | |
| jv | 51.00 | 62.17 | 11.17 | 12.70 | |
| kn | 50.08 | 57.85 | 7.76 | 17.02 | |
| su | 49.67 | 58.33 | 8.67 | 16.53 | |
| sw | 54.83 | 65.33 | 10.51 | 9.53 | |
| yo | 50.27 | 48.77 | -1.51 | 26.10 | |
| text-davinci-003 | | | | | |
examine GPT-3 (text-davinci-003) and BLOOM176B. As these models are autoregressive rather than masked models, we follow the standard procedure of prediction via choosing the answer with a higher predicted probability (Jiang et al., 2021).
The performance of GPT-3 is not very good on most languages when tested zero-shot, but we note that it has a reasonable zero-shot performance on the English development set (74.86%), higher than the reported results of text-davinci-002. (Liu et al., 2022). There is a high concept shift gap as with the other models but also a comparatively higher cross-lingual gap as this model is much stronger in English.
## 6 Error Analysis 6.1 Effect Of English Mt
As noted in Section 5.1, there are two major factors that can cause difficulty in cross-lingual transfer: language shift and concept shift. We try to approximate these effects by translating the test set in each language to English. However, this is done with machine translation, so there may be errors.
Despite this, translation can still benefit the model if the original language was low-resource. We can divide the model performance into four cases as shown in Table 9.
Translate-EN
![6_image_0.png](6_image_0.png)
$\begin{array}{l}\textbf{Correct}\\ \textbf{Incorrect}\end{array}$
$\mathbf{\hat{s}}$
Table 9: Confusion matrix of examples that were answered correctly by XLM-Rlarge before and after translation to English, across all languages combined.
First, there are easy examples (53%) which are answered correctly in both the original language and translated versions. Next there are linguisti-
![7_image_0.png](7_image_0.png)
cally challenging examples (19%) which are originally answered incorrectly, but switch to being answered correctly after being translated to English.11 There are difficult-to-translate or incorrectly translated examples (15%). It's likely that these errors can be completely eliminated with a careful enough translation. Lastly, there are hard examples (12%) which are answered incorrectly before and after being translated. These contain many inherently difficult examples, and examples with specific cultural terms. Examples of each type can be found in Appendix G.
## 6.2 Cultural Examples
We examine the accuracy of XLM-Rlarge on the commonsense categories in Section 4.2. Overall, there is a small difference in accuracy between cultural examples and the overall accuracy, with overall accuracy at 63.99% and accuracy on cultural examples at 61.68%. Accuracy for all languages can be found in Appendix H. This is a preliminary analysis, but may indicate that references to explicit named entities may not be the only issue for the model with regard to culture.
## 7 Related Work 7.1 Figurative Language
English-centric: Most previous inference tasks on figurative language have been in English
(Chakrabarty et al., 2022; Liu et al., 2022; Pedinotti et al., 2021a). Further, research on figurative language in English centers around training models to detect the presence of metaphors in text (Leong et al., 2020; Stowe and Palmer, 2018; 11Linguistically challenging here means that the language is more challenging for an LM to perform well in, not that the linguistic structure is very difficult.
Tsvetkov et al., 2014). This is done using datasets primarily consisting of idioms and conventionalized metaphors. However, recognizing common metaphorical phrases may not truly test a model's ability to interpret figurative language. There is limited research on understanding metaphors, which mostly looks at linking metaphorical phrases to their literal meanings through paraphrase detection (Bizzoni and Lappin, 2018) or generation (Shutova, 2010; Mao et al., 2018).
Some studies investigate LMs' ability to understand metaphors, but they do not consider the fact that metaphors have different meanings based on context (Pedinotti et al., 2021b; Aghazadeh et al.,
2022). Most recently, Liu et al. (2022) released a dataset which requires a model to infer the correct meaning of metaphor, rather than simply identifying or paraphrasing it, hence calling to test deeper semantic understanding.
Extension to Multilingual: Research in corpus linguistics (Díaz-Vera and Caballero, 2013; Kövecses, 2004; Charteris-Black and Ennis, 2001) suggests that there significant variation in metaphorical language between cultures. There has been some work in detecting metaphors in multilingual text (Tsvetkov et al., 2013; Shutova et al., 2017).
These works have focused on three relatively highresource languages: English, Russian and Spanish. Both focused on cross-lingual techniques to identify metaphors from newspapers and dictionaries. Hence, there hasn't been any large-scale multilingual dataset of figurative language constructed, which would allow one to study cultural variations across metaphors. We fill this gap with the release of our dataset.
## 8 Conclusion
Despite being relatively widespread, figurative language is relatively under-studied in NLP. This is especially true for non-English languages. To enable progress on figurative language processing, we create MABL, a figurative inference dataset across seven languages. We find considerable variation in figurative language use across languages, particularly in the unique objects that people invoke in their comparisons, spanning differences in food, mythology and religion, and famous figures or events. This variation is likely due to differences in cultural common-ground between the countries in which these languages are spoken. We find that multilingual models have considerable room for improvement on this task, and cross-cultural shift may play a significant role in the performance degradation from English. We encourage the NLP
community to further examine the role that culture plays in language, and note that figurative language can be used as a testbed to examine crosslinguistic and cross-cultural variations.
## 9 Limitations
First, despite our pursuit of attempting to understand figurative language use across cultures, we have barely scratched the surface in terms of diverse representation. Due to limited scope, budget, and resources, we collect data from 2-3 annotators per language, for seven languages. Further, culture can vary greatly within a language (Hershcovich et al., 2022). Therefore, until we can represent all of the worlds' people and their languages, there will always be room for improvement.
We also acknowledge that the syntax captured in the dataset may not be the most diverse, as many examples follow the template "<X> is like <Y>". However, we create these simpler examples as a first step, since extension to more complex and naturalistic language can be included in future work.
Second, to analyse concept shift, we machine translate test sets into English. However, these translations can be erroneous to varying degrees, which may have resulted in an over-estimation of error attribution to concept shift. This could not be avoided however, due to limited resources of obtaining human translations.
Third, English may not be the best language to transfer from in zero-shot evaluation of multilingual models. While we were constrained by training data availability, past works have shown that machine-translating train sets can help, an avenue we haven't explored here. Even though we experiment with few-shot evaluation, there may exist an optimal combination of source languages which best transfer to our target languages.
Fourth, the English authors recognized culturespecific terms that were not marked as cultural by annotators in the commonsense categorization across all languages. This may be because annotators, being mostly familiar with their own cultures, attributed culturally specific facts and terms as being common sense. Likewise, the Englishspeaking participants may have viewed a separate set of facts as common sense which would not be agreed upon by people from a different culture. It is thus difficult to disentangle common sense and culture in many cases.
## References
Ehsan Aghazadeh, Mohsen Fayyaz, and Yadollah Yaghoobzadeh. 2022. Metaphors in pre-trained language models: Probing and generalization across datasets and languages. In *Proceedings of the 60th* Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2037–2050, Dublin, Ireland. Association for Computational Linguistics.
Alham Fikri Aji, Genta Indra Winata, Fajri Koto, Samuel Cahyawijaya, Ade Romadhony, Rahmad Mahendra, Kemal Kurniawan, David Moeljadi, Radityo Eko Prasojo, Timothy Baldwin, Jey Han Lau, and Sebastian Ruder. 2022. One country, 700+ languages: NLP challenges for underrepresented languages and dialects in Indonesia. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers),
pages 7226–7249, Dublin, Ireland. Association for Computational Linguistics.
Yuri Bizzoni and Shalom Lappin. 2018. Predicting human metaphor paraphrase judgments with deep neural networks. In Proceedings of the Workshop on Figurative Language Processing, pages 45–55, New Orleans, Louisiana. Association for Computational Linguistics.
Tuhin Chakrabarty, Arkadiy Saakyan, Debanjan Ghosh, and Smaranda Muresan. 2022. Flute: Figurative language understanding through textual explanations.
Jonathan Charteris-Black and Timothy Ennis. 2001. A
comparative study of metaphor in spanish and english financial reporting. *English for specific purposes*, 20(3):249–266.
Jonathan H. Clark, Eunsol Choi, Michael Collins, Dan Garrette, Tom Kwiatkowski, Vitaly Nikolaev, and Jennimaria Palomaki. 2020. TyDi QA: A benchmark
for information-seeking question answering in typologically diverse languages. *Transactions of the Association for Computational Linguistics*, 8:454–470.
Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116.
Verna Dankers, Elia Bruni, and Dieuwke Hupkes. 2022.
The paradox of the compositionality of natural language: A neural machine translation case study. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1:
Long Papers), pages 4154–4175, Dublin, Ireland.
Association for Computational Linguistics.
Arnab Debnath, Navid Rajabi, Fardina Fathmiul Alam, and Antonios Anastasopoulos. 2021. Towards more equitable question answering systems: How much more data do you need? arXiv preprint arXiv:2105.14115.
Tejas Indulal Dhamecha, Rudra Murthy V, Samarth Bharadwaj, Karthik Sankaranarayanan, and Pushpak Bhattacharyya. 2021. Role of language relatedness in multilingual fine-tuning of language models: A
case study in indo-aryan languages. arXiv preprint arXiv:2109.10534.
Javier E Díaz-Vera and Rosario Caballero. 2013. Exploring the feeling-emotions continuum across cultures: Jealousy in english and spanish. Intercultural Pragmatics, 10(2):265–294.
Matthew S. Dryer and Martin Haspelmath, editors.
2013. *WALS Online*. Max Planck Institute for Evolutionary Anthropology, Leipzig.
Mai ElSherief, Caleb Ziems, David Muchlinski, Vaishnavi Anupindi, Jordyn Seybolt, Munmun De Choudhury, and Diyi Yang. 2021. Latent hatred: A benchmark for understanding implicit hate speech. In *Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing*, pages 345–
363, Online and Punta Cana, Dominican Republic.
Association for Computational Linguistics.
Fangxiaoyu Feng, Yinfei Yang, Daniel Cer, Naveen Arivazhagan, and Wei Wang. 2020. Languageagnostic bert sentence embedding. *arXiv preprint* arXiv:2007.01852.
Susan Fussell and Mallie Moss. 2008. Figurative language in emotional communication.
Dedre Gentner. 1983. Structure-mapping: A theoretical framework for analogy*. *Cognitive Science*,
7(2):155–170.
Harald Hammarström, Robert Forkel, and Martin Haspelmath. 2022. Glottolog 4.7. Max Planck Institute for the Science of Human History.
Daniel Hershcovich, Stella Frank, Heather Lent, Miryam de Lhoneux, Mostafa Abdou, Stephanie Brandl, Emanuele Bugliarello, Laura Cabello Piqueras, Ilias Chalkidis, Ruixiang Cui, Constanza Fierro, Katerina Margatina, Phillip Rust, and Anders Søgaard. 2022. Challenges and strategies in crosscultural NLP. In *Proceedings of the 60th Annual* Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 6997–7013, Dublin, Ireland. Association for Computational Linguistics.
Zhengbao Jiang, Jun Araki, Haibo Ding, and Graham Neubig. 2021. How can we know when language models know? on the calibration of language models for question answering. *Transactions of the Association for Computational Linguistics*, 9:962–977.
Pratik Joshi, Sebastin Santy, Amar Budhiraja, Kalika Bali, and Monojit Choudhury. 2020. The state and fate of linguistic diversity and inclusion in the nlp world. *arXiv preprint arXiv:2004.09095*.
Zoltán Kövecses. 2004. Introduction: Cultural variation in metaphor. *European Journal of English Studies*, 8(3):263–274.
G. Lakoff and M. Johnson. 1981. Metaphors we Live By. University of Chicago Press.
Anne Lauscher, Vinit Ravishankar, Ivan Vulić, and Goran Glavaš. 2020. From zero to hero: On the limitations of zero-shot cross-lingual transfer with multilingual transformers. arXiv preprint arXiv:2005.00633.
Chee Wee Leong, Beata Beigman Klebanov, Chris Hamill, Egon Stemle, Rutuja Ubale, and Xianyang Chen. 2020. A report on the 2020 vua and toefl metaphor detection shared task. In *Proceedings of* the second workshop on figurative language processing, pages 18–29.
Yu-Hsiang Lin, Chian-Yu Chen, Jean Lee, Zirui Li, Yuyan Zhang, Mengzhou Xia, Shruti Rijhwani, Junxian He, Zhisong Zhang, Xuezhe Ma, et al. 2019.
Choosing transfer languages for cross-lingual learning. *arXiv preprint arXiv:1905.12688*.
Emmy Liu, Chenxuan Cui, Kenneth Zheng, and Graham Neubig. 2022. Testing the ability of language models to interpret figurative language. In *Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational* Linguistics: Human Language Technologies, pages 4437–4452, Seattle, United States. Association for Computational Linguistics.
Fangyu Liu, Emanuele Bugliarello, Edoardo Maria Ponti, Siva Reddy, Nigel Collier, and Desmond Elliott. 2021. Visually grounded reasoning across languages and cultures. *arXiv preprint* arXiv:2109.13238.
Rui Mao, Chenghua Lin, and Frank Guerin. 2018.
Word embedding and WordNet based metaphor identification and interpretation. In *Proceedings of the* 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1222–1231, Melbourne, Australia. Association for Computational Linguistics.
Paolo Pedinotti, Eliana Di Palma, Ludovica Cerini, and Alessandro Lenci. 2021a. A howling success or a working sea? testing what BERT knows about metaphors. In Proceedings of the Fourth BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP, pages 192–204, Punta Cana, Dominican Republic. Association for Computational Linguistics.
Paolo Pedinotti, Giulia Rambelli, Emmanuele Chersoni, Enrico Santus, Alessandro Lenci, and Philippe Blache. 2021b. Did the cat drink the coffee? challenging transformers with generalized event knowledge. In Proceedings of *SEM 2021: The Tenth Joint Conference on Lexical and Computational Semantics, pages 1–11, Online. Association for Computational Linguistics.
Sunil Sharma. 2017. Happiness and metaphors: a perspective from hindi phraseology. *Yearbook of* Phraseology, 8(1):171–190.
Ekaterina Shutova. 2010. Automatic metaphor interpretation as a paraphrasing task. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 1029–1037, Los Angeles, California. Association for Computational Linguistics.
Ekaterina Shutova. 2011. Computational approaches to figurative language.
Ekaterina Shutova, Lin Sun, Elkin Darío Gutiérrez, Patricia Lichtenstein, and Srini Narayanan. 2017. Multilingual metaphor processing: Experiments with semi-supervised and unsupervised learning. *Computational Linguistics*, 43(1):71–123.
Kevin Stowe and Martha Palmer. 2018. Leveraging syntactic constructions for metaphor identification.
In *Proceedings of the workshop on figurative language processing*, pages 17–26.
Yulia Tsvetkov, Leonid Boytsov, Anatole Gershman, Eric Nyberg, and Chris Dyer. 2014. Metaphor detection with cross-lingual model transfer. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 248–258.
1. The surgeon is (a lumberjack/a ballet dancer).
2. The movie has the depth of (a wading pool/the grand canyon)
3. Her commitment to the cause was as sturdy as (plywood/oak)
Yulia Tsvetkov, Elena Mukomel, and Anatole Gershman. 2013. Cross-lingual metaphor detection using common semantic features. In *Proceedings of the* First Workshop on Metaphor in NLP, pages 45–51.
Betty van Aken, Julian Risch, Ralf Krestel, and Alexander Löser. 2018. Challenges for toxic comment classification: An in-depth error analysis.
If you're stuck, a general template you can use is
<SUBJECT> is <metaphor 1>/<metaphor 2>.
## A Selected Languages B Instructions For Annotators
Table 10 contains additional information on languages included in the dataset. Information on languages was collected from the World Atlas of Language Structures (WALS) and Glottolog 4.7
(Hammarström et al., 2022; Dryer and Haspelmath, 2013).
In Liu et al. (2022), workers are prompted with random words taken from English metaphorical frames in Lakoff and Johnson (1981). However, as these metaphorical frames are not readily available in other languages, and we did not want to bias workers toward concepts that are only relevant in English, we chose to omit this prompt and have workers generate sentences freely, while encouraging them to emphasize aspects of their culture. Annotators were paid according to their proposed hourly range ($25/hour on average, all above $15/hr). Validators were paid $15/hr. This study was approved by our IRB. No identifying information was collected. Note that this is the English version of the instructions, as instructions were machine-translated to each target language and corrected by native speakers.
Your task is to generate pairs of sentences with opposite or very different meanings, both of which contain metaphors. You can feel free to incorporate creativity into the metaphors, but also make sure that they're something that could be understood by the speakers of the language that you are generating metaphors for, e.g., "this is as classic as pancakes for breakfast" to mean "this is classic" wouldn't make sense for a culture in which pancakes aren't traditionally eaten for breakfast.
You can do this by thinking of a metaphor that conveys a certain meaning, and replacing the metaphorical phrase with another metaphorical phrase of the same type (for instance, noun phrases, verb phrases or adjective phrases) that conveys the opposite meaning.
Here are some examples of metaphors to give you an idea of what we're looking for: Please write both the metaphor and its meaning for each sentence.
| Language | Branch | Countries | Word Order |
|------------|---------------|----------------|--------------|
| Hindi | Indo-European | India | SOV |
| Indonesian | Austronesian | Indonesia | SVO |
| Javanese | Austronesia | Indonesia | SVO |
| Kannada | Dravidian | India | SOV |
| Sundanese | Austronesian | Indonesia | SVO |
| Swahili | Niger-Congo | Tanzania | SVO |
| Yoruba | Niger-Congo | Nigeria, Benin | SVO |
| English | Indo-European | Various | SVO |
## C Unique Concepts In Different Languages E Training Details
| Lang. | Unique Concepts | Examples |
|---------------------------------------------|-------------------|------------------------------------------------|
| hi | 494 | samosa |
| seasonal rain sweet gulkand | | |
| id | 742 | smell of durian young rambutan shinchan |
| jv | 303 | elephant riding rickshaw sugar cane tripe skin |
| kn | 444 | dosa |
| ayurveda | | |
| buddha's smile | | |
| su | 365 | sticky rice papaya tree |
| lotus flower in water | | |
| sw | 481 | baobab |
| king solomon clove ointment | | |
| yo | 333 | president buhari rock of olumu aníkúlápó movie |
| en | 954 | thanksgiving buffet |
| washington post reporter renaissance artist | | |
Table 11: Number and examples of unique object concepts expressed in each language (translated to EN).
Unique concepts here are those not shared by any other language in the dataset.
## D Jaccard Similarity Between Concepts
Table 12 contains Jaccard similarities for sets of concepts found in each language. Language pairs with the highest similarity (row-wise) are bolded.
Table 11 displays the number of unique concepts and some examples in each language after basic deduplication (lemmatization and casing).
A hyperparameter grid search was conducted over values: epochs ∈ {10, 20, 30}, lr ∈ {2 × 10−4, 5 ×
10−4, 2 × 10−5, 5 × 10−5, 2 × 10−6, 5 × 10−6}, and batch size ∈ {32, 64}.
XLM-Rlarge was trained for 20 epochs with a learning rate of 5 × 10−6 and a batch size of 32. XLM-Rlarge was trained for 30 epochs with a learning rate of 2 × 10−5 and a batch size of 64.
mBERTbase was trained for 30 epochs with a learning rate of 5 × 10−5 and a batch size of 64. An A6000 GPU was used for each model. Each training run takes on the order of a few minutes.
Most seeds lead to a near-random performance on the English dev set, while a small minority of seeds lead to non-random performance. We took the top 5 seeds from 1-100 found in terms of English dev set performance in order to avoid including results from degenerate seeds.
We did not experiment with trying to optimize the hyperparameters for the experiments in Section 5.2.2 but rather used the same ones found previously. This may account for some settings leading to lower performance.
## F Few-Shot Full Results
Table 13 outlines the effect of adding ∈
{2, ..., 50} examples in each target language.
## G Four-Quadrant Examples G.0.1 Easy - **नदʍ का पानी ɟक्रस्टल कʏ तरह साफ है।**/The Water Of The River Is As Clear As Crystal
- **Ia berjalan layaknya siput**/he walks like a snail
- Inú yàrá ìdánwò nàá palọ́lọ́ bí i itẹ́
òkú/inside the exam room was a dead silence
- **Vijana ndio taifa la kesho**/youth is the nation of tomorrow
| hi | id | jv | kn | su | sw | yo | en | |
|------|--------|--------|--------|--------|--------|--------|--------|--------|
| hi | - | 0.0477 | 0.0541 | 0.0945 | 0.0534 | 0.0904 | 0.0509 | 0.0631 |
| id | 0.0477 | - | 0.0588 | 0.0431 | 0.0405 | 0.0544 | 0.0352 | 0.0425 |
| jv | 0.0541 | 0.0588 | - | 0.0619 | 0.067 | 0.0724 | 0.0449 | 0.0377 |
| kn | 0.0945 | 0.0431 | 0.0619 | - | 0.0464 | 0.0842 | 0.0594 | 0.0586 |
| su | 0.0534 | 0.0405 | 0.067 | 0.0464 | - | 0.0563 | 0.0444 | 0.0312 |
| sw | 0.0904 | 0.0544 | 0.0724 | 0.0842 | 0.0563 | - | 0.0671 | 0.0693 |
| yo | 0.0509 | 0.0352 | 0.0449 | 0.0594 | 0.0444 | 0.0671 | - | 0.0311 |
| en | 0.0631 | 0.0425 | 0.0377 | 0.0586 | 0.0312 | 0.0693 | 0.0311 | - |
= 2 = 10 = 20 = 30 = 40 = 50
Lang. Score Δ Score Δ Score Δ Score Δ Score Δ Score Δ
hi 67.47 -0.11 67.47 -0.11 67.29 -0.29 **67.72 0.14** 67.67 0.09 67.58 0
id 78.01 -0.08 78.04 -0.05 78.22 0.13 77.91 -0.18 78.04 -0.05 **78.56 0.47** jv 60.77 -0.16 **61.14 0.2** 60.36 -0.58 60.78 -0.16 61.08 0.14 60.76 -0.17
kn 58.09 0.01 58.17 0.09 59.34 1.26 59.38 1.3 60.39 2.31 **61.91 3.83**
su 60.47 0.07 60.55 0.15 **61.36 0.96** 60.22 -0.18 60.35 -0.05 61.28 0.88
sw 58.23 0.07 58.16 0 58.49 0.33 58.88 0.72 58.92 0.76 **59.00 0.84**
yo - - - - - - - - - - - -
hi 62.47 -0.01 **62.51 0.03** 62.27 -0.21 62.45 -0.03 62.06 -0.42 61.89 -0.59
id **69.23 0.35** 69.07 0.19 69.16 0.28 69.20 0.32 68.66 -0.22 69.14 0.26
jv 54.09 0.43 54.31 0.64 54.04 0.37 54.53 0.86 53.92 0.25 **54.60 0.93**
kn 54.62 -0.04 54.55 -0.12 54.56 -0.11 54.53 -0.14 **55.05 0.38** 54.44 -0.22
su 51.95 -0.46 51.90 -0.51 51.72 -0.69 51.37 -1.03 51.27 -1.14 50.48 -1.93
sw 52.78 0.05 52.76 0.03 53.00 0.27 53.04 0.31 53.50 0.76 **53.83 1.10**
yo - - - - - - - - - - - -
= 2 = 4 = 6 = 8 = 10 = 50
hi 51.43 0.11 51.41 0.09 **53.42 2.10** 51.50 0.18 51.47 0.15 50.93 -0.39 id 56.59 0.02 56.57 0.01 56.58 0.01 **56.62 0.05** 56.59 0.03 56.50 -0.07 jv **55.13 0.07** 55.03 -0.03 54.93 -0.13 55.00 -0.06 54.86 0.20 54.64 -0.42
kn 52.70 0.07 52.67 0.04 **52.70 0.07** 52.66 0.03 52.67 0.04 52.42 -0.20
su 52.83 -0.04 52.91 0.04 52.79 -0.07 52.54 -0.32 52.68 -0.19 **55.20 2.33**
sw 52.12 0 52.13 0.01 52.14 0.02 **52.20 0.08** 52.15 0.03 51.76 -0.36
yo 50.52 -0.02 50.50 -0.10 50.42 -0.19 50.31 -0.21 50.37 -0.15 50.35 -0.17
| XLM-Rlarge XLM-Rbase mBERT-base |
|-----------------------------------|
- **Dia menjalani hidup bak singa di kebun binatang**/he lives life like a lion in the zoo G.0.2 Challenge - linguistic
- Àgbẹ̀ **náà pa gbogbo ọmọ tí igi nàá bí lánàa**́/the farmer killed all the children that the tree gave birth to yesterday
- **Penzi lao ni kama moto wa kibatari**
kwenye upepo/their love is like fire in the wind
- **Kadang jelema teh bisa ipis kulit**
bengeut/sometimes people can have thin skin
- **Si eta kuliah siga nu teu kantos bobo**/that college guy looks like he never sleeps
- ಅವರು ನೀಡಿದ್ದ ನೀರು ಸಮುದ್ರದ ನೀರಿನಂತೆ ಉಪ್ಪಾಗಿತ್ತು/the water they gave was as salty as sea water G.0.3 Challenge - translation
- **hirup teh kudu boga kaditu kadieu**/life must have here and there
- लड़कʏ का ȭɜक्तत्व गुलाब जामुन कʏ तरह मीठा
था/the girl's personality was as sweet as Gulab Jamun
- Ìṣọ̀lá má ń tún ilé rẹ̀ **ṣe ní gbogbo**
nìgbà/honor does not repair his house all the time
- **Nek gawe wedang kopi Painem kaya disoki**
suruh/if you make a Painem coffee drink, it's like being told
- **Bapak tirine sifate kaya Gatot Kaca**/his stepfather is like Gatot Kaca
## G.0.4 Hard
- काɡलदास भारत के शेɜख्चली हैं।/Kalidas is Shekhchili of India
- उसके मन का मैल ɠमटʍ कʏ तरह छलनी से ɟनकल गया।/The filth of his mind was removed from the sieve like soil
- **Wajahku dan adikku ibarat pinang di belah dua**/My face and my sister are like areca nuts split in half.
- **Hari ini cuacanya seperti berada di di puncak gunung Bromo**/Today the weather is like being at the top of Mount Bromo
- **Doni karo Yanti pancen kaya Rahwana**
Sinta ing pewayangan/Doni and Yanti are really like Ravana Sinta in a puppet show
## H Accuracy On Annotated Commonsense Categories
Table 14 shows the accuracy on commonsense categories across all languages for XLM-Rlarge. Note that Yoruba is not included due to XLM-Rlarge not being trained on this language.
| Language | Category | Acc. |
|------------|------------|--------|
| hi | obj | 67.50 |
| vis | 67.48 | |
| soc | 67.86 | |
| cul | 70.65 | |
| id | obj | 76.60 |
| vis | 76.56 | |
| soc | 82.71 | |
| cul | 77.11 | |
| jv | obj | 65.02 |
| vis | 58.89 | |
| soc | 64.48 | |
| cul | 50.82 | |
| kn* | obj | 57.14 |
| vis | 36.36 | |
| soc | 55.56 | |
| cul | 77.78 | |
| su | obj | 57.07 |
| vis | 56.86 | |
| soc | 67.50 | |
| cul | 61.11 | |
| sw | obj | 58.06 |
| vis | 61.99 | |
| soc | 56.50 | |
| cul | 52.46 | |
| yo | obj | 48.15 |
| vis | 52.38 | |
| soc | 49.58 | |
| cul | 47.37 | |
Table 14: Performance of XLM-Rlarge on commonsense categories indicated by annotators.12
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
✓ A1. Did you describe the limitations of your work?
Limitations section on page 9
✗ A2. Did you discuss any potential risks of your work?
We believe that there are no significant risks posed by this work.
✓ A3. Do the abstract and introduction summarize the paper's main claims?
Abstract and intro page 2
✓ A4. Have you used AI writing assistants when working on this paper?
ChatGPT was used for summarizing/shortening areas of the paper
## B ✓ **Did You Use Or Create Scientific Artifacts?** Section 3
✓ B1. Did you cite the creators of artifacts you used?
throughout paper
✗ B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
No, however we use only publically released models and data and cite all artifacts used.
B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
Not applicable. Left blank.
✗ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
There were no specific individuals named (beyond references to famous individuals such as the president)
✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
Section 3/4
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
Section 3
## C ✓ **Did You Run Computational Experiments?** Section 5
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Appendix E
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
In table 8 C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
Not applicable. Left blank.
D ✓ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Section 3
✓ D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
Appendix B
✓ D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)?
Section 3
✗ D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
We did not directly include this in the instructions, but it was explained through text/in person if annotators were interested.
✗ D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
We have submitted a request for exemption at this time due to minimal risk posed, but the IRB has not gotten back to us yet.
✓ D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
Table 5, appendix A |
kweon-etal-2023-open | Open-{W}iki{T}able : Dataset for Open Domain Question Answering with Complex Reasoning over Table | https://aclanthology.org/2023.findings-acl.526 | Despite recent interest in open domain question answering (ODQA) over tables, many studies still rely on datasets that are not truly optimal for the task with respect to utilizing structural nature of table. These datasets assume answers reside as a single cell value and do not necessitate exploring over multiple cells such as aggregation, comparison, and sorting. Thus, we release Open-WikiTable, the first ODQA dataset that requires complex reasoning over tables. Open-WikiTable is built upon WikiSQL and WikiTableQuestions to be applicable in the open-domain setting. As each question is coupled with both textual answers and SQL queries, Open-WikiTable opens up a wide range of possibilities for future research, as both reader and parser methods can be applied. The dataset is publicly available. | # Open-Wikitable: Dataset For Open Domain Question Answering With Complex Reasoning Over Table
Sunjun Kweon1, Yeonsu Kwon1, Seonhee Cho1**, Yohan Jo**2∗
, Edward Choi1 KAIST1, Amazon2
{sean0042, yeonsu.k, ehcho8564, edwardchoi}@kaist.ac.kr
{jyoha}@amazon.com
## Abstract
![0_Image_0.Png](0_Image_0.Png)
Despite recent interest in open domain question answering (ODQA) over tables, many studies still rely on datasets that are not truly optimal for the task with respect to utilizing structural nature of table. These datasets assume answers reside as a single cell value and do not necessitate exploring over multiple cells such as aggregation, comparison, and sorting. Thus, we release Open-WikiTable, the first ODQA
dataset that requires complex reasoning over tables. Open-WikiTable is built upon WikiSQL and WikiTableQuestions to be applicable in the open-domain setting. As each question is coupled with both textual answers and SQL queries, Open-WikiTable opens up a wide range of possibilities for future research, as both reader and parser methods can be applied.
The dataset and code are publicly available1.
## 1 Introduction
Tables have played a prominent role as a source of knowledge in question answering (QA). They contain various types of data such as numeric, temporal, and textual information in a structured manner.
Early table QA datasets (Pasupat and Liang, 2015; Zhong et al., 2017; Yu et al., 2018) have focused on complex questions that exploit the structure of tables via aggregation, comparison, or sorting. However, these datasets assume that the relevant table is always given for each question (Kostic et al. ´ ,
2021), limiting their applicability in real-world scenarios. For more practical use, recent works extend tableQA to the open-domain setting, where the evidence table should be retrieved solely from using the question itself.
The first research of open-domain QA over tables is Herzig et al. (2021). They released a dataset, NQ-table, by extracting questions from Natural
∗ This work is not associated with Amazon.
1https://github.com/sean0042/Open_
WikiTable Questions (Kwiatkowski et al., 2019) whose answers reside in a table. All questions, however, are answered by extracting a single cell and do not necessitate any extensive reasoning across multiple cells. It is also notable that 55% of the evidence tables consist of only a single row, which has little structure.
Another work for open-domain table QA is Pan et al. (2021). They presented E2E-WTQ and E2EGNQ datasets, extensions of WikiTableQuestion
(Pasupat and Liang, 2015) and GNQtables (Shraga et al., 2020), to develop cell-level table retrieval models. However, as they assume cell extraction for the table QA task and construct the datasets accordingly, E2E-WTQ and E2E-GNQ share the same limitation as the NQ-table; all answers are restricted to a single cell. Another issue with these
| Datasets | Retrieval | Complex | # of | # of | Answer | |
|--------------------|-------------|-----------|--------------|---------|-----------|--------------------------|
| Annotation | Reference | | | | | |
| Availability | Reasoning | QA pairs | table corpus | | | |
| WikiTableQuestions | ✗ | ✓ | 22,033 | 2,108 | Text | Pasupat and Liang (2015) |
| WikiSQL | ✗ | ✓ | 80,654 | 24,241 | SQL | Zhong et al. (2017) |
| E2E-WTQ | ✓ | ✗ | 1,216 | 2,108 | Text | Pan et al. (2021) |
| E2E-GNQ | ✓ | ✗ | 789 | 74,224 | Text | Pan et al. (2021) |
| NQ-table | ✓ | ✗ | 11,628 | 169,898 | Text | Herzig et al. (2021) |
| Open-WikiTable | ✓ | ✓ | 67,023 | 24,680 | Text, SQL | |
datasets is their small size, containing only around 1k examples in total. This makes it challenging to train language models as there may not be enough data.
Given that there is currently no dataset that fully considers the structural property in the opendomain setting, we present **Open-WikiTable**. It extends WikiSQL and WikiTableQuestions to be more applicable in the open-domain setting. OpenWikiTable is a large-scale dataset composed of 67,023 questions with a corpus of 24,680 tables.
The key features of our dataset are listed below.
- First, nearly 40% of the questions require advanced reasoning skills beyond simple filtering and cell selection. The model should utilize operations such as aggregating, sorting, and comparing multiple cells to derive an accurate answer.
- Second, all questions are carefully designed for the retrieval task in the open-domain setting. We manually re-annotated 6,609 table descriptions
(i.e. page title, section title, caption), then added them to the original question to ensure that questions convey sufficient context to specify the relevant table.
- Third, questions are paraphrased to reduce high word overlap between the question and the grounding table. It reflects a tendency in the open-domain setting where questions are often phrased in diverse styles, and terms in the questions may be different from those in the table.
- Lastly, every question in the dataset is labeled with both textual answers and SQL queries. This provides an opportunity to train and evaluate models with both common table QA techniques, Reader and *Parser* in parallel.
In this work, we thoroughly explain the data construction process of Open-WikiTable. We perform open domain question answering by incorporating a retrieval task with QA over tables (see Figure 1).
Then, we evaluate the performance of the retriever and the QA models with both reader and parser approaches.
## 2 Data Construction
Open-WikiTable is built upon two closed-domain table QA datasets - WikiSQL (Zhong et al.,
2017) and WikiTableQuestions (Pasupat and Liang, 2015). WikiSQL is a large-scale text-to-SQL
dataset but is composed of relatively simple questions since they are constructed from limited SQL
templates. WikiTableQuestions contains more complex questions involving superlative or arithmetic expressions but only provides textual form answers.
Shi et al. (2020) further annotated SQLs for a subset of WikiTableQuestions. By utilizing these datasets, we aim to create a diverse and intricate set of questions, with each question annotated with both a textual and logical form answer.
Although the questions in WikiSQL and WikiTableQuestions require more advanced table reasoning than those of existing open-domain table QA datasets (See Table 1), they possess two problems to be directly used in the open domain setting.
First, questions are not specific enough to retrieve relevant tables. Second, questions have high word overlaps with table contents which are unrealistic in the open-domain setting where the question can be expressed in lexically diverse forms. We resolve the first issue via decontextualization (2.1) and the second issue via paraphrasing (2.2), as elaborated in the following sections.
## 2.1 Decontextualization
Our goal is to decontextualize questions, that is, adding enough context about relevant tables to each question so that retrievers can find the relevant tables (Chen et al., 2020; Choi et al., 2021).
However, the obstacle here is that a significant portion of table descriptions provided by WikiSQL
and WikiTableQuestions were either missing or not specifically described to distinguish between tables. In this case, decontextualized questions still cannot point out the exact grounding tables
(appendix A.1). Therefore, we resolved this issue by comparing 6,609 problematic tables with the corresponding Wikipedia article and re-annotating table descriptions. The resulting table corpus of Open-WikiTable has 24,680 tables, all of which have distinct descriptions.
Next, the questions were decontextualized with the re-annotated table descriptions. All table descriptions necessary for the retrieval of the grounding table were incorporated into each question. We transformed the questions by utilizing GPT-J, a language model from Eleuther AI. In order to ensure that the generated question accurately reflects the original intention, we decontextualized the questions by maintaining the form of the original question while incorporating table descriptions only as adverbs, as exemplified in Appendix A.2. The generated questions were accepted only if all key entities (i.e. referred column names and condition values) of the original question and added table descriptions were preserved. If not, we repeatedly generated new samples until accepted.
## 2.2 Paraphrase
Although the decontextualization process ensures the questions are suitable for table retrieval, it is quite easy to retrieve the grounding table due to a high degree of word overlap in the question and the table contents. To address this issue, we further paraphrased the questions via back-translation
(Prabhumoye et al., 2018). We utilized EnglishGerman-English translation using Google Translate API. To inspect whether the degree of word overlap has decreased, we measure the average BLEU
score between the question and grounding table contents. It has dropped after paraphrase, from 7.28 × 10−2to 6.56 × 10−2. It is also notable that the variance of word distribution in the questions has increased from 2.3 × 105to 3.1 × 105through paraphrasing.
## 2.3 Quality Check
We then review the questions to ensure their quality as the final step. Authors manually reviewed 10k randomly selected questions, according to the following standards: 1) The intent of the original question should not be altered during any stage of the data construction process. 2) Every information added through the decontextualization process should be preserved after paraphrasing. It turned out that 7.9% of 10k randomly selected samples did not meet our criteria. Within the 7.9% error rate, we discovered that 70% of these errors were due to the ambiguity of the original question. As a result, errors stemming from our decontextualization and paraphrasing processes account for 2.3%
of the 10,000 random samples. The final test set, however, is composed only of the accepted samples during the quality review to ensure the integrity in the evaluation of the model performance. Error examples are reported in appendix A.3.
## 2.4 Data Statistics
As part of our dataset preparation process, we partitioned the entire dataset into train, validation, and test sets, with a ratio of 8:1:1. Consequently, the test set comprised 6,602 instances, as shown in Table 2. It is important to note that during this partitioning process, we ensured that each subset do not share any tables, enabling us to evaluate the generalizability of the models to previously unseen tables.
| Train | Valid | Test | Total | |
|----------------|---------|--------|---------|--------|
| # of questions | 53,819 | 6,602 | 6,602 | 67,023 |
| # of tables | 17,275 | 2,139 | 2,262 | 21,676 |
| corpus size | 24,680 | | | |
## 3 Experiments
First of all, we split tables into segments so that models can handle long tables within the limited input sequence length. Inspired by Karpukhin et al.
(2020), tables are split row-wise into 100-word chunks. Around 52% of tables in our corpus are split into multiple chunks, which resulted in a total of 54,282 table segments. For the retrieval task, each table is flattened and appended with the table descriptions, and then fed to a retriever. When a grounding table is split into multiple tables, all table segments that are relevant to an answer should be retrieved. Then, we perform end-to-end table QA where the model should answer the question given retrieved tables. More details about experimental settings are in Appendix B.
## 3.1 Retrieval
Experimental Setup We employ the BM25 algorithm (Robertson et al., 2009) for the sparse search.
| Encoder | Data | k=5 | k=10 | k=20 | |
|-------------|------------------|------------------|--------|--------|------|
| Text | Table | Original | 6.6 | 8.0 | 10.3 |
| BM25 | Decontextualized | 45.5 | 52.9 | 59.7 | |
| Paraphrased | 42.2 | 48.9 | 56.1 | | |
| Original | 25.0 | 34.1 | 45.1 | | |
| BERT | BERT | Decontextualized | 91.6 | 96.0 | 97.8 |
| Paraphrased | 89.5 | 95.0 | 97.3 | | |
| Original | 19.4 | 28.1 | 38.5 | | |
| BERT | TAPAS | Decontextualized | 88.2 | 94.5 | 97.3 |
| Paraphrased | 84.0 | 91.4 | 95.6 | | |
For the dense search, we utilize a dual-encoder approach: BERT (Devlin et al., 2018) and TAPAS
(Herzig et al., 2020) for the table encoder and BERT for the question encoder. They are trained to maximize the inner product between the question and table embeddings. The performance of the retriever is measured at different top-k retrieval accuracy, where we use 5, 10, and 20 for k. To analyze the effect of each data construction process on the retrieval task, we experiment with three different types of questions: original question, decontextualized question, and paraphrased question.
The result is shown in Table 3.
Result Our experiments demonstrate that decontextualizing led to improved performance in all experiments. This suggests that the original questions are not sufficient for table retrieval and decontextualization dramatically alleviates this problem.
However, the result also implies that table retrieval becomes too easy as the information is added directly to the question without any syntactic or semantic changes. This tendency is mitigated after paraphrasing, which led to a performance drop for all retrievers. Specifically, BM25 had the largest performance drop, while the methods utilizing language models had relatively smaller drops, demonstrating their robustness against linguistic variation.
These results suggest that word overlap between questions and tables is reduced and advanced semantic understanding is required. Additionally, when comparing the performance of BERT and TAPAS table encoders, retrieval performed better with BERT for all three types of data. As previously demonstrated by Wang et al. (2022) in the case of NQ-table, table retrieval does not necessarily require a table-specific model, a conclusion reconfirmed by Open-WikiTable.
| Method | Validation | Test | | | | |
|----------|--------------|--------|------|------|------|------|
| k=5 | k=10 | k=20 | k=5 | k=10 | k=20 | |
| Reader | 55.1 | 62.7 | 65.0 | 57.5 | 64.5 | 65.2 |
| Parser | 63.3 | 66.0 | 67.0 | 65.2 | 67.1 | 67.9 |
## 3.2 End-To-End Table Qa
Experimental Setup We experiment with two different methods: reader and parser. Conventionally, the parser only utilizes the table schema rather than the entire contents, as the question typically specifies the exact table value. However, in OpenWikiTable, the values are often paraphrased, requiring the parser to extract the exact value from the table contents (See Appendix C).
For end-to-end question answering, we adopt the retriever that yielded the highest performance in the previous experiment. The question and retrieved tables are concatenated and fed to QA models. Both reader and parser are implemented with the fusionin-decoder architecture (Izacard and Grave, 2020)
and the T5-*base* language model (Raffel et al.,
2020). We use the exact match accuracy (EM) for the evaluation metric. For the parser, EM is computed on the execution result of generated SQLs, as they can be expressed in a diverse form.
Result Table 4 summarizes validation and test results for end-to-end QA. As the retrieval performance improves with increased k, QA models, which rely on the retrieved tables, accordingly show consistent performance improvement with larger k. However, regardless of the number of k, the parser model outperforms the reader model.
This performance gap is most significant with small k, and decreases as k grows. We posit that this is due to the difference in the minimum amount of table segments that the reader and parser must refer to create an accurate answer. The parser model can generate a correct SQL query even when all segments of a table are not retrieved, as long as any of the retrieved splits possess all necessary cell values.
On the contrary, the reader model should refer to every relevant split to derive a correct answer.
For more detailed analysis, we categorize questions into easy or hard based on if the answer is derived from a single cell value, and into singletable or multi-table based on if the grounding table
| Category | Reader | Parser | # questions | |
|-------------|------------|----------|---------------|-------|
| Table-split | Complexity | | | |
| Single | Easy | 74.0 | 82.8 | 1,574 |
| Hard | 62.9 | 51.2 | 1,794 | |
| Multi | Easy | 70.5 | 82.5 | 1,520 |
| Hard | 56.0 | 58.8 | 1,714 | |
is split. The results are shown in Table 5. The parser outperforms the reader when the grounding table is split into multiple segments, regardless of question complexity, which aligns with the previous analysis. It is notable that the parser shows inferior or comparable performance to the reader for hard questions. We believe this is due to the relative size between WikiSQL (*i.e.* mostly easy) and WikiTableQuestions (*i.e.* mostly hard), and that the parser has a limited opportunity to understand the diversity of complex SQL queries.
## 4 Conclusion
We present Open-WikiTable, the first ODQA
dataset that requires complex reasoning over Wikipedia tables. The dataset is constructed by revising WikiTableQuestions and WikiSQL to be fully functional in the open-domain setting through decontextualization and paraphrasing. The dataset provides both textual and logical form answers for each question so that end-to-end reader and parser models can be trained. We hope that OpenWikiTable can provide new opportunities for future research such as investigating the effectiveness of leveraging both reader and parser approaches in the retrieval and generation phase.
## Limitations
Although we carefully designed Open-WikiTable for complex open-domain table QA, there are some limitations since it is based on the existing datasets.
First, ambiguous or erroneous samples from the original WikiSQL or WikiTableQuestions dataset may still lie in our training and validation set. As we mentioned in Section 3.2, most of the equivocal samples were attributed to the ambiguity of the original question and excluded from the test set, but not removed. Second, unlike semantic coverage of the questions is extended by decontextualization and paraphrasing, the coverage of the question remains in that the answer and logic to derive the answer in each question is the same. Still, OpenWikiTable demonstrates the potential for further research on open-domain QA over the table.
## Acknowledgements
This work was supported by SAMSUNG Research, Samsung Electronics Co., Ltd. and Institute of Information & Communications Technology Planning & Evaluation (IITP) grant (No.2019-000075), National Research Foundation of Korea
(NRF) grant (NRF-2020H1D3A2A03100945), and the Korea Health Industry Development Institute
(KHIDI) grant (No.HR21C0198), funded by the Korea government (MSIT, MOHW).
## Ethics Statement
No ethics concerned with our work.
## References
Wenhu Chen, Ming-Wei Chang, Eva Schlinger, William Wang, and William W Cohen. 2020. Open question answering over tables and text. *arXiv preprint* arXiv:2010.10439.
Eunsol Choi, Jennimaria Palomaki, Matthew Lamm, Tom Kwiatkowski, Dipanjan Das, and Michael Collins. 2021. Decontextualization: Making sentences stand-alone. *Transactions of the Association* for Computational Linguistics, 9:447–461.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. *arXiv preprint arXiv:1810.04805*.
Jonathan Herzig, Thomas Müller, Syrine Krichene, and Julian Martin Eisenschlos. 2021. Open domain question answering over tables via dense retrieval. arXiv preprint arXiv:2103.12011.
Jonathan Herzig, Paweł Krzysztof Nowak, Thomas Müller, Francesco Piccinno, and Julian Martin Eisenschlos. 2020. Tapas: Weakly supervised table parsing via pre-training. *arXiv preprint arXiv:2004.02349*.
Gautier Izacard and Edouard Grave. 2020. Leveraging passage retrieval with generative models for open domain question answering. arXiv preprint arXiv:2007.01282.
Vladimir Karpukhin, Barlas Oguz, Sewon Min, Patrick ˘
Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih. 2020. Dense passage retrieval for open-domain question answering. arXiv preprint arXiv:2004.04906.
Bogdan Kostic, Julian Risch, and Timo Möller. 2021. ´
Multi-modal retrieval of tables and texts using triencoder models. *arXiv preprint arXiv:2108.04049*.
Tom Kwiatkowski, Jennimaria Palomaki, Olivia Redfield, Michael Collins, Ankur Parikh, Chris Alberti, Danielle Epstein, Illia Polosukhin, Jacob Devlin, Kenton Lee, et al. 2019. Natural questions: a benchmark for question answering research. *Transactions of the* Association for Computational Linguistics, 7:453–
466.
Feifei Pan, Mustafa Canim, Michael Glass, Alfio Gliozzo, and Peter Fox. 2021. Cltr: An end-toend, transformer-based system for cell level table retrieval and table question answering. *arXiv preprint* arXiv:2106.04441.
Panupong Pasupat and Percy Liang. 2015. Compositional semantic parsing on semi-structured tables.
arXiv preprint arXiv:1508.00305.
Shrimai Prabhumoye, Yulia Tsvetkov, Ruslan Salakhutdinov, and Alan W Black. 2018. Style transfer through back-translation. arXiv preprint arXiv:1804.09000.
Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J Liu, et al. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. *J. Mach. Learn. Res.*, 21(140):1–67.
Stephen Robertson, Hugo Zaragoza, et al. 2009. The probabilistic relevance framework: Bm25 and beyond. Foundations and Trends® *in Information Retrieval*, 3(4):333–389.
Tianze Shi, Chen Zhao, Jordan Boyd-Graber, Hal Daumé III, and Lillian Lee. 2020. On the potential of lexico-logical alignments for semantic parsing to sql queries. *arXiv preprint arXiv:2010.11246*.
Roee Shraga, Haggai Roitman, Guy Feigenblat, and Mustafa Cannim. 2020. Web table retrieval using multimodal deep learning. In *Proceedings of the 43rd* International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 1399–1408.
Zhiruo Wang, Zhengbao Jiang, Eric Nyberg, and Graham Neubig. 2022. Table retrieval may not necessitate table-specific model design. arXiv preprint arXiv:2205.09843.
Tao Yu, Rui Zhang, Kai Yang, Michihiro Yasunaga, Dongxu Wang, Zifan Li, James Ma, Irene Li, Qingning Yao, Shanelle Roman, et al. 2018. Spider: A
large-scale human-labeled dataset for complex and cross-domain semantic parsing and text-to-sql task.
arXiv preprint arXiv:1809.08887.
Victor Zhong, Caiming Xiong, and Richard Socher.
2017. Seq2sql: Generating structured queries from natural language using reinforcement learning. arXiv preprint arXiv:1709.00103.
## A Data Construction Details A.1 Table Descriptions Re-Annotation
Figure 2 illustrates the indistinguishable annotation of the table corpus in WikiSQL and WikiTableQuestions, leading to ambiguity in the decontextualized questions. The figure on the right shows how the problem is solved by re-annotating the table descriptions.
## A.2 Construction Details
The prompt used by GPT-J for decontextualization can be found in Table 6. Table 7 shows examples of each step in the process of creating the OpenWikiTable.
## A.3 Error Analysis
Upon closer examination of the 7.9% error on generated Open-WikiSQL, we find that 70% of the errors were the results of ambiguity in the original questions, which was propagated over during the data construction process. The percentage of errors by the decontextualization process and paraphrasing process was 15% respectively. In Table 8, we provide examples for each type of error encountered.
## B Experimental Setup
| Game | Date | Team | Score | High points |
|--------|----------|--------|-----------|---------------|
| 1 | April 19 | Utah | W 113-100 | Kobe Bryant |
| 2 | April 21 | Utah | W 119-109 | Kobe Bryant |
## B.1 Flattened Table Format
In order to present the table as passages, we flattened the table and added table descriptions with the help of special tokens. For example, Page title : 2008-09 Los Angeles Lakers Section title : Playoffs Caption : First round Table ID : table_132938_29
## Is Flattened As
[Page Title] 2008-09 Los Angeles Lakers [Section Title] Playoffs [Caption] First round [table_id] table_132938_29 [Header]
Game [SEP] Date [SEP] Team [SEP] Score
[SEP] High points [Rows] [Row] 1 [SEP]
April 19 [SEP] Utah [SEP] W 113-100
[SEP] Kobe Bryant [Row] 2 [SEP] April 21
[SEP] Utah [SEP] W 119-109 [SEP] Kobe Bryant
## B.2 Hyperparameters
All experiments were on 8 NVIDIA A6000 48G
GPUs. For the retrieval models, we use a batch size of 64, with a learning rate of 1.0 e-5 using Adam and linear scheduling with a warm-up. The in-batch negative technique was utilized to train the retriever. We evaluated every 500 steps and used early stopping with patience 5. For the questionanswering module, we use batch size 8 for k = 5, 10 and batch size 4 for k = 20. The rest of the hyperparameters goes the same with the retriever.
## C Open-Wikitable With Parser
In the open-domain scenario, where the table is not specified a priori, questions may not contain the exact cell value to generate SQLs. As shown below, it is necessary to refer to the grounding table and use the exact value to generate the correct SQL.
| What is Born-Deceased if the term | |
|-------------------------------------|-------------------------------------------------------------------------------------------------------|
| Question | of office is December 4, 1941 in the list of Prime Ministers of Albania SELECT Born_Died From table_2 |
| SQL | WHERE Term_start = "4 December 1941" In the Gothic-Germanic strong verb, which part 2 has a verb |
| Question | meaning to jump? SELECT Part_2 FROM table_3 |
| SQL | WHERE Verb_meaning = "to leap" |
![7_image_0.png](7_image_0.png)
Page Title : Wake Forest Demon Deacons football, 1980–89 Section Title : Schedule Caption : 1987 Question : Who was the opponent when the result was L 0-14?
What is converted question using given information?
In 1987's schedule, who was the opponent of Wake Forest Demon Deacons when the result was L 0-14?
...
Page Title : Toronto Raptors all-time roster Section Title : A
Caption : A
Question : What is order S24's LNER 1946 number? What is converted question using given information? Considering the history of GER Class R24, what is order S24's LNER 1946 number?
...
Page Title : GER Class R24 Section Title : History Caption : Table of orders and numbers Question : What is order S24's LNER 1946 number?
What is converted question using given information? Considering the history of GER Class R24, what is order S24's LNER 1946 number?
...
Page Title : 2006–07 Toronto Raptors season Section Title : Game log Caption : February Question : Who had the highest number of rebounds on February 14?
What is converted question using given information? From 2006-07 Toronto Raptors' game log, who had the highest number of rebounds on February 14?
...
Page Title : Toronto Raptors all-time roster Section Title : O
Caption : O
Question : Which school was in Toronto in 2001-02?
What is converted question using given information? Which school was in Toronto in 2001-02 from Toronto Raptors all-time roster O?
...
Page Title : Stozhary Section Title : Stozhary 2003 Prize-Winners Caption : Stozhary 2003 Prize-Winners Question : What actor was nominted for an award in the film Anastasiya Slutskaya?
What is converted question using given information? For 2003 Stozhary prize winners, what actor was nominted for an award in the film Anastasiya Slutskaya?
...
Page Title : 1985 New England Patriots season Section Title : Regular season Caption : Regular season Question : How many weeks are there? What is converted question using given information? In 1985 New England Patriots season, how many weeks were there for regular season?
...
Page Title : Friday Night Lights (U.S. ratings)
Section Title : Weekly ratings Caption : Season 1 Question : What is the rank number that aired october 26, 2007? What is converted question using given information?
What is the rank number of Friday Night Lights Season 1's weekly ratings that aired october 26, 2007?
Table 6: The prompt used for GPT-J when decontextualizing the question 8293
| original | The nhl team new york islanders is what nationality? |
|------------------------------------------------------------------------------|-------------------------------------------------------------------------------------------------------------------------------|
| de-contexualized | The NHL team New York Islanders in what nationality 1994 NHL Entry Draft's Round one? |
| parapharsed | What nationality is the NHL team New York Islanders in the first round of the 1994 NHL Entry Draft? |
| original | What is the maximum starts that result in an average finish of 16.5? |
| de-contexualized | What is the maximum starts that result in an average finish of 16.5 for NASCAR Nationwide Series' Chad Little? |
| parapharsed | What are the maximum starts that result in a 16.5 average finish for NASCAR Nationwide Series' Chad Little? |
| original | If the population is 2188, what was the median household income? |
| de-contexualized | If the population is 2188 in Ohio locations ranked by per capita income, what was the median household income? |
| parapharsed | If Ohio's population is 2,188 ranked by per capita income, what was the median household income? |
| original | What values of HDTV correspond to n° of 862? |
| de-contexualized | From the list of television in Italy's Shopping section, what values of HDTV correspond to n° of 862? |
| parapharsed | Which HDTV values correspond to the number 862 in the TV list in the Italian shopping area? |
| original | How many stories is the torre reforma building? |
| de-contexualized | How many stories is the torre reforma building from the list of tallest buildings in Mexico's Under construction? |
| parapharsed | From the list of tallest buildings under construction in Mexico, how many floors does the Torre Reforma building have? |
| original | How many teams have a head coach named mahdi ali? |
| de-contexualized | How many teams has a head coach named mahdi ali among 2010–11 UAE Pro-League? |
| parapharsed | How many teams in UAE Pro-League 2010-11 have a head coach named Mahdi Ali? |
| original | Which Member has an Electorate of southern melbourne? |
| de-contexualized | Which Member has an Electorate of southern melbourne among Members of the Australian House of Representatives, 1903–1906? |
| parapharsed | Among the Members of the Australian House of Representatives, 1903–1906, which member does a south Melbourne electorate have? |
| original | Which position had fewer rounds than 3, and an overall of less than 48? |
| de-contexualized | Which position among 2007 Jacksonville Jaguars draft history had fewer rounds than 3, and an overall of less than 48? |
| parapharsed | Which position in the 2007 Jacksonville Jaguars draft history had less than 3 rounds and less than 48 overall? |
| original | How many numbers of dances for place 1? |
| de-contexualized | How many numbers of dances for place 1 for Dancing on Ice (series 5)? |
| parapharsed | How many dances for 1st place for Dancing on Ice (series 5) ? |
| Table 7: Examples of each step in the process of creating the Open-WikiTable | |
| Error type 1 (70%) | Ambiguity in the original question |
|---------------------------------------------------------------------|-------------------------------------------------------------------------------------------------------------------------------------|
| original | Name the 2005 with 2007 of sf |
| de-contextualized | Name the 2005 with 2007's Doubles name of sf among Alicia Molik? |
| parapharsed | Do you name the 2005s with 2007 doubles names from sf under Alicia Molik? |
| Error type 2 (15%) | Change of intent after decontextualizing |
| original | how many total rounds did she fight in ? |
| de-contextualized | How many total rounds did she fight for Gina Carano? |
| parapharsed | How many rounds did she fight for Gina Carano in total? |
| Error type 3 (15%) | Change of intent after paraphrasing |
| original | Which Bask has an Indoor track of 0, and a Swimming of 5? |
| de-contextualized | Which bask has an indoor track of 0,and a swimming of 5 for horizon league's women's sports championship totals? |
| parapharsed | Which pool has an indoor stretch of 0 and a swim of 5 for the total number of women's athletic championships in the horizon league? |
| Table 8: Error analysis on the construction stage of Open-WikiTable | |
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
✓ A1. Did you describe the limitations of your work?
Limitations
✗ A2. Did you discuss any potential risks of your work?
There is no potetional risks for our work.
✓ A3. Do the abstract and introduction summarize the paper's main claims?
Abstract, 1.Introduction
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✓ **Did You Use Or Create Scientific Artifacts?** 2. Data Construction 3. Experiments
✓ B1. Did you cite the creators of artifacts you used?
2. Data Construction 3. Experiments
✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
2. Data Construction 3. Experiments
✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
2. Data Construction 3. Experiments B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
Not applicable. Left blank.
B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
Not applicable. Left blank.
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
3. Experiments A. Data Construction Details
## C ✓ **Did You Run Computational Experiments?** 3. Experiments
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used? B. Experimental Setup The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance.
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
B. Experimental Setup
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
3. Experiments A. Data Construction Details B. Experimental Setup
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
3. Experments D ✓ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
2.4 Quality Check D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
Not applicable. Left blank.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? Not applicable. Left blank.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
Not applicable. Left blank.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
Not applicable. Left blank.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
Not applicable. Left blank. |
pan-etal-2023-context | What In-Context Learning {``}Learns{''} In-Context: Disentangling Task Recognition and Task Learning | https://aclanthology.org/2023.findings-acl.527 | Large language models (LLMs) exploit in-context learning (ICL) to solve tasks with only a few demonstrations, but its mechanisms are not yet well-understood. Some works suggest that LLMs only recall already learned concepts from pre-training, while others hint that ICL performs implicit learning over demonstrations. We characterize two ways through which ICL leverages demonstrations. Task recognition (TR) captures the extent to which LLMs can recognize a task through demonstrations {--} even without ground-truth labels {--} and apply their pre-trained priors, whereas task learning (TL) is the ability to capture new input-label mappings unseen in pre-training. Using a wide range of classification datasets and three LLM families (GPT-3, LLaMA and OPT), we design controlled experiments to disentangle the roles of TR and TL in ICL. We show that (1) models can achieve non-trivial performance with only TR, and TR does not further improve with larger models or more demonstrations; (2) LLMs acquire TL as the model scales, and TL{'}s performance consistently improves with more demonstrations in context. Our findings unravel two different forces behind ICL and we advocate for discriminating them in future ICL research due to their distinct nature. | # What In-Context Learning "Learns" In-Context: Disentangling Task Recognition And Task Learning
Jane Pan Tianyu Gao Howard Chen Danqi Chen Department of Computer Science, Princeton University
{jp7224,tianyug,howardchen,danqic}@cs.princeton.edu
## Abstract
Large language models (LLMs) exploit incontext learning (ICL) to solve tasks with only a few demonstrations, but its mechanisms are not yet well-understood. Some works suggest that LLMs only recall already learned concepts from pre-training, while others hint that ICL performs implicit learning over demonstrations. We characterize two ways through which ICL leverages demonstrations. *Task* recognition (TR) captures the extent to which LLMs can recognize a task through demonstrations - even without ground-truth labels - and apply their pre-trained priors, whereas *task* learning (TL) is the ability to capture new input-label mappings unseen in pre-training.
Using a wide range of classification datasets and three LLM families (GPT-3, LLaMA and OPT), we design controlled experiments to disentangle the roles of TR and TL in ICL. We show that (1) models can achieve non-trivial performance with only TR, and TR does not further improve with larger models or more demonstrations; (2) LLMs acquire TL as the model scales, and TL's performance consistently improves with more demonstrations in context. Our findings unravel two different forces behind ICL and we advocate for discriminating them in future ICL research due to their distinct nature.1
## 1 Introduction
Large language models (LLMs) have demonstrated the ability to perform in-context learning (ICL),
i.e., "learning" to perform a task purely from examples in the context without any parameter updates (Brown et al., 2020). This powerful and flexible phenomenon enables LLMs to be used as general-purpose models that can perform any task with a small set of labeled examples.
However, there is still no consensus on how incontext learning works. Some previous work hy1Our code is publicly available at https://github.com/
princeton-nlp/WhatICLLearns.
![0_image_0.png](0_image_0.png)
pothesizes that during pre-training, LLMs implicitly learn tasks required for downstream applications, and the in-context demonstrations merely provide information that allow the model to recognize which task is required (Xie et al., 2022).
Min et al. (2022) show empirical evidence of this hypothesis by demonstrating that ICL performance is insensitive to the usage of ground-truth labels.
On the other hand, Akyürek et al. (2023);
von Oswald et al. (2022) construct theories that Transformer-based models may perform implicit gradient descent to update an "inner-model", and Dai et al. (2023) demonstrate similarities between in-context learning and explicit fine-tuning through a series of metrics on real-world datasets. Such hypotheses assume the correct input-output mappings are important and ICL actually performs implicit learning over demonstrations.
In this paper, we disentangle ICL into **task**
recognition (TR), which recognizes the task from demonstrations and applies LLMs' pre-trained priors, and **task learning** (TL), which learns a new input-label mapping from demonstrations. In common ICL scenarios where ground-truth labels are provided, TR and TL take effect simultaneously.
We propose two settings to tease them apart: 1)
RANDOM, where the labels are uniformly sampled from the label space (Min et al., 2022), in order to restrict LLMs to only apply TR; 2) ABSTRACT,
where the labels are replaced with abstract symbols
(e.g., numbers or letters) that never co-occurred with the inputs in pre-training. We focus on how the two abilities in ICL evolve with two factors –
model sizes and *numbers of demonstrations*, which have been neglected in related literature.
Through extensive experiments with a series of classification datasets on GPT-3 (Brown et al.,
2020), LLaMA (Touvron et al., 2023), and OPT (Zhang et al., 2022), we find:
- The gap between GOLD and RANDOM is small with smaller models, corroborating with Min et al.
(2022). However, with larger models and more examples, the gap becomes larger. This suggests TR plays a significant role in ICL, but it does not scale with increasing parameters or examples.
- LLMs also perform TL, which emerges with larger models and more demonstrations. With the largest model and more than 16 examples, ABSTRACT outperforms RANDOM, pointing to a paradigm shift in in-context learning at scale.
Together, our findings provide a better way to understand ICL behaviors.2
## 2 Task Recognition And Task Learning
An LLM (parameterized by θ) performs ICL by conditioning on the input-label pair demonstrations Ddemo = (x1, y1, x2, y2, . . . , xK, yK) and the test input xtest to predict the label ytest ∼ pθ(y | Ddemo , xtest), where the demonstrations elicit a mapping f : X → Y, x ∈ X , y ∈ Y. We delineate two ways an LLM can leverage in-context demonstrations: *task recognition* and *task learning*.
Task recognition (TR) represents models' ability to recognize the mapping f purely by observing the input distribution {xi}
K
i=1 and the label distribution {yi}
K
i=1, without the provided (xi, yi) pairs.
The LLM then applies its pre-trained priors to the recognized f. Formally, when only TR is enabled,
$$\begin{array}{c}{{p_{\theta}(y\mid x_{\mathrm{test}},\{x_{i},y_{i}\}_{i=1}^{K})}}\\ {{=p_{\theta}(y\mid x_{\mathrm{test}},\{x_{i}\}_{i=1}^{K},\{y_{i}\}_{i=1}^{K}),}}\end{array}$$
which suggests TR does not rely on the pair information. For example, an input distribution of movie reviews and a label distribution of "The sentiment is positive/negative" can be easily recognized as a sentiment classification task due to their prevalence during pre-training, and LLMs can make reasonable predictions without explicitly "learning" the task via ground-truth demonstrations. This leads to observations that the model can still perform well even when we provide wrong input-label mappings, e.g., "The movie is great. The sentiment is *negative*" (Min et al., 2022). Task learning (TL), on the other hand, characterizes how the model learns a new mapping from the input-label pairs through demonstrations. Unlike TR, TL allows models to learn novel mappings and thus correct input-label pairs will be crucial.
We posit that the two mechanisms occur under separate conditions, as recognizing an already learned task is easier than learning a new mapping.
Models are able to perform TR at a small scale, but this ability does not drastically improve with increasing model sizes and demonstrations; on the other hand, TL improves significantly when model sizes and numbers of demonstrations increase. To show the above phenomenon, we disentangle TR
and TL through *label space manipulation*, including three different setups (examples in Figure 1):
- GOLD: the standard ICL setting where we use natural prompts and gold input-label pairs. This setup reflects both TR and TL abilities.
- RANDOM: similar to Min et al. (2022), we use the same natural prompts as GOLD and sample demonstration labels uniformly at random from the label space. This setup reflects TR only.
- ABSTRACT: we use minimal prompts (which provide no task information) and characters with no clear semantic meanings (e.g. numbers, letters, and random symbols) as the label for each class.
We found that even abstract labels may have biases in pre-training, e.g., "0" is biased towards negative. Hence, for *each* prompt x1, y1, . . . , xK, yK,
we randomly sample a 1-1 mapping φ : *Y → Y*∗
to avoid any bias, and no task-specific information is leaked in either the prompt template or the label
![2_image_0.png](2_image_0.png)
space. To evaluate the model's ABSTRACT performance, we measure its accuracy using φ(ytest) as target labels. Since these input-label mappings are never seen in pre-training, it reflects the TL ability.
In the following sections, we conduct comprehensive experiments with the above three different settings under two axes - model sizes and numbers of demonstrations - and show how TR and TL manifest under different conditions.
## 3 Experimental Setup 3.1 Datasets
We experiment on 16 classification datasets across 4 type of tasks: sentiment analysis, toxicity detection, natural language inference/paraphrase detection, and topic/stance classification. All datasets and references are in Appendix A. Our dataset selection largely follows Min et al. (2022), but we exclude multi-choice datasets since it is difficult to apply our ABSTRACT experiments on them.
## 3.2 Models
We use three state-of-the-art LLM families: GPT3 (Brown et al., 2020), LLaMA (Touvron et al.,
2023), and OPT (Zhang et al., 2022). We use GPT-3 ada (350M), babbage (1.3B), curie (6.7B),
and davinci (175B) via the OpenAI API. For OPT, we use checkpoints from the Transformers library (Wolf et al., 2020), with model sizes of 350M,
2.7B, 6.7B, 13B, 30B, and 66B parameters. For LLaMA, we use model sizes of 7B, 13B, 33B, and 65B parameters.3
## 3.3 Task Setup
We adopt the sample-based evaluation protocol:
for each test example, we sample a different set of demonstrations from the training set. We manually design 3 prompt templates for each type of classification tasks in a similar style to the prompts from Min et al. (2022). We report the mean by averaging across datasets and prompts, and standard variation across different prompts for each datapoint. For GPT-3, we sample 150 examples for each dataset. We use fewer examples due to budget constraints, and GPT-3 presents lower variance than other model families. For OPT and LLaMA,
we sample 1,350 examples for all datasets.
3For GPT-3, we use the non-instruction legacy models for fair comparison to OPT and LLaMA models. We did not run experiments on the largest OPT-175B model due to computational constraints.
We design two kinds of prompts: *natural language prompts* (Table 1), which are similar to the manual prompts in Min et al. (2022), and *minimal* prompts (Table 3), which remove any natural language instructions for the task. For ABSTRACT,
we tested three types of label choices: *numbers*
(0*, . . . , N* − 1, where N is the number of classes),
letters (N letters from A, B, C, *. . .* ), and *symbols*
(first N symbols of "@", "\#", "$", "%", "*", and
"∧"). For each test example, we randomly sample a new mapping between labels and abstract characters. We report the *number* abstract labels in all the main results and compare the three forms in §4.2.
## 4 Results
Figure 2 shows our main results with GPT-3, LLaMA, and OPT with our 3 settings: GOLD,
RANDOM, and ABSTRACT. Below we summarize the trends of TR and TL across different conditions.
## 4.1 Main Results
Summary of overall trends. We first verify that GOLD consistently performs the best across model families and number of demonstrations, which is expected given that the GOLD setting provides the model with all information. Overall, the RANDOM
curves do not increase with either model sizes or number of demonstrations, remaining largely flat; considering the scenario with *small* model sizes and few examples (K = 8), there is an insignificant gap between RANDOM and GOLD. Meanwhile, the ABSTRACT curves demonstrate an increasingly steep slope as the model sizes and the number of demonstrations grow; with small models or small K, ABSTRACT mostly underperforms RANDOM,
whereas ABSTRACT with largest models and K =
32 performs well above RANDOM (and may even be competitive with GOLD). We note that the OPT curves demonstrate significant variance, which we hypothesize to be a result of the models potentially being under-trained. We elaborate the takeaways on TR and TL below.
Task recognition is a broader capability across scales. For all model families, the RANDOM setting shows similar performance at all sizes and numbers of demonstrations. Moreover, TR performance is significantly stronger than the random baseline, even with small models and few examples. For instance, even the smallest 350M parameter models are able to recognize the task using
![3_image_0.png](3_image_0.png)
![3_image_1.png](3_image_1.png)
just 8 examples, drawing around 10 points of average performance lead against the random baseline for GPT-3 ada and 5 points for OPT-350M. This shows that task recognition from in-context examples does not drastically scale with model sizes or numbers of examples.
Task learning is enabled with scale. We observe that TL is dependent on model sizes: smaller models perform roughly the same across all numbers of demonstrations (see Figure 6). On the other hand, larger models can utilize the provided mapping information and perform TL, as ABSTRACT
(TL) performance increases drastically with larger sizes (first row of Figure 2). When using a larger model, the results also improve as the number of demonstration increases (second row of Figure 2).
With only 16 examples, OPT-66B and davinci are able to match the performance of GOLD while using a new label mapping. While LLaMA-65B's ABSTRACT is not as competitive as its GOLD, the trend of improving ABSTRACT performance with larger size s or larger K is clear. This suggests that TL is only enabled by scales and further improves with more demonstrations.
## 4.2 Further Analysis The Trends For Task Learning Generalize Across
different types of abstract labels. In Figure 3, we show ABSTRACT results with number, letter, and symbol labels respectively. We observe that all three versions show a similar trend and coincide with our main results. Numbers and letters perform consistently better than symbols. This may be because letters and numbers appear more frequently in the pre-training corpus, and therefore make for a more "natural" label space.
## Task Difficulty Affects The Trends. We Notice That
ABSTRACT scales better with sizes and examples when the task is simpler. In Figure 4 we compare two types of tasks: sentiment analysis and natural language inference (NLI). Since NLI is more difficult, we observe that it produces a flatter AB-STRACT curve, suggesting that the model relies more on the natural prompts and pre-training priors to solve those tasks. We demonstrate the full task-type breakdown results in §C.
## 5 Related Work
Many works have attempted to deepen empirical or theoretical understanding of ICL since its emergence in Brown et al. (2020). For instance, Xie et al. (2022) present a theoretical framework where latent "concepts" parameterize each document in pre-training. They posit that all concepts have been learned in pre-training; thus, ICL is the result of implicit Bayesian inference, where the LM uses incontext demonstrations as evidence to identify the correct concept. Min et al. (2022) present empirical evidence for this framework by showing that only limited information, rather than true input-label mappings, is needed to perform ICL.
Other works investigate the impact of the pretraining corpus on ICL. Chan et al. (2022) identify properties of the pre-training distribution that enable ICL behavior, including burstiness, label multiplicity, and a long-tailed class distribution - all of which are satisfied by natural language. Razeghi et al. (2022) show that the frequencies of terms in the pre-training corpora is positively correlated with model performance. Kirsch et al. (2022) show that both a rich training distribution and a sufficiently large model are critical to the development of in-context learning abilities.
More recently, several works have explored theoretical frameworks in which ICL can be seen as implicit gradient descent, treating a forward pass over the in-context demonstrations as an "update" to an implicit internal model. (Akyürek et al., 2023; von Oswald et al., 2022; Dai et al., 2023). For mechanistic perspectives on ICL, Olsson et al. (2022)
and Bansal et al. (2022) identify induction heads
(subnetworks that perform in-context pattern recognition) in small and large models, respectively.
While our conclusions align with aspects of previous studies, our work contributes novel insights on multiple axes. Min et al. (2022) also show that even small models can perform TR and argue that the performance gap between GOLD and RANDOM
is consistently small, but most of their experiments are on ≤13B models with 16 demonstrations; we show that as model sizes scale, GOLD tends to improve while RANDOM does not. Thus, the performance deficit of RANDOM grows as models become larger. Yoo et al. (2022) also perform similar experiments to RANDOM and ABSTRACT, but they do not deeply investigate the effect of model sizes or numbers of demonstrations. Contemporary work Wei et al. (2023) obtain similar results; additionally, they show that instruction-tuning strengthens the model's semantic priors more than it improves TL.
However, they primarily focus on closed-source models, whereas we also conduct experiments on public models such as LLaMA and OPT. Collectively, our findings offer a comprehensive understanding of how ICL works across scales.
## 6 Conclusion
While previous work often studies ICL as an umbrella term, regardless of model sizes and numbers of examples, we argue that there are two distinct characterizations of ICL - task recognition and task learning - and demonstrate that they emerge under different conditions. Even small models are capable of performing TR, but this ability does not scale.
On the other hand, TL is an emergent ability of large models; small models are unable to perform TL even when provided with more demonstrations, whereas large models can leverage more demonstations to improve their TL performance. We suggest that future work on ICL should distinguish the two phenomena and clearly state the conditions under which the experiments are conducted.
## Limitations
Though LLMs with in-context learning are capable of all kinds of NLP tasks, this work is limited to classification tasks because they are easier to be adapted to our RANDOM and ABSTRACT setup.
We leave other types of NLP tasks as future work.
Another limitation of our work lies in the definition and discussion of task learning. Though we empirically show that large models are capable of acquiring a novel mapping to abstract labels like numbers or letters, how models "learn" mechanistically is still elusive. As suggested in previous work, LLMs may conduct implicit gradient descent over demonstrations, or they may alternatively map the patterns shown in the demonstrations back to concepts learned in pre-training. To some extent, these mechanisms could be considered an advanced form of "task recognition". This work only designs experiments to better observe and disentangle TR and TL, and we look forward to further studies that reveal more insights about the mechanistic innerworkings of these phenomena in ICL.
## Acknowledgements
We thank the members of the Princeton NLP group for their valuable advice, thoughts, and discussions.
We also appreciate the helpful feedback given by the anonymous reviewers and the area chairs. This project was partially supported by the National Science Foundation under Award IIS-2211779, and a Sloan Fellowship.
## References
Ekin Akyürek, Dale Schuurmans, Jacob Andreas, Tengyu Ma, and Denny Zhou. 2023. What learning algorithm is in-context learning? investigations with linear models. In International Conference on Learning Representations (ICLR).
Hritik Bansal, Karthik Gopalakrishnan, Saket Dingliwal, Sravan Bodapati, Katrin Kirchhoff, and Dan Roth. 2022. Rethinking the role of scale for in-context learning: An interpretability-based case study at 66 billion scale. arXiv preprint arXiv:2212.09095.
Valerio Basile, Cristina Bosco, Elisabetta Fersini, Debora Nozza, Viviana Patti, Francisco Manuel Rangel Pardo, Paolo Rosso, and Manuela Sanguinetti. 2019. SemEval-2019 task 5: Multilingual detection of hate speech against immigrants and women in Twitter. In Proceedings of the 13th International Workshop on Semantic Evaluation, pages 54–63. Association for Computational Linguistics.
Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. 2015. A large annotated corpus for learning natural language infer-
ence. In Empirical Methods in Natural Language Processing (EMNLP), pages 632–642.
Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel HerbertVoss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel Ziegler, Jeffrey Wu, Clemens Winter, Chris Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei.
2020. Language models are few-shot learners. In Advancesin Neural Information Processing Systems
(NeurIPS), volume 33, pages 1877–1901.
Stephanie Chan, Adam Santoro, Andrew Lampinen, Jane Wang, Aaditya Singh, Pierre Richemond, James McClelland, and Felix Hill. 2022. Data distributional properties drive emergent in-context learning in transformers. In Advances in Neural Information Processing Systems (NeurIPS), volume 35, pages 18878–18891.
Damai Dai, Yutao Sun, Li Dong, Yaru Hao, Shuming Ma, Zhifang Sui, and Furu Wei. 2023. Why can GPT learn in-context? language models implicitly perform gradient descent as meta-optimizers.
In ICLR 2023 Workshop on Mathematical and Empirical Understanding of Foundation Models.
William B. Dolan and Chris Brockett. 2005. Automatically constructing a corpus of sentential paraphrases.
In Proceedings of the Third International Workshop on Paraphrasing (IWP2005).
Louis Kirsch, James Harrison, Jascha Sohl-Dickstein, and Luke Metz. 2022. General-purpose in-context learning by meta-learning transformers. arXiv preprint arXiv:2212.04458.
Hector Levesque, Ernest Davis, and Leora Morgenstern. 2012. The winograd schema challenge. In Thirteenth International Conference on the Principles of Knowledge Representation and Reasoning.
P. Malo, A. Sinha, P. Korhonen, J. Wallenius, and P. Takala. 2014. Good debt or bad debt: Detecting semantic orientations in economic texts. Journal of the Association for Information Science and Technology.
Marco Marelli, Stefano Menini, Marco Baroni, Luisa Bentivogli, Raffaella Bernardi, and Roberto Zamparelli. 2014. A SICK cure for the evaluation of compositional distributional semantic models. In International Conference on Language Resources and Evaluation (LREC), pages 216–223.
Sewon Min, Xinxi Lyu, Ari Holtzman, Mikel Artetxe, Mike Lewis, Hannaneh Hajishirzi, and Luke Zettlemoyer. 2022. Rethinking the role of demonstrations:
What makes in-context learning work? In Empirical
Methods in Natural Language Processing (EMNLP),
pages 11048–11064.
Saif Mohammad, Felipe Bravo-Marquez, Mohammad Salameh, and Svetlana Kiritchenko. 2018. SemEval2018 task 1: Affect in tweets. In Proceedings of the 12th International Workshop on Semantic Evaluation, pages 1–17. Association for Computational Linguistics.
Ioannis Mollas, Zoe Chrysopoulou, Stamatis Karlos, and Grigorios Tsoumakas. 2020. Ethos: an online hate speech detection dataset. arXiv preprint arXiv:2006.08328.
Catherine Olsson, Nelson Elhage, Neel Nanda, Nicholas Joseph, Nova DasSarma, Tom Henighan, Ben Mann, Amanda Askell, Yuntao Bai, Anna Chen, et al. 2022. In-context learning and induction heads.
arXiv preprint arXiv:2209.11895.
Yasaman Razeghi, Robert L Logan IV, Matt Gardner, and Sameer Singh. 2022. Impact of pretraining term frequencies on few-shot numerical reasoning. In Findings of Empirical Methods in Natural Language Processing (EMNLP), pages 840–854. Association for Computational Linguistics.
Elvis Saravia, Hsien-Chi Toby Liu, Yen-Hao Huang, Junlin Wu, and Yi-Shin Chen. 2018. CARER: Contextualized affect representations for emotion recognition. In Empirical Methods in Natural Language Processing (EMNLP), pages 3687–3697.
Emily Sheng and David Uthus. 2020. Investigating societal biases in a poetry composition system. In Proceedings of the Second Workshop on Gender Bias in Natural Language Processing, pages 93–106.
Association for Computational Linguistics.
Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D. Manning, Andrew Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In Empirical Methods in Natural Language Processing (EMNLP), pages 1631–1642.
Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, Aurelien Rodriguez, Armand Joulin, Edouard Grave, and Guillaume Lample. 2023.
Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971.
Johannes von Oswald, Eyvind Niklasson, Ettore Randazzo, João Sacramento, Alexander Mordvintsev, Andrey Zhmoginov, and Max Vladymyrov. 2022.
Transformers learn in-context by gradient descent.
arXiv preprint arXiv:2212.07677.
Ellen M Voorhees and Dawn M Tice. 2000. Building a question answering test collection. In Association for Computing Machinery Special Interest Group in Information Retrieval (ACM SIGIR), pages 200– 207.
Jerry Wei, Jason Wei, Yi Tay, Dustin Tran, Albert Webson, Yifeng Lu, Xinyun Chen, Hanxiao Liu, Da Huang, Denny Zhou, et al. 2023. Larger language models do in-context learning differently.
arXiv preprint arXiv:2303.03846.
Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Transformers: State-of-the-art natural language processing. In Empirical Methods in Natural Language Processing (EMNLP), pages 38–45.
Sang Michael Xie, Aditi Raghunathan, Percy Liang, and Tengyu Ma. 2022. An explanation of in-context learning as implicit bayesian inference. In International Conference on Learning Representations (ICLR).
Kang Min Yoo, Junyeob Kim, Hyuhng Joon Kim, Hyunsoo Cho, Hwiyeol Jo, Sang-Woo Lee, Sanggoo Lee, and Taeuk Kim. 2022. Ground-truth labels matter: A deeper look into input-label demonstrations. In Empirical Methods in Natural Language Processing (EMNLP), pages 2422–2437.
Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen, Christopher Dewan, Mona Diab, Xian Li, Xi Victoria Lin, et al.
2022. OPT: Open pre-trained transformer language models. arXiv preprint arXiv:2205.01068.
## A Datasets
We use a total of 16 datasets. **Sentiment analysis** includes SST-2 (Socher et al., 2013), financial_phrasebank (Malo et al., 2014), emotion (Saravia et al., 2018), and poem_sentiment (Sheng and Uthus, 2020) **Topic/stance classification**
includes TREC (Voorhees and Tice, 2000),
tweet_eval_atheist, and tweet_eval_feminist (Mohammad et al., 2018; Basile et al., 2019).
Toxicity detection includes tweet_eval_hate, ethos_race, ethos_gender, ethos_national_origin, and ethos_religion (Mollas et al., 2020) **Natural language inference/paraphrase detection** includes SICK (Marelli et al., 2014), SNLI (Bowman et al., 2015), WNLI (Levesque et al., 2012), and MRPC (Dolan and Brockett, 2005).
We sample from the training set to construct the prompts; following Min et al. (2022), we use the development set for evaluation, using sampled max(1350, dataset_size) examples.
## B Prompt Templates
For each task category (e.g. sentiment classification, topic detection), we manually design three natural language templates. Depending on exact specifications for the dataset, templates may be adjusted to better reflect the task (e.g. "Is this atheist?" for tweet_eval_atheist). We apply these templates to the natural language label sets (GOLD and RANDOM). All prompts are presented in Table 1.
We also design two task-agnostic variations on three minimal templates for ABSTRACT: one for single-sentence tasks and one for multi-sequence tasks (e.g. NLI tasks). We use these minimal templates on the abstract language label sets in order to prevent the model from being exposed to any information regarding the task from the prompt design.
All minimal templates are presented in Table 3 All prompts are designed to be answered with single-token responses (e.g. "Yes/No",
"True/False", "positive/negative/neutral", "0/1/2", "A/B/C") so that we can directly check models' last token prediction results instead of applying decoding methods.
## C More Results
We demonstrate average model performance with respect to number of parameters in Figure 5. It is clear that small models struggle to perform ABSTRACT, regardless of how many examples, whereas the largest models (especially GPT-3 Davinci and OPT-66B) are able to perform AB-STRACT. Additionally, their performance improves even more when more demonstrations are provided.
We demonstrate average model performance with respect to numbers of demonstrations in Figure 6. We can see a clear trend that RANDOM (TR)
does not change much but ABSTRACT improves drastically with more examples, especially for GPT3 Davinci and OPT-66B.
Figure 7 shows all the ABSTRACT results and demonstrates a similar trend to what §4.2 describes.
Figure 8, Figure 9, Figure 10, and Figure 11 show task-type breakdown results. Though individual task-type results are more noisy, we can make a similar observation compared to the main result - ABSTRACT (TL) scales better with sizes and numbers of examples compared to RANDOM
(TR).
| Type | Template # | Example |
|--------------------------------------------------------------------------------------------------------------|----------------------------------------------------------------------------------|---------------------------------------------------------|
| 1 | <s> The sentiment is <positive/negative> | |
| Sentiment Analysis | 2 | <s> Sentiment: <positive/negative> |
| 3 | <s> The sentiment of the text is <positive/negative> | |
| 1 | <s> Is this hate speech? <Yes/No> | |
| Hate Speech | 2 | <s> Is the sentence hateful? <Yes/No> |
| 3 | <s> The sentence contains hate speech. True or False? The answer is <True/False> | |
| 1 | <s> The stance is feminist. True or False? The answer is <True/False> | |
| Stance Detection | 2 | <s> Does the sentence express a feminist view? <Yes/No> |
| 3 | <s> Is the stance feminist? <Yes/No> | |
| 1 | <s> The topic is <label> | |
| Topic Detection | 2 | <s> The sentence is about <label> |
| 3 | <s> Sentence topic: <label> | |
| Table 1: Natural prompts used as input in GOLD and RANDOM settings for single-sentence datasets. <s> denotes | | |
Table 1: Natural prompts used as input in GOLD and RANDOM settings for single-sentence datasets. <s> denotes
the input sequence; labels are illustrated in red.
| Type | Temp. # | Example <s1> The question is: <s2>? |
|----------------------|-------------------------------------------------------------------------------------------------------------------------------------------|---------------------------------------|
| 1 | True or False? The answer is <True/False> Hypothesis: <s1> Premise: <s2>? | |
| 2 | Do the sentences show entailment? <Yes/No> | |
| Entailment | The hypothesis is: <s1> The premise is: <s2>? | |
| 3 | Is this entailment? <Yes/No> <s1> The question is: <s2> | |
| 1 | True, False, or Unknown? The answer is <True/False/Unknown> Hypothesis: <s1> Premise: <s2>? | |
| 2 | Given the premise, is the hypothesis true? Yes, No, or Unknown? The answer is: <Yes/No/Unknown> | |
| NLI | The hypothesis is: <s1> The premise is: <s2>? | |
| 3 | According to the premise, the hypothesis is true. True, False, or Unknown? The answer is: <True/False/Unknown> <s1> The question is: <s2> | |
| 1 | True or False? The answer is: <True/False/> Sentence 1: <s1> Sentence 2: <s2> | |
| 2 | These sentences are paraphrases. True or False? The answer is: <True/False/> | |
| Paraphrase Detection | Text: <s1> Consider this sentence: <s2> | |
| 3 | Does it paraphrase the text? <Yes/No> | |
| Type | Template # | Example <sentence> |
|-------------------|----------------------------------------------|-------------------------------|
| 1 | <label> | |
| Minimal (single | 2 | <sentence> |
| sentence) | Label: <label> Sentence: sentence> | |
| 3 | Label: <label> <sentence1> [SEP] <sentence2> | |
| 1 | <label> | |
| Minimal (multiple | 2 | <sentence1> [SEP] <sentence2> |
| sentences) | Label: <label> Sentence 1: <sentence1> | |
| 3 | Sentence 2: <sentence2> Label: <label> | |
![10_image_0.png](10_image_0.png)
tweet_eval_hate tweet_eval_atheism tweet_eval_feminist sick
Random Abstract Gold Random Abstract Gold Random Abstract Gold Random Abstract Gold
ada 0.52 0.51 0.54 0.45 0.23 0.4 0.4 0.38 0.41 0.44 0.34 0.43
babbage 0.51 0.52 0.54 0.38 0.37 0.43 0.46 0.29 0.49 0.53 0.34 0.57 curie 0.55 0.54 0.6 0.28 0.33 0.32 0.39 0.32 0.4 0.56 0.36 0.56
davinci 0.56 0.55 0.59 0.34 0.33 0.33 0.4 0.4 0.38 0.4 0.39 0.44
financial_phrasebank ethos_race ethos_gender ethos_religion
Random Abstract Gold Random Abstract Gold Random Abstract Gold Random Abstract Gold
ada 0.23 0.4 0.4 0.38 0.41 0.44 0.34 0.43 0.56 0.39 0.64 0.62
babbage 0.37 0.43 0.46 0.29 0.49 0.53 0.34 0.57 0.45 0.39 0.55 0.54
curie 0.33 0.32 0.39 0.32 0.4 0.56 0.36 0.56 0.54 0.42 0.63 0.53 davinci 0.33 0.33 0.4 0.4 0.38 0.4 0.39 0.44 0.4 0.56 0.44 0.52
ethos_national_origin snli sst2 trec
Random Abstract Gold Random Abstract Gold Random Abstract Gold Random Abstract Gold
ada 0.41 0.44 0.34 0.43 0.56 0.39 0.64 0.62 0.52 0.71 0.62 0.57
babbage 0.49 0.53 0.34 0.57 0.45 0.39 0.55 0.54 0.51 0.52 0.58 0.56 curie 0.4 0.56 0.36 0.56 0.54 0.42 0.63 0.53 0.52 0.6 0.48 0.54
davinci 0.38 0.4 0.39 0.44 0.4 0.56 0.44 0.52 0.51 0.54 0.47 0.52
rte wnli mrpc poem
Random Abstract Gold Random Abstract Gold Random Abstract Gold Random Abstract Gold
ada 0.56 0.39 0.64 0.62 0.52 0.71 0.62 0.57 0.76 0.68 0.54 0.74
babbage 0.45 0.39 0.55 0.54 0.51 0.52 0.58 0.56 0.62 0.63 0.51 0.61
curie 0.54 0.42 0.63 0.53 0.52 0.6 0.48 0.54 0.55 0.56 0.6 0.54 davinci 0.4 0.56 0.44 0.52 0.51 0.54 0.47 0.52 0.5 0.48 0.53 0.62
![11_image_1.png](11_image_1.png)
![11_image_0.png](11_image_0.png)
![11_image_2.png](11_image_2.png)
![12_image_0.png](12_image_0.png)
| tweet_eval_hate | tweet_eval_atheism | tweet_eval_feminist | sick | | | | | | | | | |
|-----------------------|----------------------|-----------------------|----------------|----------|------|--------|----------|------|--------|----------|------|------|
| Random | Abstract | Gold | Random | Abstract | Gold | Random | Abstract | Gold | Random | Abstract | Gold | |
| ada | 0.51 | 0.51 | 0.52 | 0.44 | 0.37 | 0.48 | 0.4 | 0.42 | 0.41 | 0.37 | 0.44 | 0.44 |
| babbage | 0.48 | 0.54 | 0.55 | 0.36 | 0.41 | 0.31 | 0.44 | 0.33 | 0.48 | 0.54 | 0.38 | 0.54 |
| curie | 0.54 | 0.58 | 0.62 | 0.28 | 0.48 | 0.3 | 0.33 | 0.38 | 0.32 | 0.56 | 0.41 | 0.56 |
| davinci | 0.56 | 0.6 | 0.64 | 0.34 | 0.42 | 0.39 | 0.29 | 0.44 | 0.38 | 0.46 | 0.49 | 0.49 |
| financial_phrasebank | ethos_race | ethos_gender | ethos_religion | | | | | | | | | |
| Random | Abstract | Gold | Random | Abstract | Gold | Random | Abstract | Gold | Random | Abstract | Gold | |
| ada | 0.37 | 0.48 | 0.4 | 0.42 | 0.41 | 0.37 | 0.44 | 0.44 | 0.54 | 0.53 | 0.67 | 0.68 |
| babbage | 0.41 | 0.31 | 0.44 | 0.33 | 0.48 | 0.54 | 0.38 | 0.54 | 0.43 | 0.53 | 0.63 | 0.56 |
| curie | 0.48 | 0.3 | 0.33 | 0.38 | 0.32 | 0.56 | 0.41 | 0.56 | 0.5 | 0.55 | 0.71 | 0.54 |
| davinci | 0.42 | 0.39 | 0.29 | 0.44 | 0.38 | 0.46 | 0.49 | 0.49 | 0.38 | 0.63 | 0.49 | 0.51 |
| ethos_national_origin | snli | sst2 | trec | | | | | | | | | |
| Random | Abstract | Gold | Random | Abstract | Gold | Random | Abstract | Gold | Random | Abstract | Gold | |
| ada | 0.41 | 0.37 | 0.44 | 0.44 | 0.54 | 0.53 | 0.67 | 0.68 | 0.52 | 0.76 | 0.68 | 0.52 |
| babbage | 0.48 | 0.54 | 0.38 | 0.54 | 0.43 | 0.53 | 0.63 | 0.56 | 0.53 | 0.61 | 0.58 | 0.54 |
| curie | 0.32 | 0.56 | 0.41 | 0.56 | 0.5 | 0.55 | 0.71 | 0.54 | 0.56 | 0.55 | 0.49 | 0.55 |
| davinci | 0.38 | 0.46 | 0.49 | 0.49 | 0.38 | 0.63 | 0.49 | 0.51 | 0.6 | 0.56 | 0.5 | 0.57 |
| rte | wnli | mrpc | poem | | | | | | | | | |
| Random | Abstract | Gold | Random | Abstract | Gold | Random | Abstract | Gold | Random | Abstract | Gold | |
| ada | 0.54 | 0.53 | 0.67 | 0.68 | 0.52 | 0.76 | 0.68 | 0.52 | 0.75 | 0.69 | 0.57 | 0.77 |
| babbage | 0.43 | 0.53 | 0.63 | 0.56 | 0.53 | 0.61 | 0.58 | 0.54 | 0.61 | 0.55 | 0.54 | 0.62 |
| curie | 0.5 | 0.55 | 0.71 | 0.54 | 0.56 | 0.55 | 0.49 | 0.55 | 0.59 | 0.49 | 0.55 | 0.59 |
| davinci | 0.38 | 0.63 | 0.49 | 0.51 | 0.6 | 0.56 | 0.5 | 0.57 | 0.59 | 0.54 | 0.67 | 0.63 |
![13_image_0.png](13_image_0.png)
| tweet_eval_hate | tweet_eval_atheism | tweet_eval_feminist | sick | | | | | | | | | |
|-----------------------|----------------------|-----------------------|----------------|----------|------|--------|----------|------|--------|----------|------|------|
| Random | Abstract | Gold | Random | Abstract | Gold | Random | Abstract | Gold | Random | Abstract | Gold | |
| ada | 0.48 | 0.52 | 0.53 | 0.4 | 0.37 | 0.42 | 0.41 | 0.38 | 0.42 | 0.24 | 0.45 | 0.27 |
| babbage | 0.53 | 0.58 | 0.52 | 0.32 | 0.38 | 0.35 | 0.42 | 0.35 | 0.38 | 0.44 | 0.4 | 0.5 |
| curie | 0.54 | 0.59 | 0.66 | 0.26 | 0.47 | 0.31 | 0.38 | 0.4 | 0.43 | 0.57 | 0.41 | 0.57 |
| davinci | 0.57 | 0.64 | 0.66 | 0.29 | 0.51 | 0.37 | 0.28 | 0.49 | 0.37 | 0.43 | 0.52 | 0.49 |
| financial_phrasebank | ethos_race | ethos_gender | ethos_religion | | | | | | | | | |
| Random | Abstract | Gold | Random | Abstract | Gold | Random | Abstract | Gold | Random | Abstract | Gold | |
| ada | 0.37 | 0.42 | 0.41 | 0.38 | 0.42 | 0.24 | 0.45 | 0.27 | 0.55 | 0.56 | 0.69 | 0.66 |
| babbage | 0.38 | 0.35 | 0.42 | 0.35 | 0.38 | 0.44 | 0.4 | 0.5 | 0.51 | 0.58 | 0.65 | 0.51 |
| curie | 0.47 | 0.31 | 0.38 | 0.4 | 0.43 | 0.57 | 0.41 | 0.57 | 0.52 | 0.56 | 0.71 | 0.51 |
| davinci | 0.51 | 0.37 | 0.28 | 0.49 | 0.37 | 0.43 | 0.52 | 0.49 | 0.35 | 0.68 | 0.5 | 0.63 |
| ethos_national_origin | snli | sst2 | trec | | | | | | | | | |
| Random | Abstract | Gold | Random | Abstract | Gold | Random | Abstract | Gold | Random | Abstract | Gold | |
| ada | 0.42 | 0.24 | 0.45 | 0.27 | 0.55 | 0.56 | 0.69 | 0.66 | 0.55 | 0.73 | 0.69 | 0.61 |
| babbage | 0.38 | 0.44 | 0.4 | 0.5 | 0.51 | 0.58 | 0.65 | 0.51 | 0.57 | 0.63 | 0.6 | 0.59 |
| curie | 0.43 | 0.57 | 0.41 | 0.57 | 0.52 | 0.56 | 0.71 | 0.51 | 0.59 | 0.65 | 0.5 | 0.61 |
| davinci | 0.37 | 0.43 | 0.52 | 0.49 | 0.35 | 0.68 | 0.5 | 0.63 | 0.6 | 0.63 | 0.51 | 0.62 |
| rte | wnli | mrpc | poem | | | | | | | | | |
| Random | Abstract | Gold | Random | Abstract | Gold | Random | Abstract | Gold | Random | Abstract | Gold | |
| ada | 0.55 | 0.56 | 0.69 | 0.66 | 0.55 | 0.73 | 0.69 | 0.61 | 0.73 | 0.65 | 0.63 | 0.77 |
| babbage | 0.51 | 0.58 | 0.65 | 0.51 | 0.57 | 0.63 | 0.6 | 0.59 | 0.64 | 0.57 | 0.56 | 0.65 |
| curie | 0.52 | 0.56 | 0.71 | 0.51 | 0.59 | 0.65 | 0.5 | 0.61 | 0.63 | 0.44 | 0.61 | 0.69 |
| davinci | 0.35 | 0.68 | 0.5 | 0.63 | 0.6 | 0.63 | 0.51 | 0.62 | 0.65 | 0.6 | 0.7 | 0.71 |
![14_image_0.png](14_image_0.png) ![15_image_0.png](15_image_0.png)
tweet_eval_hate tweet_eval_atheism tweet_eval_feminist sick
Random Abstract Gold Random Abstract Gold Random Abstract Gold Random Abstract Gold
OPT-350M 0.49 0.51 0.53 0.43 0.34 0.48 0.41 0.31 0.45 0.33 0.34 0.29
OPT-2.7B 0.52 0.55 0.56 0.43 0.36 0.45 0.47 0.34 0.5 0.52 0.34 0.55
OPT-6.7B 0.53 0.53 0.57 0.26 0.33 0.27 0.33 0.39 0.36 0.46 0.36 0.48 OPT-13B 0.55 0.52 0.61 0.4 0.35 0.4 0.49 0.35 0.47 0.36 0.3 0.37 OPT-30B 0.52 0.54 0.55 0.28 0.24 0.35 0.4 0.34 0.46 0.53 0.31 0.55 OPT-66B 0.52 0.55 0.53 0.29 0.38 0.32 0.44 0.37 0.42 0.44 0.36 0.47
financial_phrasebank ethos_race ethos_gender ethos_religion
Random Abstract Gold Random Abstract Gold Random Abstract Gold Random Abstract Gold
OPT-350M 0.34 0.48 0.41 0.31 0.45 0.33 0.34 0.29 0.48 0.36 0.48 0.6
OPT-2.7B 0.36 0.45 0.47 0.34 0.5 0.52 0.34 0.55 0.54 0.42 0.56 0.49
OPT-6.7B 0.33 0.27 0.33 0.39 0.36 0.46 0.36 0.48 0.63 0.44 0.74 0.55 OPT-13B 0.35 0.4 0.49 0.35 0.47 0.36 0.3 0.37 0.59 0.44 0.69 0.62 OPT-30B 0.24 0.35 0.4 0.34 0.46 0.53 0.31 0.55 0.56 0.43 0.61 0.44 OPT-66B 0.38 0.32 0.44 0.37 0.42 0.44 0.36 0.47 0.33 0.44 0.46 0.45
ethos_national_origin snli sst2 trec
Random Abstract Gold Random Abstract Gold Random Abstract Gold Random Abstract Gold
OPT-350M 0.45 0.33 0.34 0.29 0.48 0.36 0.48 0.6 0.49 0.66 0.65 0.51
OPT-2.7B 0.5 0.52 0.34 0.55 0.54 0.42 0.56 0.49 0.53 0.49 0.51 0.56
OPT-6.7B 0.36 0.46 0.36 0.48 0.63 0.44 0.74 0.55 0.54 0.59 0.53 0.57
OPT-13B 0.47 0.36 0.3 0.37 0.59 0.44 0.69 0.62 0.53 0.63 0.55 0.53
OPT-30B 0.46 0.53 0.31 0.55 0.56 0.43 0.61 0.44 0.49 0.42 0.46 0.57 OPT-66B 0.42 0.44 0.36 0.47 0.33 0.44 0.46 0.45 0.55 0.53 0.44 0.55
rte wnli mrpc poem
Random Abstract Gold Random Abstract Gold Random Abstract Gold Random Abstract Gold
OPT-350M 0.48 0.36 0.48 0.6 0.49 0.66 0.65 0.51 0.71 0.66 0.53 0.73
OPT-2.7B 0.54 0.42 0.56 0.49 0.53 0.49 0.51 0.56 0.52 0.48 0.51 0.5 OPT-6.7B 0.63 0.44 0.74 0.55 0.54 0.59 0.53 0.57 0.61 0.53 0.52 0.62 OPT-13B 0.59 0.44 0.69 0.62 0.53 0.63 0.55 0.53 0.61 0.55 0.52 0.62
OPT-30B 0.56 0.43 0.61 0.44 0.49 0.42 0.46 0.57 0.47 0.46 0.51 0.46
OPT-66B 0.33 0.44 0.46 0.45 0.55 0.53 0.44 0.55 0.38 0.49 0.55 0.56
Table 7: Single dataset accuracies across the OPT model family, using 8 examples.
tweet_eval_hate tweet_eval_atheism tweet_eval_feminist sick
Random Abstract Gold Random Abstract Gold Random Abstract Gold Random Abstract Gold
OPT-350M 0.52 0.53 0.55 0.47 0.37 0.49 0.42 0.42 0.44 0.33 0.36 0.35
OPT-2.7B 0.52 0.56 0.58 0.44 0.44 0.47 0.51 0.39 0.46 0.55 0.39 0.57
OPT-6.7B 0.52 0.57 0.57 0.22 0.39 0.28 0.39 0.43 0.41 0.48 0.42 0.54 OPT-13B 0.58 0.54 0.62 0.32 0.44 0.38 0.41 0.39 0.41 0.36 0.4 0.36 OPT-30B 0.51 0.57 0.57 0.34 0.4 0.35 0.41 0.32 0.5 0.55 0.45 0.56 OPT-66B 0.5 0.57 0.54 0.25 0.47 0.31 0.47 0.44 0.48 0.49 0.38 0.51
financial_phrasebank ethos_race ethos_gender ethos_religion
Random Abstract Gold Random Abstract Gold Random Abstract Gold Random Abstract Gold
OPT-350M 0.37 0.49 0.42 0.42 0.44 0.33 0.36 0.35 0.45 0.4 0.47 0.65
OPT-2.7B 0.44 0.47 0.51 0.39 0.46 0.55 0.39 0.57 0.53 0.5 0.58 0.45
OPT-6.7B 0.39 0.28 0.39 0.43 0.41 0.48 0.42 0.54 0.66 0.53 0.8 0.59 OPT-13B 0.44 0.38 0.41 0.39 0.41 0.36 0.4 0.36 0.6 0.53 0.72 0.54 OPT-30B 0.4 0.35 0.41 0.32 0.5 0.55 0.45 0.56 0.56 0.52 0.64 0.35 OPT-66B 0.47 0.31 0.47 0.44 0.48 0.49 0.38 0.51 0.3 0.57 0.49 0.44
ethos_national_origin snli sst2 trec
Random Abstract Gold Random Abstract Gold Random Abstract Gold Random Abstract Gold
OPT-350M 0.44 0.33 0.36 0.35 0.45 0.4 0.47 0.65 0.52 0.71 0.71 0.52
OPT-2.7B 0.46 0.55 0.39 0.57 0.53 0.5 0.58 0.45 0.59 0.47 0.41 0.62
OPT-6.7B 0.41 0.48 0.42 0.54 0.66 0.53 0.8 0.59 0.56 0.71 0.62 0.61
OPT-13B 0.41 0.36 0.4 0.36 0.6 0.53 0.72 0.54 0.53 0.62 0.5 0.55
OPT-30B 0.5 0.55 0.45 0.56 0.56 0.52 0.64 0.35 0.57 0.43 0.38 0.63 OPT-66B 0.48 0.49 0.38 0.51 0.3 0.57 0.49 0.44 0.59 0.51 0.4 0.6
rte wnli mrpc poem
Random Abstract Gold Random Abstract Gold Random Abstract Gold Random Abstract Gold
OPT-350M 0.45 0.4 0.47 0.65 0.52 0.71 0.71 0.52 0.76 0.73 0.51 0.76
OPT-2.7B 0.53 0.5 0.58 0.45 0.59 0.47 0.41 0.62 0.52 0.45 0.54 0.54 OPT-6.7B 0.66 0.53 0.8 0.59 0.56 0.71 0.62 0.61 0.69 0.64 0.61 0.74 OPT-13B 0.6 0.53 0.72 0.54 0.53 0.62 0.5 0.55 0.58 0.55 0.53 0.58
OPT-30B 0.56 0.52 0.64 0.35 0.57 0.43 0.38 0.63 0.5 0.41 0.59 0.51
OPT-66B 0.3 0.57 0.49 0.44 0.59 0.51 0.4 0.6 0.46 0.46 0.59 0.55
Table 8: Single dataset accuracies across the OPT model family, using 16 examples.
| tweet_eval_hate | tweet_eval_atheism | tweet_eval_feminist | sick | | | | | | | | | |
|-----------------------|----------------------|-----------------------|----------------|----------|------|--------|----------|------|--------|----------|------|------|
| Random | Abstract | Gold | Random | Abstract | Gold | Random | Abstract | Gold | Random | Abstract | Gold | |
| OPT-350M | 0.53 | 0.53 | 0.55 | 0.42 | 0.35 | 0.42 | 0.43 | 0.33 | 0.4 | 0.36 | 0.34 | 0.35 |
| OPT-2.7B | 0.51 | 0.59 | 0.59 | 0.31 | 0.42 | 0.42 | 0.43 | 0.39 | 0.42 | 0.53 | 0.4 | 0.57 |
| OPT-6.7B | 0.55 | 0.59 | 0.6 | 0.26 | 0.29 | 0.24 | 0.4 | 0.39 | 0.42 | 0.49 | 0.44 | 0.53 |
| OPT-13B | 0.56 | 0.58 | 0.59 | 0.25 | 0.45 | 0.36 | 0.39 | 0.38 | 0.42 | 0.4 | 0.38 | 0.37 |
| OPT-30B | 0.52 | 0.59 | 0.57 | 0.32 | 0.47 | 0.42 | 0.47 | 0.42 | 0.47 | 0.54 | 0.45 | 0.6 |
| OPT-66B | 0.48 | 0.58 | 0.51 | 0.27 | 0.5 | 0.26 | 0.4 | 0.46 | 0.5 | 0.45 | 0.43 | 0.47 |
| financial_phrasebank | ethos_race | ethos_gender | ethos_religion | | | | | | | | | |
| Random | Abstract | Gold | Random | Abstract | Gold | Random | Abstract | Gold | Random | Abstract | Gold | |
| OPT-350M | 0.35 | 0.42 | 0.43 | 0.33 | 0.4 | 0.36 | 0.34 | 0.35 | 0.44 | 0.38 | 0.44 | 0.67 |
| OPT-2.7B | 0.42 | 0.42 | 0.43 | 0.39 | 0.42 | 0.53 | 0.4 | 0.57 | 0.51 | 0.56 | 0.58 | 0.46 |
| OPT-6.7B | 0.29 | 0.24 | 0.4 | 0.39 | 0.42 | 0.49 | 0.44 | 0.53 | 0.68 | 0.61 | 0.82 | 0.63 |
| OPT-13B | 0.45 | 0.36 | 0.39 | 0.38 | 0.42 | 0.4 | 0.38 | 0.37 | 0.61 | 0.6 | 0.72 | 0.48 |
| OPT-30B | 0.47 | 0.42 | 0.47 | 0.42 | 0.47 | 0.54 | 0.45 | 0.6 | 0.57 | 0.57 | 0.7 | 0.4 |
| OPT-66B | 0.5 | 0.26 | 0.4 | 0.46 | 0.5 | 0.45 | 0.43 | 0.47 | 0.37 | 0.64 | 0.57 | 0.41 |
| ethos_national_origin | snli | sst2 | trec | | | | | | | | | |
| Random | Abstract | Gold | Random | Abstract | Gold | Random | Abstract | Gold | Random | Abstract | Gold | |
| OPT-350M | 0.4 | 0.36 | 0.34 | 0.35 | 0.44 | 0.38 | 0.44 | 0.67 | 0.51 | 0.73 | 0.71 | 0.51 |
| OPT-2.7B | 0.42 | 0.53 | 0.4 | 0.57 | 0.51 | 0.56 | 0.58 | 0.46 | 0.55 | 0.49 | 0.43 | 0.6 |
| OPT-6.7B | 0.42 | 0.49 | 0.44 | 0.53 | 0.68 | 0.61 | 0.82 | 0.63 | 0.65 | 0.74 | 0.62 | 0.65 |
| OPT-13B | 0.42 | 0.4 | 0.38 | 0.37 | 0.61 | 0.6 | 0.72 | 0.48 | 0.56 | 0.57 | 0.44 | 0.64 |
| OPT-30B | 0.47 | 0.54 | 0.45 | 0.6 | 0.57 | 0.57 | 0.7 | 0.4 | 0.55 | 0.42 | 0.36 | 0.66 |
| OPT-66B | 0.5 | 0.45 | 0.43 | 0.47 | 0.37 | 0.64 | 0.57 | 0.41 | 0.63 | 0.52 | 0.36 | 0.67 |
| rte | wnli | mrpc | poem | | | | | | | | | |
| Random | Abstract | Gold | Random | Abstract | Gold | Random | Abstract | Gold | Random | Abstract | Gold | |
| OPT-350M | 0.44 | 0.38 | 0.44 | 0.67 | 0.51 | 0.73 | 0.71 | 0.51 | 0.77 | 0.74 | 0.52 | 0.79 |
| OPT-2.7B | 0.51 | 0.56 | 0.58 | 0.46 | 0.55 | 0.49 | 0.43 | 0.6 | 0.54 | 0.41 | 0.56 | 0.48 |
| OPT-6.7B | 0.68 | 0.61 | 0.82 | 0.63 | 0.65 | 0.74 | 0.62 | 0.65 | 0.78 | 0.65 | 0.64 | 0.77 |
| OPT-13B | 0.61 | 0.6 | 0.72 | 0.48 | 0.56 | 0.57 | 0.44 | 0.64 | 0.5 | 0.45 | 0.53 | 0.5 |
| OPT-30B | 0.57 | 0.57 | 0.7 | 0.4 | 0.55 | 0.42 | 0.36 | 0.66 | 0.46 | 0.4 | 0.71 | 0.54 |
| OPT-66B | 0.37 | 0.64 | 0.57 | 0.41 | 0.63 | 0.52 | 0.36 | 0.67 | 0.49 | 0.4 | 0.69 | 0.56 |
Table 9: Single dataset accuracies across the OPT model family, using 32 examples.
| tweet_eval_hate | tweet_eval_atheism | tweet_eval_feminist | sick | | | | | | | | | |
|-----------------------|----------------------|-----------------------|----------------|----------|------|--------|----------|------|--------|----------|------|------|
| Random | Abstract | Gold | Random | Abstract | Gold | Random | Abstract | Gold | Random | Abstract | Gold | |
| 7B | 0.59 | 0.53 | 0.64 | 0.33 | 0.31 | 0.37 | 0.41 | 0.43 | 0.45 | 0.32 | 0.36 | 0.38 |
| 13B | 0.63 | 0.53 | 0.65 | 0.31 | 0.34 | 0.28 | 0.43 | 0.34 | 0.44 | 0.39 | 0.41 | 0.41 |
| 30B | 0.64 | 0.58 | 0.72 | 0.38 | 0.47 | 0.52 | 0.57 | 0.49 | 0.65 | 0.37 | 0.43 | 0.41 |
| 65B | 0.69 | 0.58 | 0.72 | 0.4 | 0.42 | 0.58 | 0.54 | 0.42 | 0.58 | 0.38 | 0.46 | 0.41 |
| financial_phrasebank | ethos_race | ethos_gender | ethos_religion | | | | | | | | | |
| Random | Abstract | Gold | Random | Abstract | Gold | Random | Abstract | Gold | Random | Abstract | Gold | |
| 7B | 0.31 | 0.37 | 0.41 | 0.43 | 0.45 | 0.32 | 0.36 | 0.38 | 0.64 | 0.4 | 0.7 | 0.65 |
| 13B | 0.34 | 0.28 | 0.43 | 0.34 | 0.44 | 0.39 | 0.41 | 0.41 | 0.42 | 0.35 | 0.61 | 0.61 |
| 30B | 0.47 | 0.52 | 0.57 | 0.49 | 0.65 | 0.37 | 0.43 | 0.41 | 0.65 | 0.38 | 0.79 | 0.69 |
| 65B | 0.42 | 0.58 | 0.54 | 0.42 | 0.58 | 0.38 | 0.46 | 0.41 | 0.6 | 0.44 | 0.83 | 0.69 |
| ethos_national_origin | snli | sst2 | trec | | | | | | | | | |
| Random | Abstract | Gold | Random | Abstract | Gold | Random | Abstract | Gold | Random | Abstract | Gold | |
| 7B | 0.45 | 0.32 | 0.36 | 0.38 | 0.64 | 0.4 | 0.7 | 0.65 | 0.56 | 0.73 | 0.61 | 0.53 |
| 13B | 0.44 | 0.39 | 0.41 | 0.41 | 0.42 | 0.35 | 0.61 | 0.61 | 0.52 | 0.66 | 0.59 | 0.5 |
| 30B | 0.65 | 0.37 | 0.43 | 0.41 | 0.65 | 0.38 | 0.79 | 0.69 | 0.52 | 0.76 | 0.65 | 0.52 |
| 65B | 0.58 | 0.38 | 0.46 | 0.41 | 0.6 | 0.44 | 0.83 | 0.69 | 0.55 | 0.75 | 0.65 | 0.56 |
| rte | wnli | mrpc | poem | | | | | | | | | |
| Random | Abstract | Gold | Random | Abstract | Gold | Random | Abstract | Gold | Random | Abstract | Gold | |
| 7B | 0.64 | 0.4 | 0.7 | 0.65 | 0.56 | 0.73 | 0.61 | 0.53 | 0.7 | 0.71 | 0.52 | 0.78 |
| 13B | 0.42 | 0.35 | 0.61 | 0.61 | 0.52 | 0.66 | 0.59 | 0.5 | 0.64 | 0.71 | 0.54 | 0.78 |
| 30B | 0.65 | 0.38 | 0.79 | 0.69 | 0.52 | 0.76 | 0.65 | 0.52 | 0.77 | 0.67 | 0.56 | 0.86 |
| 65B | 0.6 | 0.44 | 0.83 | 0.69 | 0.55 | 0.75 | 0.65 | 0.56 | 0.77 | 0.73 | 0.6 | 0.87 |
Table 10: Single dataset accuracies across the LLaMA model family, using 8 examples.
| tweet_eval_hate | tweet_eval_atheism | tweet_eval_feminist | sick | | | | | | | | | |
|-----------------------|----------------------|-----------------------|----------------|----------|------|--------|----------|------|--------|----------|------|------|
| Random | Abstract | Gold | Random | Abstract | Gold | Random | Abstract | Gold | Random | Abstract | Gold | |
| 7B | 0.61 | 0.58 | 0.66 | 0.33 | 0.49 | 0.37 | 0.41 | 0.35 | 0.45 | 0.31 | 0.43 | 0.36 |
| 13B | 0.6 | 0.58 | 0.66 | 0.27 | 0.5 | 0.34 | 0.4 | 0.34 | 0.42 | 0.37 | 0.42 | 0.41 |
| 30B | 0.67 | 0.67 | 0.74 | 0.37 | 0.54 | 0.53 | 0.47 | 0.5 | 0.62 | 0.36 | 0.51 | 0.42 |
| 65B | 0.66 | 0.62 | 0.73 | 0.37 | 0.56 | 0.6 | 0.52 | 0.53 | 0.6 | 0.38 | 0.55 | 0.42 |
| financial_phrasebank | ethos_race | ethos_gender | ethos_religion | | | | | | | | | |
| Random | Abstract | Gold | Random | Abstract | Gold | Random | Abstract | Gold | Random | Abstract | Gold | |
| 7B | 0.49 | 0.37 | 0.41 | 0.35 | 0.45 | 0.31 | 0.43 | 0.36 | 0.65 | 0.46 | 0.72 | 0.6 |
| 13B | 0.5 | 0.34 | 0.4 | 0.34 | 0.42 | 0.37 | 0.42 | 0.41 | 0.41 | 0.39 | 0.59 | 0.56 |
| 30B | 0.54 | 0.53 | 0.47 | 0.5 | 0.62 | 0.36 | 0.51 | 0.42 | 0.64 | 0.49 | 0.84 | 0.6 |
| 65B | 0.56 | 0.6 | 0.52 | 0.53 | 0.6 | 0.38 | 0.55 | 0.42 | 0.56 | 0.54 | 0.87 | 0.62 |
| ethos_national_origin | snli | sst2 | trec | | | | | | | | | |
| Random | Abstract | Gold | Random | Abstract | Gold | Random | Abstract | Gold | Random | Abstract | Gold | |
| 7B | 0.45 | 0.31 | 0.43 | 0.36 | 0.65 | 0.46 | 0.72 | 0.6 | 0.53 | 0.72 | 0.57 | 0.59 |
| 13B | 0.42 | 0.37 | 0.42 | 0.41 | 0.41 | 0.39 | 0.59 | 0.56 | 0.51 | 0.66 | 0.59 | 0.5 |
| 30B | 0.62 | 0.36 | 0.51 | 0.42 | 0.64 | 0.49 | 0.84 | 0.6 | 0.58 | 0.74 | 0.6 | 0.65 |
| 65B | 0.6 | 0.38 | 0.55 | 0.42 | 0.56 | 0.54 | 0.87 | 0.62 | 0.58 | 0.75 | 0.66 | 0.65 |
| rte | wnli | mrpc | poem | | | | | | | | | |
| Random | Abstract | Gold | Random | Abstract | Gold | Random | Abstract | Gold | Random | Abstract | Gold | |
| 7B | 0.65 | 0.46 | 0.72 | 0.6 | 0.53 | 0.72 | 0.57 | 0.59 | 0.67 | 0.65 | 0.59 | 0.78 |
| 13B | 0.41 | 0.39 | 0.59 | 0.56 | 0.51 | 0.66 | 0.59 | 0.5 | 0.73 | 0.69 | 0.54 | 0.78 |
| 30B | 0.64 | 0.49 | 0.84 | 0.6 | 0.58 | 0.74 | 0.6 | 0.65 | 0.74 | 0.65 | 0.64 | 0.85 |
| 65B | 0.56 | 0.54 | 0.87 | 0.62 | 0.58 | 0.75 | 0.66 | 0.65 | 0.78 | 0.73 | 0.64 | 0.85 |
| tweet_eval_hate | tweet_eval_atheism | tweet_eval_feminist | sick | | | | | | | | | |
|-----------------------|----------------------|-----------------------|----------------|----------|------|--------|----------|------|--------|----------|------|------|
| Random | Abstract | Gold | Random | Abstract | Gold | Random | Abstract | Gold | Random | Abstract | Gold | |
| 7B | 0.58 | 0.58 | 0.64 | 0.33 | 0.51 | 0.35 | 0.4 | 0.38 | 0.47 | 0.36 | 0.46 | 0.4 |
| 13B | 0.6 | 0.59 | 0.68 | 0.3 | 0.46 | 0.37 | 0.41 | 0.42 | 0.46 | 0.36 | 0.42 | 0.42 |
| 30B | 0.65 | 0.64 | 0.73 | 0.32 | 0.53 | 0.6 | 0.48 | 0.51 | 0.63 | 0.35 | 0.55 | 0.42 |
| 65B | 0.64 | 0.68 | 0.78 | 0.38 | 0.51 | 0.6 | 0.45 | 0.49 | 0.63 | 0.36 | 0.62 | 0.43 |
| financial_phrasebank | ethos_race | ethos_gender | ethos_religion | | | | | | | | | |
| Random | Abstract | Gold | Random | Abstract | Gold | Random | Abstract | Gold | Random | Abstract | Gold | |
| 7B | 0.51 | 0.35 | 0.4 | 0.38 | 0.47 | 0.36 | 0.46 | 0.4 | 0.64 | 0.5 | 0.74 | 0.61 |
| 13B | 0.46 | 0.37 | 0.41 | 0.42 | 0.46 | 0.36 | 0.42 | 0.42 | 0.38 | 0.38 | 0.56 | 0.65 |
| 30B | 0.53 | 0.6 | 0.48 | 0.51 | 0.63 | 0.35 | 0.55 | 0.42 | 0.61 | 0.61 | 0.88 | 0.66 |
| 65B | 0.51 | 0.6 | 0.45 | 0.49 | 0.63 | 0.36 | 0.62 | 0.43 | 0.52 | 0.66 | 0.88 | 0.59 |
| ethos_national_origin | snli | sst2 | trec | | | | | | | | | |
| Random | Abstract | Gold | Random | Abstract | Gold | Random | Abstract | Gold | Random | Abstract | Gold | |
| 7B | 0.47 | 0.36 | 0.46 | 0.4 | 0.64 | 0.5 | 0.74 | 0.61 | 0.59 | 0.67 | 0.47 | 0.62 |
| 13B | 0.46 | 0.36 | 0.42 | 0.42 | 0.38 | 0.38 | 0.56 | 0.65 | 0.53 | 0.73 | 0.67 | 0.57 |
| 30B | 0.63 | 0.35 | 0.55 | 0.42 | 0.61 | 0.61 | 0.88 | 0.66 | 0.6 | 0.74 | 0.55 | 0.6 |
| 65B | 0.63 | 0.36 | 0.62 | 0.43 | 0.52 | 0.66 | 0.88 | 0.59 | 0.63 | 0.76 | 0.58 | 0.66 |
| rte | wnli | mrpc | poem | | | | | | | | | |
| Random | Abstract | Gold | Random | Abstract | Gold | Random | Abstract | Gold | Random | Abstract | Gold | |
| 7B | 0.64 | 0.5 | 0.74 | 0.61 | 0.59 | 0.67 | 0.47 | 0.62 | 0.69 | 0.65 | 0.64 | 0.79 |
| 13B | 0.38 | 0.38 | 0.56 | 0.65 | 0.53 | 0.73 | 0.67 | 0.57 | 0.76 | 0.7 | 0.62 | 0.83 |
| 30B | 0.61 | 0.61 | 0.88 | 0.66 | 0.6 | 0.74 | 0.55 | 0.6 | 0.8 | 0.57 | 0.65 | 0.82 |
| 65B | 0.52 | 0.66 | 0.88 | 0.59 | 0.63 | 0.76 | 0.58 | 0.66 | 0.77 | 0.63 | 0.73 | 0.87 |
Table 12: Single dataset accuracies across the LLaMA model family, using 32 examples.
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
✓ A1. Did you describe the limitations of your work?
5
✗ A2. Did you discuss any potential risks of your work?
Our investigation focuses on providing an empirical explanation of the behavior of ICL. We do not spot immediate concerns for risk but are happy to supplement as needed.
✓ A3. Do the abstract and introduction summarize the paper's main claims?
1
✓ A4. Have you used AI writing assistants when working on this paper?
Some authors used Writefull for Overleaf, which is a grammar checker and phrase-suggestion tool.
It was used to double-check spelling and phrasing for all sections of the paper.
## B ✓ **Did You Use Or Create Scientific Artifacts?** 3
✓ B1. Did you cite the creators of artifacts you used?
Appendix A
✗ B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
The datasets that we use are publicly available on Huggingface.
✗ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
We do not create any artifacts; we use publicly available datasets to study the behavior of LMs performing ICL. We do not spot immediate concerns for inappropriate use but are happy to supplement as needed.
✗ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
We use publicly available datasets to study the behavior of LMs performing ICL and assume that the dataset creators have rigorously anonymized the sources of their data.
✗ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
We use publicly available English-based datasets across a wide spread of domains; dpcumentation is available at Huggingface and other public repositories.
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
We describe how many demonstrations we use per prompt and how many examples we use to evaluate model performance. We sample at most 32 demonstrations per prompt and do not perform any fine-tuning; thus, we do not currently find details of train/test/dev splits to be critical, but are happy to supplement as needed.
The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance.
## C ✓ **Did You Run Computational Experiments?** 3
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
3
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
3
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
4
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
3
## D ✗ **Did You Use Human Annotators (E.G., Crowdworkers) Or Research With Human Participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
Not applicable. Left blank.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? Not applicable. Left blank.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
Not applicable. Left blank.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
Not applicable. Left blank.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
Not applicable. Left blank. |
nie-etal-2023-cross | Cross-Lingual Retrieval Augmented Prompt for Low-Resource Languages | https://aclanthology.org/2023.findings-acl.528 | Multilingual Pretrained Language Models (MPLMs) perform strongly in cross-lingual transfer. We propose Prompts Augmented by Retrieval Crosslingually (PARC) to improve zero-shot performance on low-resource languages (LRLs) by augmenting the context with prompts consisting of semantically similar sentences retrieved from a high-resource language (HRL). PARC improves zero-shot performance on three downstream tasks (sentiment classification, topic categorization, natural language inference) with multilingual parallel test sets across 10 LRLs covering 6 language families in unlabeled (+5.1{\%}) and labeled settings (+16.3{\%}). PARC also outperforms finetuning by 3.7{\%}. We find a significant positive correlation between cross-lingual transfer performance on one side, and the similarity between high- and low-resource languages as well as the amount of low-resource pretraining data on the other side. A robustness analysis suggests that PARC has the potential to achieve even stronger performance with more powerful MPLMs. | # Cross-Lingual Retrieval Augmented Prompt For Low-Resource Languages
Ercong Nie⋆ 1,2 Sheng Liang⋆ 1,2 Helmut Schmid1 **Hinrich Schütze**1,2 1Center for Information and Language Processing (CIS), LMU Munich, Germany 2 Munich Center for Machine Learning (MCML), Munich, Germany
{nie, shengliang}@cis.lmu.de
## Abstract
Multilingual Pretrained Language Models
(MPLMs) perform strongly in cross-lingual transfer. We propose Prompts Augmented by Retrieval Crosslingually (**PARC**) to improve zero-shot performance on low-resource languages (LRLs) by augmenting the context with prompts consisting of semantically similar sentences retrieved from a high-resource language (HRL). PARC improves zero-shot performance on three downstream tasks (sentiment classification, topic categorization, natural language inference) with multilingual parallel test sets across 10 LRLs covering 6 language families in unlabeled (+5.1%) and labeled settings
(+16.3%). PARC also outperforms finetuning by 3.7%. We find a significant positive correlation between cross-lingual transfer performance on one side, and the similarity between high- and low-resource languages as well as the amount of low-resource pretraining data on the other side. A robustness analysis suggests that PARC has the potential to achieve even stronger performance with more powerful MPLMs.
## 1 Introduction
Multilingual pretrained language models (MPLMs)
(Devlin et al., 2019; Conneau et al., 2020; Liu et al.,
2020; Xue et al., 2021; Shliazhko et al., 2022),
pretrained on multilingual corpora with >100 languages, exhibit strong multilinguality on downstream tasks (Hu et al., 2020).
Low-resource languages, for which little text data is available for pretraining monolingual pretrained language models (PLMs), benefit from MPLMs. However, the lack of LRL data leads to an imbalanced language distribution in the pretraining corpora of MPLMs (Wu and Dredze, 2020). LRLs are therefore under-represented in pretraining, resulting in bad performance. Furthermore, the scarcity of domain- or task-specific annotated data of LRLs makes it difficult to apply the
⋆ Equal Contribution.
![0_image_0.png](0_image_0.png)
Figure 1: Main idea of PARC: we enhance zero-shot learning for low-resource languages (LRLs) by crosslingual retrieval from labeled/**unlabeled** high-resource languages (HRLs). (a) An LRL input sample is taken as query by the cross-lingual retriever to retrieve the semantically most similar HRL sample from the HRL
corpus. The label of the retrieved HRL sample is obtained either from the corpus (**labeled** setting) or by self-prediction (**unlabeled** setting). (b) The retrieved HRL sample together with its label and the input sample are reformulated as prompts. The cross-lingual retrievalaugmented prompt is created by concatenation and taken by the MPLM for prediction. Our experiments show that PARC outperforms other zero-shot methods and even finetuning.
pretraining-finetuning paradigm to LRLs (Lauscher et al., 2020). Given that the pretraining-finetuning paradigm always has a high demand for domainspecific labeled data, another line of research –
prompt-based learning - emerges, focusing on exploiting large pretrained language models by reformulating the input. The prompt is designed to help PLMs "understand" the task better and "recall" what has been learned during the pretraining. In particular, Brown et al. (2020) propose a simple incontext learning approach without any finetuning, which adds training examples as additional context to test examples. Instead of using random examples as context, KATE (Liu et al., 2022a) and SOUP
(Liu et al., 2022b) retrieve semantically similar examples as prompt for monolingual in-context learning. The above mentioned prompt-based learning techniques require no parameter updating, while there is also work employing sampled similar examples for prompt-based funetuning (Gao et al.,
2021). Unlike Brown et al. (2020) who created prompts with manually selected examples, these approaches augment the context by retrieving related information from external corpora, allowing the PLMs to capture more domain- or task-specific knowledge. The prompt-based method offers a new form of zero-shot or few-shot learning in multilingual NLP studies. It involves performing a specific task using prompts, without labeled data in the target language and has the potential of being an effective method for LRLs lacking annotated data.
Our work improves the zero-shot transfer learning performance of LRLs on three different classification tasks by taking advantage of cross-lingual information retrieval and the multilinguality of MPLMs. Specifically, we retrieve semantically similar cross-lingual sentences as prompts and use the cross-lingual retrieval information to benefit the LRLs from the multilinguality of MPLMs and achieve better performance in the zero-shot setting1. Our main contributions are: (1) We propose Prompts Augmented by Retrieval Crosslingually
(**PARC**), a pipeline for integrating retrieved crosslingual information into prompt engineering for zero-shot learning (Figure 1). (2) We conduct experiments on several multilingual tasks, showing that PARC improves the zero-shot performance on LRLs by retrieving examples from both labeled and unlabeled HRL corpora. (3) To find an optimal configuration of our PARC pipeline, we conduct a comprehensive study on the variables that affect the zero-shot performance: the number of prompts, the choice of HRL, and the robustness w.r.t. other retrieval methods and MPLMs.
## 2 Related Work
Retrieval methods External knowledge extracted by information retrieval is often leveraged to solve NLP tasks. Two types of representations have been used for retrieval: (1) sparse bag-ofwords representations (Chen et al., 2017; Wang et al., 2018), and (2) dense representation learned by neural networks (Qu et al., 2020). Dense representations come either from contextual token embeddings (May et al., 2019; Zhang et al., 2020)
or from sentence encoders (Conneau et al., 2017; Cer et al., 2018). Reimers and Gurevych (2019)
propose sentence transformers to create semantically meaningful sentence embeddings by applying siamese and triplet network structures to transformer-based pretrained language models. By using knowledge distillation, sentence transformers can be expanded to support various languages as multilingual sentence transformers (Reimers and Gurevych, 2020), allowing for cross-lingual retrieval.
Retrieval augmented prompt Brown et al.
(2020) show that large-scale pretrained language models such as GPT-3 can learn to perform a task by putting examples of input-output pairs into the input as context. The in-context learning method simply concatenates the input with examples randomly extracted from the training set. Recent studies (Gao et al., 2021; Liu et al., 2022a,b) augment the prompts for pre-trained models by sampling semantically similar examples. They apply the retrieval augmented method to discrete prompts, which are represented by tokens instead of vectors in a continuous space. They use them either for finetuning in few-shot settings or for zero-shot learning.
Chowdhury et al. (2022) use a similar kNN-based retrieval method for tuning the soft prompts in a continuous space with a standard supervised training setup. Previous work focused on monolingual retrieval-augmented prompts. Our work applies cross-lingual retrieval to discrete prompts in a scenario without parameter updating. To the best of our knowledge, our work is the first to investigate prompt learning augmented by cross-lingual retrieval.
Multilingual prompt learning Despite the success of prompting in English, prompting in multilingual tasks has not been extensively studied. Winata et al. (2021) show the multilingual skills of LMs mainly trained on English data in prompt learning by giving them a few English examples as context but testing them on non-English data. Some recent works investigate the prompt learning with multilingual PLMs (Zhao and Schütze, 2021; Huang et al.,
2022). Unlike our work, they focus on finetuning or prompt tuning requiring parameter updating. We apply our method to LRLs in a zero-shot setting without adjusting the model parameters.
## 3 Methodology
This work aims to improve the performance of MPLMs on LRLs in the zero-shot setting by leveraging retrieved cross-lingual contents from HRLs.
For that, we design the PARC pipeline that can be applied to labeled and unlabeled scenarios, i.e., the HRL information can be retrieved from either labeled or unlabeled corpora.
As Figure 1 shows, the PARC pipeline consists of two steps: (a) Cross-lingual retrieval from highresource language corpora, and (b) prediction with a retrieval-augmented prompt. Figure 1 shows an example: A Telugu input sentence from a sentiment classification task is firstly fed into the crosslingual retriever to fetch the semantically closest sample from the HRL corpus, i.e. English in this case. In the second step, the retrieved HRL sample together with its label and the LRL input sentence are transformed into a prompt. For prompt-based classification, we need (i) a *pattern* which converts the input sentence into a cloze-style question with a mask token, and (ii) a representative word (called verbalizer) for each possible class. Converting the classification task into a cloze-style question aligns seamlessly with the framework of our proposed PARC method, because it not only performs zeroshot learning well but, more significantly, facilitates better integration of the retrieved cross-lingual contexts.
In our example, we use the pattern P(X) = X◦
"In summary, the product was [MASK]."
to convert the retrieved English sentence into
"Wonderful! Works as stated! In summary, the product was [MASK].", where ◦ is the string concatenation operator. A verbalizer such as {pos
→ "great", neg → "terrible"}, which maps the original labels {pos, neg} onto words in the vocabulary, is then used to replace the [MASK] token with the verbalized label word "great", standing for the correct label pos of this sentence. We call the resulting English sentence (in our example: "Wonderful!
Works as stated! In summary, the product was great.") the "cross-lingual context". At last, we fill the same pattern with the input Telugu sentence and append it to the cross-lingual context.
We feed this cross-lingual retrieval augmented input to the MPLM. The MPLM returns for each of the verbalizers its probability of being the masked token.
More formally, let XL
i ∈ DL be the input sample from the LRL test set, (XH
j
, yj ) ∈ DH
lb and XH
j ∈ DH
un denote the HRL data from the *labeled* and *unlabeled* corpora, respectively, where Xj is the text sample and yj its class label from a label set Y . As Eq. (1) shows, the cross-lingual retriever CLR takes the HRL corpora DH and a given LRL input sentence XL
i
. It returns an ordered list of HRL sentences DRi according to the semantic similarity. We then have (X
Ri k, y Ri k
) ∈ D
Ri lb and X
Ri k ∈ DRi un for labeled and unlabeled scenarios, respectively, where X
Ri kis the k-th most similar HRL sentence to the LRL input XL
i
.
$$D^{R_{i}}=C L R(X_{i}^{L},D^{H})$$
$\eqref{eq:walpha}$
i, DH) (1)
The prompt pattern P(.) converts an HRL input sentence X
Ri kinto a cloze-style form with a mask token. The verbalizer v(.) is a bijective mapping from the set of class labels Y to a set of verbalized words V from the HRL vocabulary. We use the verbalized label word to fill in the mask token in the prompt pattern, and construct the cross-lingual context C
i k for the input XL
i with the k-th most similar HRL sample X
Ri k:
$$C_{k}^{i}=P(X_{k}^{R_{i}},v(y_{k}^{R_{i}}))$$
$$\left(2\right)$$
)) (2)
The cross-lingual context C
i k is then concatenated with the prompted LRL input as the input I
to the MPLM:
$$I_{i}=C_{k}^{i}\circ P(X_{i}^{L})$$
$$({\mathfrak{I}})$$
i) (3)
The MPLM M performs masked token prediction and returns the probabilities p = M(Ii) of all candidate words for the masked token in Ii. We predict the class yˆ whose verbalizer v(ˆy) received the highest probability from model M:
$${\hat{y}}=\arg\operatorname*{max}_{y\in Y}p(v(y))$$
$$(4)$$
p(v(y)) (4)
In the labeled scenario, we obtain the correct label y Ri kof the HRL sentence from D
Ri lb . In the unlabeled scenario, we predict the label using the same prompt-based classification method without cross-lingual context, similar to Eq. (4). We call this the *self-prediction* method:
$$\hat{y}_{k}^{R_{i}}=\arg\operatorname*{max}_{y\in Y}M(P(X_{k}^{R_{i}}),v(y))\qquad(5)$$
In order to use more cross-lingual information, we retrieve the K most similar HRL samples. With each sample, we obtain verbalizer probabilities as described above and retrieve the class whose verbalizer has the largest sum of probabilities. We call this method the Bag-of-Retrieval (BoR) strategy. We also tried concatenating the different crosslingual contexts (CONC method), but the resulting performance has been worse (see Table 15 in the Appendix).
## 4 Experimental Setup 4.1 Datasets
Base Datasets Three representative classification tasks are selected for evaluation in this work: binary sentiment analysis on Amazon product reviews (Keung et al., 2020), topic classification on AG News texts (Zhang et al., 2015), and natural language inference on XNLI (Conneau et al., 2018).
Amazon Reviews dataset categorizes the shopping reviews into 5 star ratings from 1 to 5. In order to satisfy a binary classification setting, we select the reviews with rating 1 as negative (0) and 5 as positive (1) for our experiments. The following pattern P(X) and verbalizer v are defined for an input review text X:
- $P(X)=X\circ\text{``All}$.
- P(X) = X ◦ "All in all, it was [MASK]."
- $v(0)=\text{``terrible"}$, $v(1)=\text{``great"}$.
AG News is a collection of more than 1 million news articles for topic classification. The news topic categories contained in the dataset are World
(0), Sports (1) , Business (2), and Tech (3). The pattern and verbalizers are as follows:
- $P(X)=\text{````[MASK]`}$ News: " $\circ$ $X$ .
$$\begin{array}{l}{{\mathrm{•~}v(0)="W o r l d",\,v(1)="S p o r t s",}}\\ {{v(2)="B u s i n e s",\,v(3)="T c h"}}\end{array}$$
XNLI is a multilingual version of the MultiNLI
dataset (Williams et al., 2018). We use a subset of the original XNLI dataset in our experiment.
The text in each data item consists of two parts.
Sentence A is the premise and sentence B is the hypothesis. The NLI task is to predict the type of inference between the given premise and hypothesis among the three types: entailment (0),
neutral (1) and contradiction (2). For a given sentence pair X1 and X2, we design the pattern and verbalizer as:
- $P(X_1,X_2)=X_1\circ\text{"?[MASK],"}\circ X_2$ - $v(0)\;=\;\text{"Yes"},\;v(1)\;=\;\text{"Maybe"},\;v(2)\;=\;0$ - "No" .
Construction of Multilingual Parallel Test Sets Parallel test datasets for evaluating cross-lingual transfer performance on LRLs are rare. However, recent research conducted by Hu et al. (2020); Liu et al. (2022c) shows that automatically translated test sets are useful for measuring cross-lingual performance. Hence, we adopt their methodology and construct datasets for different tasks by automatically translating English test sets to targeted LRLs.
We use the Python API of the Google Translate System to implement the construction of multilingual parallel test sets in our experiment. We also validate the translation effectiveness and quality. The original XNLI datasets include two low-resource languages that are used in our experiments (Swahili and Urdu), so we use them as the "gold" standard for our translation validation. We compare the cross-lingual transfer performance on translation test sets and original test sets of XNLI. We also measure the translation quality by using the original sets as gold standard. Through the validation conducted on these two languages within the XNLI
task, we infer that the translation method is effective and could be generalized to other languages and tasks. Detailed results are shown in Appendix §A.
Following Wu and Dredze (2020), we regard languages with a WikiSize2 of less than 7 as LRLs.
We construct a test set consisting of 10 LRLs in 6 language families: Indo-European (Afrikaans - af, Urdu - ur), Austronesian (Javanese - jv, Tagalog -
ta), Altaic (Mongolian - mn, Uzbek - uz), Dravidian
(Tamil - tl and Telugu - te), Sino-Tibetan (Burmese
- my), and Niger-Congo (Swahili - sw). Table 18 in the Appendix shows more information on the test sets.
HRL Corpora To retrieve rich and diverse information, a large-scale general corpus or knowledge base in the different HRLs would be the ideal 2WikiSize less than 7 means that the Wikipedia corpus of the language is smaller than 0.177 GB.
sentence retrieval pool. In practice, however, a trade-off is necessary in order to save computational resources. Following Wang et al. (2022), we therefore use the task-specific labeled training set of English as the sentence pool in our experiments.
The selection of the HRL will be discussed in §6.2.
## 4.2 Baseline
We compare PARC with the following baselines in both labeled and unlabeled settings:
MAJ The majority baseline. Since we construct the test sets to be balanced, MAJ is equivalent to random guess.
Random We randomly retrieve a cross-lingual sentence as prompt, similarly to the simple incontext learning using examples without semantic similarity to the input (Brown et al., 2020).
Direct The pattern filled with the input sample is directly fed to the MPLM for prediction, without adding cross-lingual context to the prompts.
Finetune The MPLM is first finetuned with the retrieved high resource sentences. Then the lowresource test input is predicted by the finetuned MPLM. We use the Cross Entropy Loss as the objective function for finetuning and AdamW for optimization with a learning rate of 1e-5. Since the finetuning data is very limited, we only train for a single epoch to avoid overfitting.
Our test sets are constructed by machine translation. Therefore we cannot apply a translation baseline, where we translate the input sample into the high resource language before feeding it to the MPLM. The Appendix presents a different experiment where we compare with a translation baseline.
## 4.3 Models
Cross-Lingual Retriever The retrieval methods used in monolingual NLP are either based on sparse or dense representations. Sparse representations such as BM25 (Manning et al., 2008) which is based on term frequency, cannot be used for crosslingual retrieval as the shared words across different languages are normally scarce. Therefore dense representations from deep learning methods such as LASER (Artetxe and Schwenk, 2019) and sentence-BERT (Reimers and Gurevych, 2019) are more suitable for our pipeline.
We choose the multilingual sentence transformer (Reimers and Gurevych, 2020) version
"*paraphrase-multilingual-mpnet-base-v2*" as the retriever in our experiments. This multilingual retriever is based on XLM-R (Conneau et al., 2020)
| Amazon | AGNews | XNLI | Avg. | |
|----------------|----------|--------|--------|------|
| MAJ | 50.0 | 25.0 | 33.3 | 36.1 |
| Random | 48.2 | 25.6 | 32.4 | 35.4 |
| Direct | 53.8 | 36.3 | 33.1 | 41.1 |
| Finetune | 68.6 | 57.9 | 34.5 | 53.7 |
| PARC-unlabeled | 58.4 | 46.7 | 33.5 | 46.2 |
| PARC-labeled | 68.9 | 67.6 | 35.8 | 57.4 |
and trained on parallel data from 50+ languages by employing knowledge distillation. Through the multilingual sentence transformer, sentences are represented by embeddings. We use the sentence embeddings to calculate the cosine similarity between the LRL inputs and HRL sentences and rank the most similar ones for retrieval. Robustness with respect to other cross-lingual retrievers will be discussed in §6.3.
## Multilingual Pretrained Language Model In
order to solve cloze-style classification tasks, we use the pretrained multilingual BERT model "*bertbase-multilingual-cased*" (Devlin et al., 2019). It contains 178M parameters and was trained on Wikipedia corpora in 104 languages. In §6.3, we will also explore XLM-R (Conneau et al., 2020),
another multilingual pretrained language model.
All the models mentioned above were implemented using the Huggingface Transformers library (Wolf et al., 2020).
## 5 Results
Table 1 presents an overview of the results on the three tasks3. PARC outperforms the MAJ, *Direct* and *Random* baseline on all three tasks, in both labeled and unlabeled settings: When retrieving from unlabeled high-resource language corpora, the performance is improved by 4.6%, **10.4%** and **0.4%**
compared to *Direct* on Amazon Review, AG News, and XNLI respectively. When retrieving from labeled HRL corpora, the performance is improved by 15.1%, **31.3%** and **2.7%**. The *Finetune* baseline uses the label of retrieved examples for promptbased finetuning. Hence it is fair to compare it with PARC in the labeled setup rather than the unlabeled one. *PARC-labeled* outperforms *Finetune* by **0.3%**,
9.7% and **1.3%** on the three tasks respectively.
Although our proposed methods perform better 3k = 1 unless otherwise specified.
than the baselines on all three tasks, the degree of improvement differs. A large improvement is found on AG News, the topic categorization task, while XNLI only shows a slight improvement. An explanation for this could be that the natural language inference task is more difficult than topic categorization, especially in a zero-shot setup. Also, prior work has shown that designing cloze-style patterns and searching the answer space for NLI tasks
(Schick and Schütze, 2021; Webson and Pavlick, 2022) is difficult.
We also find that PARC-labeled noticeably outperforms PARC-unlabeled, indicating that the performance of self-prediction is limited by the capabilities of mBERT. More powerful MPLMs and better pattern designs might further improve the performance.
To analyze the performance for every language in detail, we present the complete experimental results for the topic categorization task on AG News in Table 2. Here, we use the BoR method to take advantage of multiple retrieved HRL sentences. As expected, PARC outperforms the *Direct* baseline on all languages in both labeled and unlabeled settings.
However, it is worth noting that the sensitivity to cross-lingual retrieval differs from language to language. For some LRLs, e.g. Urdu (Ur) and Uzbek
(Uz), PARC's improvement from cross-lingual retrieval is smaller. Others gain more, e.g. Javanese
(Jv). Retrieving more samples increases the performance up to k=30 except for Telugu (Te) and Swahili (Sw) where the max is reached for k=20.
We now turn to the following two questions: 1)
How does k affect the performance on other tasks than topic categorization? 2) Which LRLs profit most from our PARC method and which HRLs are best suited to retrieve prompts?
## 6 Analysis 6.1 Effect Of K
We investigated how the performance changes as the number of retrieved HRL samples k increases.
As shown in Figure 2, an abrupt accuracy increase can be seen in both labeled and unlabeled scenarios by concatenating the most similar cross-lingual sample. In labeled scenarios, the performance tends to increase up to k=20 and then levels off.
This can be explained by the fact that later retrieved samples are less similar to the input sample, so their contribution as prompts decreases. In unlabeled
![5_image_0.png](5_image_0.png)
scenarios, there is no clear improvement beyond k=1 except for AGNews(UN), where the accuracy increases monotonically except for k=10. The performance of XNLI is less obviously influenced by the value of k than binary sentiment analysis and topic categorization. We assume that this could be attributed to the difficulty of the inference task.
Unlike the other two single sentence classification tasks, XNLI identifies the relationship between a pair of sentences. Transferring knowledge about sentence relationships is more complicated and requires more samples to learn, in contrast to the other two tasks where semantic information from similar cross-lingual sentences can be transferred directly.
## 6.2 Effect Of Languages
Lauscher et al. (2020) pointed out that two linguistic factors exert crucial effects on cross-lingual transfer performance: (1) the size of the pretraining corpus for the target language and (2) the similarity between the source and target language. In our study, we also consider a third factor: (3) the size of the pretraining corpus for the source language. In this section, we conduct a correlation analysis between PARC's cross-lingual transfer performance and the three language-related factors mentioned above. To achieve that, we have to measure these factors in a proper way at first. The size of the pretraining corpus can be easily measured by the log2 value of the Wikipedia size in MB, as we mentioned in §4. Thus the remaining problem is how to properly represent language similarity.
## 6.2.1 Measurement Of Language Similarity
Malaviya et al. (2017) and Littell et al. (2017) propose LANG2VEC from linguistic, typological, and phylogenetic perspectives. LANG2VEC employs different vectors to represent various types of lin-
![6_image_0.png](6_image_0.png)
En Af Jv Mn My Sw Ta Te Tl Ur Uz Avg
![6_image_1.png](6_image_1.png)
![6_image_2.png](6_image_2.png)
MAJ 25.0 25.0 25.0 25.0 25.0 25.0 25.0 25.0 25.0 25.0 25.0 25.0 Direct 52.5 41.8 27.4 42.5 32.2 31.3 31.5 33.0 31.6 46.9 44.8 36.3 k=1 53.7 52.8 46.2 46.5 46.1 42.8 43.3 44.3 45.0 51.0 49.7 46.7 k=3 55.8 53.6 46.2 47.1 48.2 44.9 44.5 46.3 47.1 52.6 51.0 48.1 k=5 57.1 54.4 47.0 47.0 48.0 46.6 44.8 45.8 48.5 53.1 52.3 48.7 k=10 57.5 55.3 46.3 46.4 47.6 45.6 44.1 46.7 47.7 53.0 51.4 48.4 k=20 59.7 57.2 48.1 46.7 50.0 47.9 46.0 **48.9** 49.6 55.4 53.2 50.3 k=30 60.1 57.4 49.0 47.4 51.1 49.2 **47.1** 48.7 50.1 56.5 54.4 **51.1**
k=1 74.9 75.4 68.1 63.5 68.2 64.0 62.8 65.6 64.8 72.5 71.4 67.6
![6_image_4.png](6_image_4.png)
k=3 77.1 77.1 69.6 65.6 71.1 67.6 65.6 68.4 65.9 74.6 74.4 70.0 k=5 78.1 78.6 69.0 64.4 72.9 68.8 65.9 69.3 66.4 75.8 75.4 70.6 k=10 78.7 79.4 70.5 67.0 72.9 68.3 66.6 70.7 67.2 76.6 75.9 71.5 k=20 **79.0** 79.7 70.7 67.5 72.5 **70.0** 67.5 70.7 68.1 **77.4** 76.3 72.0 k=30 79.0 79.7 71.3 **67.6** 72.8 69.9 68.1 71.1 **69.4** 77.2 76.7 **72.4**
Table 2: Results of topic categorization task on AG News dataset. k is the number of retrieved cross-lingual sample.
![6_image_3.png](6_image_3.png)
![6_image_5.png](6_image_5.png)
MAJ is the majority baseline. Avg is the average accuracy across 10 LRLs. En is the HRL for retrieval. BoR
strategy is adopted.
(a) Zero-Shot Performance (Unlabeled) (b) Language Similarity (c) Zero-Shot Performance (labeled)
guistic features for different languages. Each language is encoded with 5 vectors corresponding to different linguistic features including three typological features (syntax, phonology and phonetic inventory), phylogenetic and geographical features.
In typological vectors, each dimension represents a linguistic property. For example, one dimension of the syntax vector represents the word order feature SVO. If a language has a SVO order, then its syntax vector would have the value 1 on this dimension. Missing values in the typological vectors could have detrimental effects. Therefore we replace them with values predicted from the k most similar typological vectors (Malaviya et al., 2017).
The phylogenetic vector embodies the position of a language in the world language family tree (Harald et al., 2015), while the geographical vector contains the position information of languages w.r.t. their speakers.
Following prior work (Rama et al., 2020), we consider all 5 linguistic features when measuring the language similarity: syntax (SYN), phonology
(PHO), phonological inventory (INV), language family (FAM), and geography (GEO). Given these different types of vectors, we calculate 5 cosine similarities for each pair of source language (i) and target language (j) and average them to get the final language similarity sim(*i, j*):
$$s i m(i,j)={\frac{1}{|{\mathcal{F}}|}}\sum_{f\in{\mathcal{F}}}s(\mathbf{v}_{f}(i),\mathbf{v}_{f}(j))\quad{\mathrm{(6)}}$$
where F is the set of features, vf (i) and vf (j)
stand for the language vectors representing the feature f for i and j, and s(·) computes the minmax normalized cosine similarity of the two vectors. The detailed cosine similarities between English and 10 LRLs evaluated in our experiment are shown in Table 9 in Appendix §B.
## 6.2.2 Correlation Analysis
We conduct a correlation analysis between crosslingual performance and the three language factors mentioned above: language similarity between the
![7_image_0.png](7_image_0.png)
Direct 53.8 36.2 33.1 41.0
![7_image_9.png](7_image_9.png)
mBERT+pooling 53.1 36.9 33.6 41.2 mBERT+distiluse 54.7 38.4 34.0 42.3 mBERT+paraphrase 59.6 46.7 33.7 46.7 XLM-R+paraphrase 70.1 **57.4** 34.7 **54.1**
mBERT+LaBSE 59.4 43.8 **35.1** 46.1 mBERT+pooling 53.6 58.0 33.8 48.5 mBERT+distiluse 62.8 63.8 34.6 53.7 mBERT+paraphrase 72.9 67.6 36.8 59.1 XLM-R+paraphrase **73.0** 76.0 35.7 61.6 mBERT+LaBSE 72.2 80.0 37.5 **63.2**
Amazon AGNews XNLI **Avg.**
![7_image_7.png](7_image_7.png)
![7_image_8.png](7_image_8.png)
source (retrieved) and *target* (input) language, pretraining data size of the source language and of the target language. We use the log value of Wikipedia size to represent the size of pretraining corpus for target and source languages and sim(*i, j*) computed by Eq. (6) to represent the similarity between the source and target language. Four other HRLs
- Chinese, German, Hindi, Cebuano - are selected as source languages in addition to English. We measure the cross-lingual performance of PARC
on the Amazon product review task in both the labeled and the unlabeled settings. Full results can be found in Appendix §D.2.
Table 3 shows the outcome of the correlation analysis. We observe a significant positive correlation between cross-lingual performance and language similarity as well as target language pretraining data size, in both the labeled and the unlabeled setting. The correlation between performance and source language size is not significant. Figure 3 visualizes the correlations and further clarifies the findings by selecting 4 source languages and 4 target languages and showing the cross-lingual performance and similarity between them.
Ig Sn Mt Co Sm
![7_image_1.png](7_image_1.png)
![7_image_2.png](7_image_2.png)
![7_image_3.png](7_image_3.png)
![7_image_5.png](7_image_5.png)
![7_image_6.png](7_image_6.png)
Direct 30.3 32.1 29.8 32.6 30.4 LBk=1 56.5 59.7 63.9 75.0 52.0 k=3 58.1 61.4 65.2 78.2 54.1 k=5 58.8 61.6 65.9 79.8 **55.4**
![7_image_4.png](7_image_4.png)
k=1 36.6 37.3 39.1 42.6 34.4 k=3 34.8 36.2 37.6 40.6 33.9 k=5 34.8 35.3 37.2 40.4 34.1 St Haw Zu Ny **Avg.**
Direct 30.4 27.1 34.4 29.8 30.8 LBk=1 53.5 49.9 58.0 54.9 58.1 k=3 55.5 49.7 58.5 57.0 59.7 k=5 56.8 51.4 58.8 58.0 **60.7**
UNk=1 36.3 31.6 35.6 35.3 36.5 k=3 33.7 31.0 34.3 32.9 35.0 k=5 34.2 30.6 34.0 32.0 34.7
![7_image_10.png](7_image_10.png)
## 6.3 Robustness
In this section, we test the robustness of the PARC
method w.r.t. other cross-lingual retrievers and MPLMs as well as unseen languages.
## 6.3.1 Retriever And Mplm
Apart from the multilingual sentence transformer based on XLM-R ("paraphrase") used in our previous experiments, we explore several other types of cross-lingual retrievers: a "pooling" retriever which averages the last hidden states of the MPLM
and computes the cosine similarity between these pooled sentence representations; "distiluse" retriever, a sentence transformer based on multilingual distilBERT (Sanh et al., 2019); and the
"LaBSE" retriever (Feng et al., 2020), a BERTbased model trained for sentence embedding for 109 languages. As an alternative to mBERT, we also investigate the performance of XLM-R, which has the same architecture as mBERT but is more powerful. We follow the setup described in §4.
Results are shown in Table 4. We can find that even the worst combination - *mBERT+pooling* –
outperforms the *Direct* baseline on average under both labeled and unlabeled settings. If the retriever is replaced by a slightly more powerful one, such as the combination *mBERT+distiluse*, higher accuracies in the unlabeled and labeled setting are achieved, suggesting that our proposed method PARC is robust w.r.t. other cross-lingual retrievers. In the result of *XLM-R+paraphrase*, the obviously better performance of XLM-R in the unlabeled setup shows that a stronger MPLM can
![8_image_0.png](8_image_0.png)
noticeably improve the self-prediction. We expect that an even better performance could be obtained by applying our proposed PARC approach to larger and/or more powerful MPLMs such as InfoXLM
(Chi et al., 2021).
## 6.3.2 Unseen Languages
Our previous experiments show that the LRLs pretrained by MPLMs can benefit well from PARC.
However, popular MPLMs are pretrained only on approx. 100 languages, accounting for a tiny part of all languages in the world (∼100/7000). We wonder if our proposed method could potentially benefit a wider range of LRLs, so we apply PARC to several unseen LRLs, i.e. languages not included in the pretrained corpora of the MPLM. We conduct experiments on a topic categorization task for nine unseen languages. The results in Table 5 show that PARC is also effective for unseen LRLs. It can be observed from the result that PARC is also effective for unseen LRL languages.
## 6.4 Zero-Shot Setting
Different from the cross-lingual transfer paradigm where a MPLM is first finetuned on annotated training data of one language, and then directly applied to the test data of other languages for inference, our proposed approach is employed in the zero-shot setting for LRLs, i.e., the model parameters are not adjusted by finetuning with HRL data. Table 6 shows results from a preliminary experiment where our PARC method combined with a finetuned MPLM
fails to outperform the Direct baseline. When using finetuned MPLM to evaluate with PARC, we do not see sufficient performance improvement. However, without finetuning, PARC performs better in both unlabeled and labeled setup, and PARC-LB without finetuning also outperforms it with finetuning.
## 6.5 Qualitative Analysis
Table 7 shows results of the PARC pipeline for an example from the Amazon review task. The review
Amazon Review
Case #963
Input:
(Used with several loads of laundry. Gentle on the fabric and gentle on my skin.) pos Retrieved: R1: Hard to wash. The fur on top gets all over the sides in the wash. :/ pos R2: Very nice and thick high quality towels. pos R3: Smelled really bad mold! I had to wash them before use. neg Predictions: No retrieval - neg, **k=1 -** neg, **k=3 -** pos in Telugu is positive, but the class predicted without cross-lingual context is negative. The prediction stays the same when a single positive English sample is added as prompt context. When two more English samples are added, the prediction becomes correct.
This case indicates that the retrieved crosslingual samples help the MPLM make a correct decision. Furthermore, more similar HRL samples could rectify the deviation. More cases are shown in Table 10 and Table 11 in Appendix §C.
## 7 Conclusion
We propose PARC, a pipeline that augments prompts for zero-shot learning on low resource languages by retrieving semantically similar crosslingual sentences from HRL corpora. We test PARC on three classification tasks with parallel test sets across 10 LRLs, and it performs better than the baselines in both unlabeled and labeled settings. Increasing the number of retrieved prompts improves performance at first, but deteriorates it after a certain point. A robustness study shows that PARC
also performs well with other cross-lingual retrievers or MPLMs, suggesting potential applications of PARC to a wider scope of scenarios.
## Limitations
The PARC pipeline proposed in this work is designed to improve the cross-lingual transfer performance for low-resource languages in a zero-shot setting. We tested our method on different LRLs contained in MPLMs and also investigate its effectiveness for several unseen languages. These are not included in pretraining corpora of the MPLM
but use a seen script and share some subwords with the seen languages. However, our proposed method is not applicable for unseen languages with new scripts, which restricts its extension towards a wider range of languages. Besides, PARC is a retrieval-based method. More time and computational resources are required in the cross-lingual retrieval phase. Therefore, it is computationally less efficient to use PARC for inference.
## Acknowledgements
This work was supported by European Research Council (\# 740516), Munich Center for Machine Learning (MCML) and China Scholarship Council
(CSC).
## References
Mikel Artetxe and Holger Schwenk. 2019. Massively multilingual sentence embeddings for zeroshot cross-lingual transfer and beyond. *Transactions* of the Association for Computational Linguistics, 7:597–610.
Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. *Advances in neural information processing* systems, 33:1877–1901.
Daniel Cer, Yinfei Yang, Sheng-yi Kong, Nan Hua, Nicole Limtiaco, Rhomni St John, Noah Constant, Mario Guajardo-Cespedes, Steve Yuan, Chris Tar, et al. 2018. Universal sentence encoder for english.
In Proceedings of the 2018 conference on empirical methods in natural language processing: system demonstrations, pages 169–174.
Danqi Chen, Adam Fisch, Jason Weston, and Antoine Bordes. 2017. Reading Wikipedia to answer opendomain questions. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1870–1879, Vancouver, Canada. Association for Computational Linguistics.
Zewen Chi, Li Dong, Furu Wei, Nan Yang, Saksham Singhal, Wenhui Wang, Xia Song, Xian-Ling Mao,
Heyan Huang, and Ming Zhou. 2021. InfoXLM: An information-theoretic framework for cross-lingual language model pre-training. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 3576–3588, Online. Association for Computational Linguistics.
Jishnu Ray Chowdhury, Yong Zhuang, and Shuyi Wang.
2022. Novelty controlled paraphrase generation with retrieval augmented conditional prompt tuning. In AAAI.
Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer, and Veselin Stoyanov. 2020. Unsupervised cross-lingual representation learning at scale. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*, pages 8440–
8451, Online. Association for Computational Linguistics.
Alexis Conneau, Douwe Kiela, Holger Schwenk, Loïc Barrault, and Antoine Bordes. 2017. Supervised learning of universal sentence representations from natural language inference data. In *Proceedings of* the 2017 Conference on Empirical Methods in Natural Language Processing, pages 670–680, Copenhagen, Denmark. Association for Computational Linguistics.
Alexis Conneau, Guillaume Lample, Ruty Rinott, Adina Williams, Samuel R. Bowman, Holger Schwenk, and Veselin Stoyanov. 2018. Xnli: Evaluating crosslingual sentence representations. In Conference on Empirical Methods in Natural Language Processing.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In *Proceedings of the 2019 Conference of* the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics.
Fangxiaoyu Feng, Yinfei Yang, Daniel Matthew Cer, N. Arivazhagan, and Wei Wang. 2020. Languageagnostic bert sentence embedding. In *Annual Meeting of the Association for Computational Linguistics*.
Tianyu Gao, Adam Fisch, and Danqi Chen. 2021.
Making pre-trained language models better few-shot learners. In *Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics* and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers),
pages 3816–3830, Online. Association for Computational Linguistics.
Hammarström Harald, Robert Forkel, Martin Haspelmath, and Sebastian Bank. 2015. glottolog-data:
Glottolog database 2.6.
Junjie Hu, Sebastian Ruder, Aditya Siddhant, Graham Neubig, Orhan Firat, and Melvin Johnson.
2020. Xtreme: A massively multilingual multi-task benchmark for evaluating cross-lingual generalisation. In *International Conference on Machine Learning*, pages 4411–4421. PMLR.
Lianzhe Huang, Shuming Ma, Dongdong Zhang, Furu Wei, and Houfeng Wang. 2022. Zero-shot crosslingual transfer of prompt-based tuning with a unified multilingual prompt. *ArXiv*, abs/2202.11451.
Phillip Keung, Yichao Lu, György Szarvas, and Noah A.
Smith. 2020. The multilingual amazon reviews corpus. In *Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing*.
Anne Lauscher, Vinit Ravishankar, Ivan Vulic, and ´
Goran Glavaš. 2020. From zero to hero: On the limitations of zero-shot cross-lingual transfer with multilingual transformers. *arXiv preprint* arXiv:2005.00633.
Patrick Littell, David R Mortensen, Ke Lin, Katherine Kairis, Carlisle Turner, and Lori Levin. 2017. Uriel and lang2vec: Representing languages as typological, geographical, and phylogenetic vectors. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics:
Volume 2, Short Papers, volume 2, pages 8–14.
Jiachang Liu, Dinghan Shen, Yizhe Zhang, Bill Dolan, Lawrence Carin, and Weizhu Chen. 2022a. What makes good in-context examples for GPT-3? In Proceedings of Deep Learning Inside Out (DeeLIO
2022): The 3rd Workshop on Knowledge Extraction and Integration for Deep Learning Architectures, pages 100–114, Dublin, Ireland and Online. Association for Computational Linguistics.
Yanchen Liu, Timo Schick, and Hinrich Schütze. 2022b.
Semantic-oriented unlabeled priming for large-scale language models. *arXiv preprint arXiv:2202.06133*.
Yinhan Liu, Jiatao Gu, Naman Goyal, Xian Li, Sergey Edunov, Marjan Ghazvininejad, Mike Lewis, and Luke Zettlemoyer. 2020. Multilingual denoising pretraining for neural machine translation. *Transactions of the Association for Computational Linguistics*, 8:726–742.
Yongkang Liu, Shi Feng, Daling Wang, and Yifei Zhang. 2022c. MulZDG: Multilingual codeswitching framework for zero-shot dialogue generation. In *Proceedings of the 29th International Conference on Computational Linguistics*, pages 648–659, Gyeongju, Republic of Korea. International Committee on Computational Linguistics.
Chaitanya Malaviya, Graham Neubig, and Patrick Littell. 2017. Learning language representations for typology prediction. In Conference on Empirical Methods in Natural Language Processing (EMNLP),
Copenhagen, Denmark.
Christopher D Manning, Prabhakar Raghavan, and Hinrich Schütze. 2008. *Introduction to information retrieval*. Cambridge university press.
Chandler May, Alex Wang, Shikha Bordia, Samuel R.
Bowman, and Rachel Rudinger. 2019. On measuring social biases in sentence encoders. *ArXiv*,
abs/1903.10561.
Yingqi Qu, Yuchen Ding, Jing Liu, Kai Liu, Ruiyang Ren, Xin Zhao, Daxiang Dong, Hua Wu, and Haifeng Wang. 2020. Rocketqa: An optimized training approach to dense passage retrieval for open-domain question answering. In *North American Chapter of* the Association for Computational Linguistics.
Taraka Rama, Lisa Beinborn, and Steffen Eger. 2020.
Probing multilingual bert for genetic and typological signals. In *International Conference on Computational Linguistics*.
Nils Reimers and Iryna Gurevych. 2019. Sentence-bert:
Sentence embeddings using siamese bert-networks.
In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics.
Nils Reimers and Iryna Gurevych. 2020. Making monolingual sentence embeddings multilingual using knowledge distillation. In *Proceedings of the* 2020 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics.
Victor Sanh, Lysandre Debut, Julien Chaumond, and Thomas Wolf. 2019. Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. *arXiv* preprint arXiv:1910.01108.
Timo Schick and Hinrich Schütze. 2021. Exploiting cloze-questions for few-shot text classification and natural language inference. In *Proceedings of the* 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 255–269, Online. Association for Computational Linguistics.
Oleh Shliazhko, Alena Fenogenova, Maria Tikhonova, Vladislav Mikhailov, Anastasia Kozlova, and Tatiana Shavrina. 2022. mgpt: Few-shot learners go multilingual. *arXiv preprint arXiv:2204.07580*.
Shuohang Wang, Yichong Xu, Yuwei Fang, Yang Liu, Siqi Sun, Ruochen Xu, Chenguang Zhu, and Michael Zeng. 2022. Training data is more valuable than you think: A simple and effective method by retrieving from training data. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 3170–3179, Dublin, Ireland. Association for Computational Linguistics.
Shuohang Wang, Mo Yu, Xiaoxiao Guo, Zhiguo Wang, Tim Klinger, Wei Zhang, Shiyu Chang, Gerald Tesauro, Bowen Zhou, and Jing Jiang. 2018. R3:
Reinforced ranker-reader for open-domain question answering. In *AAAI*.
Albert Webson and Ellie Pavlick. 2022. Do promptbased models really understand the meaning of their prompts? In *Proceedings of the 2022 Conference of* the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 2300–2344, Seattle, United States.
Association for Computational Linguistics.
Adina Williams, Nikita Nangia, and Samuel Bowman.
2018. A broad-coverage challenge corpus for sentence understanding through inference. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1
(Long Papers), pages 1112–1122. Association for Computational Linguistics.
Genta Indra Winata, Andrea Madotto, Zhaojiang Lin, Rosanne Liu, Jason Yosinski, and Pascale Fung. 2021.
Language models are few-shot multilingual learners.
In *Proceedings of the 1st Workshop on Multilingual* Representation Learning, pages 1–15, Punta Cana, Dominican Republic. Association for Computational Linguistics.
Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, et al. 2020. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 conference on empirical methods in natural language processing: system demonstrations, pages 38–45.
Shijie Wu and Mark Dredze. 2020. Are all languages created equal in multilingual BERT? In *Proceedings* of the 5th Workshop on Representation Learning for NLP, pages 120–130, Online. Association for Computational Linguistics.
Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, and Colin Raffel. 2021. mT5: A massively multilingual pre-trained text-to-text transformer. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics:
Human Language Technologies, pages 483–498, Online. Association for Computational Linguistics.
Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q.
Weinberger, and Yoav Artzi. 2020. Bertscore:
Evaluating text generation with bert. *ArXiv*,
abs/1904.09675.
Xiang Zhang, Junbo Jake Zhao, and Yann LeCun. 2015.
Character-level convolutional networks for text classification. *ArXiv*, abs/1509.01626.
Mengjie Zhao and Hinrich Schütze. 2021. Discrete and soft prompting for multilingual models. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 8547–8555, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
## A Effect Of Translations
In our experiment, we use multilingual parallel test sets created by machine translation from English to target low-resource languages. To explore the effect of machine translation-created test sets, we compare the cross-lingual transfer performance on translation test sets and original test sets of XNLI. The original XNLI datasets include two lowresource languages that we used in our experiments, i.e., Swahili (sw) and Urdu (ur). We also measure the translation quality by using the original sets as gold standard. The analysis results (Table 8) suggests that machine translated test sets are useful as a proxy for evaluating cross-lingual performance on LRLs.
Performance
MT Acc. 34.00 33.92 OV Acc. 34.07 33.87
Diff 0.07 -0.05
P-Value 0.85 0.92
Translation Quality
BLEU 56.39 64.96
chrF 49.58 59.89 Sim. 81.82 81.19
## B Language Features
| Languages | sw | ur |
|---------------------------------|------|------|
| Performance Translation Quality | | |
Table 9 shows the language features of all 10 LRLs evaluated in our experiments. Language similarity refers to the similarity between each LRL and English. SIM score is computed by Eq. (6). WikiSize is the log value of the Wikipedia size in MB.
## C Case Study
Table 10 shows two examples from the Amazon Review task. We compare the predictions for three scenarios: no retrieval information (i.e., Direct baseline, see §4.2), one retrieved sample, and three retrieved samples. Similarly, Table 11 shows the same comparison on the AG News task.
## D Detailed Results D.1 Results For Each Task
We show the detailed experimental results for all tasks in Table 12 (Amazon reviews), Table 13 (AG
News) and Table 14 (XNLI), respectively.
Lang Language Similarity **Wiki**
SYN PHO INV FAM GEO SIM **Size**
Af 84.9 60.3 38.4 50.4 33.1 53.4 6
![12_image_0.png](12_image_0.png)
![12_image_1.png](12_image_1.png)
![12_image_2.png](12_image_2.png)
![12_image_3.png](12_image_3.png)
![12_image_4.png](12_image_4.png)
![12_image_5.png](12_image_5.png)
![12_image_6.png](12_image_6.png)
Jv 48.0 39.2 52.7 0.0 0.0 28.0 5 Mn 31.0 100.0 39.4 0.0 56.8 45.4 5 My 17.4 80.3 100.0 0.0 37.6 47.1 5 Ta 28.9 60.3 51.5 0.0 72.7 42.7 7 Te 36.0 56.2 31.3 0.0 45.2 33.7 7 Tl 35.0 70.5 26.7 0.0 38.8 34.2 6 Sw 27.0 87.0 62.1 0.0 57.2 46.6 5 Ur 50.2 72.0 47.1 12.6 62.5 48.9 7 Uz 39.8 75.6 24.1 0.0 73.7 42.6 6
Table 9: List of language features of the 10 LRLs that we evaluate.
Amazon Review Case 1 \#37 Input:
(Very dry on my hair.) neg Retrieved:
R1: It's a little bit too greasy in my opinion. Doesn't really seem to soak into the hair very well. pos R2: The tiniest amount leaves my hair stringy and oily. neg R3: could smell this stuff all day but I don't feel like it moisturizes my skin enough, and my skin isn't overly dry to begin with. pos Predictions: No retrieval - pos, **k=1 -** neg, **k=3 -** neg Case 2 \#963 Input: (Used with several loads of laundry. Gentle on the fabric and gentle on my skin.) pos Retrieved:
R1: Hard to wash. The fur on top gets all over the sides in the wash. :/ pos R2: Very nice and thick high quality towels. pos R3: Smelled really bad mold! I had to wash them before use. neg Predictions: No retrieval - neg, **k=1 -** neg, **k=3 -** pos Table 10: PARC examples for Amazon Review task.
## D.2 Detailed Data For Correlation Analysis
Table 16 shows the detailed data used for correlation analysis of language similarity, high- and low-resource language pretraining data size with cross-lingual performance in the unlabeled setting as well as labeled setting.
## D.3 **Complete Results For Robustness Analysis**
Table 17 shows the results of each language using different combinations of retriever and MPLM for validating the robustness on three tasks.
AG News
## Case 1 #1939
Input: (Flower Power A Japanese company has come up with a way to turn flowers into amplifiers. ) Tech Retrieved: R1: Japanese firms step up spending Japanese firms continue to spend on new equipment and production plants, a survey finds, underlining a continuing recovery in the world's second-largest economy. Business R2: IBM, Honda deliver in-car speech-recognition navigation system IBM and Honda have jointly developed a hands-free and natural sounding in-vehicle speechrecognition system that will be offered as standard equipment on the 2005 Acura RL Tech R3: Scientists Make Phone That Turns Into a Sunflower
(Reuters) Reuters - Scientists said on Monday they have come up with a cell phone cover that will grow into a sunflower when thrown away. Tech Predictions: No retrieval - World, **k=1 -** Tech, k=3 - Tech
## Case 2 #1302
Input:
(Movies in a Snap: Netflix and TiVo Discuss Downloads Bee Staff Writer. The high-tech terrain is shifting underfoot amid rumblings of a new Silicon Valley alliance that would allow the owners of TiVo Inc. ) Business Retrieved:
R1: NETFLIX, TIVO HOOKUP CLOSE Netflix and TiVo are in late-stage talks on a partnership that would let subscribers use the Internet to download Netflix movies directly into their TiVo box, The Post has learned. Business R2: TiVo and NetFlix: Picture-Perfect Duo? With TiVo
(TIVO) and NetFlix (NFLX ) finally announcing a longrumored partnership to launch a video-on-demand service sometime next year, investors smiled on the deal that will keep the two popular, but under-fire, innovators ahead of competitors. Tech R3: New Treo and more unveiled at CTIA CTIA stands for the Cellular Telecommunications and Internet Association. Each year they host two shows for the industry. This week is their fall Wireless IT and Entertainment expo in San Francisco. Business Predictions: No retrieval - World, **k=1 -** Tech, k=3 - Business Table 11: PARC examples for AG News task
| Unlabeled labeled Unlabeled labeled Unlabeled labeled Unlabeled labeled |
|---------------------------------------------------------------------------|
pattern 0 [X] [MASK]
pattern 1 It was [MASK]. [X]
pattern 2 [X] All in all, it was [MASK]. pattern 3 Just [MASK]! [X]
pattern 4 [X] In summary, the product is [MASK].
en af ur
p0 p1 p2 p3 p4 p0 p1 p2 p3 p4 p0 p1 p2 p3 p4
MAJ 50.0 50.0 50.0 50.0 50.0 50.0 50.0 50.0 50.0 50.0 50.0 50.0 50.0 50.0 50.0 Direct 50.5 54.3 58.9 53.7 52.6 53.3 50.7 50.4 49.8 51.5 49.9 51.7 54.6 49.9 50.3 Unlabeled
k=1 50.9 55.4 59.1 51.9 52.6 51.0 54.9 57.9 52.9 52.8 **51.6 56.7 60.0 52.2 52.2**
k=3 50.7 53.7 57.7 50.8 50.4 50.4 52.5 56.2 50.7 51.0 51.3 52.9 57.1 50.8 50.9
k=5 50.8 52.2 56.0 50.3 50.9 50.8 52.2 55.0 50.2 50.6 51.2 52.5 56.4 50.3 50.7 k=10 50.7 51.9 56.0 50.0 50.6 50.7 52.0 55.8 50.2 50.7 51.4 52.4 55.5 50.0 50.3
k=20 50.5 50.8 53.6 49.9 50.1 50.5 51.1 53.5 50.0 50.2 51.1 51.2 54.0 49.8 50.0
k=1 **60.0** 82.4 82.4 82.3 82.4 66.0 79.0 79.2 79.2 79.2 **57.0** 80.4 80.6 80.6 80.6
k=3 58.5 86.2 86.2 86.2 86.2 65.0 80.7 81.1 81.1 81.0 56.4 83.8 84.3 84.3 84.3 k=5 57.3 87.2 87.2 87.2 87.2 65.4 82.7 82.9 82.9 82.8 56.2 84.6 85.0 85.0 85.0
k=10 57.7 88.9 88.9 88.9 88.9 **66.5** 85.2 85.4 85.4 85.4 56.6 87.0 87.3 87.3 87.3 k=20 56.4 **89.5 89.5 89.5 89.5** 64.3 85.3 **85.7 85.7 85.6** 55.4 **87.6 87.9 87.9 88.0**
k=30 56.3 88.9 88.9 88.9 88.9 63.6 85.4 85.6 85.6 85.6 55.7 87.4 87.6 87.6 87.6
sw te ta
p0 p1 p2 p3 p4 p0 p1 p2 p3 p4 p0 p1 p2 p3 p4
MAJ 50.0 50.0 50.0 50.0 50.0 50.0 50.0 50.0 50.0 50.0 50.0 50.0 50.0 50.0 50.0
Direct 47.3 50.2 51.9 49.9 50.3 50.8 52.5 53.9 49.9 51.4 54.1 59.0 56.2 50.5 51.9
Unlabeled
k=1 51.4 50.4 50.5 50.5 50.1 51.6 54.8 57.5 52.3 52.1 **57.1 55.3 57.2 52.6 51.6**
k=3 50.5 50.3 50.3 50.1 50.1 51.3 52.8 55.3 50.6 51.3 55.7 52.5 55.0 50.5 50.6 k=5 50.6 50.1 50.0 50.1 50.1 51.6 51.7 54.0 50.4 50.3 56.1 51.4 54.0 50.1 50.1
k=10 50.8 50.1 50.0 50.1 50.1 51.8 52.1 53.5 50.4 50.3 57.3 51.5 53.9 50.0 50.1
k=20 50.5 50.1 50.0 50.1 50.1 51.4 50.6 52.9 50.0 50.0 56.9 50.5 52.9 50.0 50.0
k=1 50.5 50.0 49.9 49.9 49.9 **58.2** 75.9 75.8 75.8 75.8 68.1 75.3 75.4 75.4 75.4
k=3 51.0 54.1 54.1 54.1 54.1 58.0 78.4 78.4 78.4 78.4 70.2 79.1 79.3 79.3 79.2
k=5 50.7 54.4 54.4 54.4 54.4 56.8 79.1 79.0 79.0 79.1 70.7 80.5 80.5 80.5 80.5
k=10 **51.3 55.5 55.5 55.5 55.5** 57.2 81.3 81.6 81.6 81.6 **70.9 83.7 83.9 83.9 83.9**
k=20 50.9 54.3 54.4 54.4 54.4 56.9 **82.0 82.1 82.1 82.1** 70.8 82.8 83.1 83.1 83.1
k=30 50.7 54.3 54.3 54.3 54.3 56.8 82.0 82.0 82.0 82.0 70.5 83.3 83.5 83.4 83.4
mn uz my
p0 p1 p2 p3 p4 p0 p1 p2 p3 p4 p0 p1 p2 p3 p4
MAJ 50.0 50.0 50.0 50.0 50.0 50.0 50.0 50.0 50.0 50.0 50.0 50.0 50.0 50.0 50.0
Direct 49.1 49.7 51.4 49.7 50.0 48.5 50.2 52.4 49.7 51.2 **54.4 56.1 56.1** 50.5 **52.6**
Unlabeled
k=1 **51.1 54.7 58.6 52.6 52.8** 50.4 **53.1 53.6 51.8** 50.9 53.0 53.9 56.0 **52.3** 52.0
k=3 50.2 53.2 56.4 51.0 51.1 50.5 51.9 52.1 50.2 50.3 53.0 51.5 55.0 51.2 50.7
k=5 50.2 52.0 55.3 50.4 50.5 50.5 50.3 50.7 50.0 50.2 52.9 51.1 53.6 50.5 50.3
k=10 50.4 52.2 56.3 **50.6** 50.5 50.6 50.3 50.6 50.1 50.0 53.4 51.1 54.2 50.2 50.1
k=20 50.4 51.1 54.5 50.0 50.0 50.5 50.0 50.7 50.0 50.0 53.2 50.5 52.8 50.0 50.0
k=1 60.8 74.9 74.9 74.9 74.9 **56.0** 65.0 64.7 64.7 64.7 65.3 73.9 73.8 73.8 73.8
k=3 60.3 79.5 79.7 79.7 79.7 55.2 65.3 65.2 65.2 65.2 66.6 77.5 77.7 77.7 77.7 k=5 59.7 80.6 80.6 80.6 80.6 55.5 66.1 66.0 66.0 65.8 65.8 78.6 78.9 78.9 78.9
k=10 **62.2 83.9 84.3 84.3 84.3** 55.9 68.1 68.2 68.2 68.3 **67.8** 80.9 81.1 81.1 81.1
k=20 60.3 82.5 83.2 83.2 83.2 53.8 67.0 67.1 67.1 67.1 67.4 **81.8 81.8 81.8 81.8**
k=30 59.7 83.3 83.8 83.8 83.8 54.4 67.5 67.7 67.7 67.7 67.6 81.7 81.8 81.8 81.8
jv tl **Avg.**
p0 p1 p2 p3 p4 p0 p1 p2 p3 p4 p0 p1 p2 p3 p4
MAJ 50.0 50.0 50.0 50.0 50.0 50.0 50.0 50.0 50.0 50.0 50.0 50.0 50.0 50.0 50.0
Direct **50.9** 52.3 54.1 50.1 **52.3** 49.6 50.4 **51.9** 50.0 **51.2** 50.8 52.5 53.8 50.3 51.4
Unlabeled
k=1 50.6 **53.0** 54.2 **50.9** 50.5 **50.4 50.6** 50.9 50.1 50.2 **51.7 53.9 56.0 51.8 51.6** k=3 50.2 51.7 **53.5** 50.4 50.3 50.0 50.3 50.3 **50.2** 50.0 51.2 52.1 54.4 50.6 50.6
k=5 50.2 50.9 52.9 50.1 50.2 50.1 50.2 50.1 50.0 50.1 51.4 51.3 53.5 50.2 50.4
k=10 50.1 50.7 52.5 49.9 50.0 50.2 50.0 50.3 50.0 50.0 51.6 51.3 53.5 50.1 50.2
k=20 50.5 50.1 51.7 50.0 50.0 50.2 50.0 50.4 50.0 50.0 51.4 50.5 52.5 50.0 50.0
k=1 **54.1** 59.3 59.3 59.3 59.3 52.4 55.4 55.4 55.4 55.4 58.9 70.1 68.9 70.1 70.1
k=3 52.7 61.6 61.6 61.6 61.6 52.1 57.7 57.7 57.7 57.7 58.7 73.1 73.2 73.2 73.2
k=5 52.8 61.5 61.5 61.5 61.5 51.6 60.2 60.2 60.2 60.1 58.4 74.1 74.2 74.2 74.2
k=10 51.6 62.6 62.6 62.6 62.6 52.4 63.2 63.3 63.3 63.3 **59.1 76.4 76.5 76.5 76.5**
k=20 51.6 61.5 61.5 61.5 61.5 51.5 62.8 62.9 62.9 62.9 58.1 76.1 76.3 76.3 76.3
k=30 51.6 60.9 61.0 61.0 61.0 51.5 62.3 62.4 62.4 62.4 58.0 76.1 76.2 76.2 76.2
Table 12: Results on Amazon reviews dataset.
pattern 0 [X] [MASK] pattern 1 [MASK]: [X]
pattern 2 [MASK] News: [X]
pattern 3 [X] Category: [MASK]
en af ur
p0 p1 p2 p3 p0 p1 p2 p3 p0 p1 p2 p3
MAJ 25.0 25.0 25.0 25.0 25.0 25.0 25.0 25.0 25.0 25.0 25.0 25.0
Direct 52.5 47.8 **47.3** 53.0 41.8 41.3 40.2 **57.8** 27.4 32.4 33.0 **53.5**
Unlabeled
k=1 53.7 47.6 45.6 53.2 52.8 **46.8 46.2** 53.2 46.2 **41.8 41.0** 49.7
k=3 55.8 47.6 43.4 54.3 53.6 46.5 44.3 54.3 46.2 40.5 38.2 49.9
k=5 57.1 **48.3** 41.7 **55.6** 54.4 46.9 43.7 55.1 47.0 40.9 37.2 51.4
k=10 57.5 45.7 41.9 55.3 55.3 44.6 42.3 55.6 46.3 38.3 35.3 51.9
k=20 **59.7** 46.7 41.5 55.3 **57.2** 45.9 42.2 56.1 **48.1** 39.7 35.5 51.6
labeled
k=1 74.9 83.5 83.8 83.8 75.4 81.2 82.9 82.7 68.1 76.9 78.8 78.7
k=3 77.1 86.5 86.8 86.7 77.1 84.3 85.4 85.2 69.6 79.4 81.7 81.8
k=5 78.1 87.7 88.0 87.9 78.6 86.8 87.1 87.1 69.0 79.9 82.7 82.7
k=10 78.7 88.2 88.5 88.5 79.4 87.2 87.7 87.5 70.5 81.5 **83.6 83.4** k=20 79.0 89.1 89.4 89.4 79.7 87.4 87.8 87.5 **70.7 81.6** 83.3 83.2
sw te ta
p0 p1 p2 p3 p0 p1 p2 p3 p0 p1 p2 p3
MAJ 25.0 25.0 25.0 25.0 25.0 25.0 25.0 25.0 25.0 25.0 25.0 25.0
Direct 42.5 37.6 33.3 **56.6** 32.2 37.2 32.5 **55.4** 31.3 37.2 28.6 55.1
Unlabeled
k=1 46.5 **42.1 42.0** 46.4 46.1 **41.5 43.3** 48.6 42.8 **41.6 39.2** 47.6 k=3 **47.1** 41.2 39.9 47.9 **48.2** 40.0 42.4 50.3 44.9 41.0 36.9 50.1
k=5 47.0 41.5 39.3 48.6 48.0 40.4 41.0 52.4 46.6 39.8 36.0 50.9
k=10 46.4 38.5 37.0 50.0 47.6 39.0 39.3 51.8 45.6 37.8 33.9 51.5
k=20 46.7 39.1 36.9 49.9 50.0 40.1 39.7 51.6 **47.9** 38.8 34.7 **52.5**
labeled
k=1 63.5 68.4 70.3 70.3 68.2 73.9 75.0 75.0 64.0 69.7 71.5 71.5 k=3 65.6 70.8 72.3 72.4 71.1 77.6 78.2 78.2 67.6 74.4 75.7 75.7
k=5 64.4 72.2 73.5 73.4 **72.9** 79.7 79.9 79.8 68.8 75.8 76.6 76.5 k=10 67.0 72.5 **74.1 73.9** 72.9 79.9 80.0 80.0 68.3 76.5 77.2 77.1
k=20 **67.5 72.7** 73.6 73.6 72.5 80.2 80.6 80.6 **70.0 77.5 78.1 78.2**
mn uz my
p0 p1 p2 p3 p0 p1 p2 p3 p0 p1 p2 p3
MAJ 25.0 25.0 25.0 25.0 25.0 25.0 25.0 25.0 25.0 25.0 25.0 25.0
Direct 31.5 30.9 32.0 47.3 33.0 37.5 33.8 50.7 31.6 37.4 33.7 51.9 Unlabeled
k=1 43.3 **42.5 41.5** 48.2 44.3 **44.4 42.3** 49.0 45.0 43.9 **43.6** 50.0 k=3 44.5 41.2 40.5 51.1 46.3 42.2 40.7 50.9 47.1 **44.5** 41.7 53.7
k=5 44.8 41.5 39.6 51.8 45.8 41.7 39.2 52.3 48.5 43.8 41.4 54.2
k=10 44.1 39.7 38.0 **53.3** 46.7 39.7 37.9 **53.4** 47.7 41.4 40.0 **54.4** k=20 **46.0** 39.7 37.9 52.8 **48.9** 41.2 36.9 53.1 **49.6** 42.2 40.3 53.6
labeled
k=1 62.8 70.9 72.7 72.8 65.6 71.5 73.2 73.3 64.8 76.2 77.4 77.2
k=3 65.6 75.4 77.3 77.2 68.4 73.6 75.7 75.7 65.9 79.5 80.1 79.8 k=5 65.9 75.8 78.0 77.9 69.3 76.1 77.9 77.8 66.4 81.4 82.5 81.8
k=10 66.6 77.0 **78.7 78.6** 70.7 76.4 78.3 78.2 67.2 82.4 82.9 82.3
k=20 **67.5 77.4** 78.2 78.0 70.7 77.3 78.8 78.7 **68.1 83.1 83.6 83.3**
jv tl Avg
p0 p1 p2 p3 p0 p1 p2 p3 p0 p1 p2 p3
MAJ 25.0 25.0 25.0 25.0 25.0 25.0 25.0 25.0 25.0 25.0 25.0 25.0
Direct 46.9 39.3 38.0 **59.3** 44.8 44.4 42.6 **60.4** 37.8 38.4 36.2 50.9
Unlabeled
k=1 51.0 **45.5 45.4** 51.6 49.7 **45.8 43.7** 52.2 47.4 **44.2 43.5** 48.9
k=3 52.6 44.6 42.0 53.5 51.0 45.3 42.7 54.0 48.8 43.6 41.9 50.3
k=5 53.1 44.5 41.3 53.6 52.3 45.2 41.8 54.2 49.5 43.7 41.2 51.0 k=10 53.0 42.4 39.9 54.0 51.4 44.0 39.8 54.9 49.2 41.7 39.7 51.2
k=20 **55.4** 42.8 40.1 54.2 **53.2** 44.4 38.9 55.3 **51.1** 42.6 39.9 **51.4**
labeled
k=1 72.5 77.8 79.1 79.1 71.4 76.6 78.9 79.0 68.3 74.6 75.9 75.9 k=3 74.6 80.5 82.3 82.3 74.4 80.7 82.1 82.2 70.6 77.8 78.9 78.9 k=5 75.8 81.3 82.8 82.8 75.4 81.2 83.4 83.5 71.3 79.1 80.2 80.1
k=10 76.6 82.0 84.0 84.2 75.9 82.4 **84.5 84.6** 72.1 79.8 80.9 80.8
k=20 77.4 82.8 84.6 84.8 **76.3 82.8** 84.0 84.0 **72.6 80.4 81.1 81.1**
Table 13: Results on AG News dataset.
pattern 0 [X1] [MASK] [X2] pattern 1 [X1]? [MASK], [X2] (Yes - No)
pattern 2 [X1]? [MASK], [X2] (Right - Wrong)
en af ur sw
p0 p1 p2 p0 p1 p2 p0 p1 p2 p0 p1 p2
MAJ 33.3 33.3 33.3 33.3 33.3 33.3 33.3 33.3 33.3 33.3 33.3 33.3
Direct 33.3 **34.2** 34.3 33.2 33.0 33.4 **33.6** 34.0 33.2 33.2 32.2 33.1
Unlabeled
k=1 **34.1** 33.7 34.5 34.0 34.1 33.7 32.4 **35.3** 32.7 33.5 **33.7 33.7**
k=3 33.7 **34.1** 34.3 33.0 32.9 34.1 33.3 34.0 33.9 **33.6** 33.0 33.5
k=5 31.9 33.7 34.3 32.5 32.8 33.9 31.2 34.1 33.6 33.2 32.7 32.9
k=10 31.9 33.6 33.3 31.9 33.3 32.6 32.2 34.2 33.2 33.0 32.7 32.5
k=20 32.0 34.4 33.3 31.6 33.6 34.1 31.6 34.4 33.9 33.1 33.1 32.0
labeled
k=1 38.9 39.1 38.8 38.7 38.9 38.1 37.0 37.4 36.7 33.3 33.4 33.4 k=3 39.2 39.1 38.6 37.9 37.9 37.4 37.0 37.8 36.8 33.7 33.5 33.7 k=5 40.0 39.8 39.5 38.0 38.0 37.1 40.2 40.6 39.8 32.7 32.5 32.6 k=10 41.5 41.6 40.9 41.1 41.1 40.5 42.0 42.4 41.0 33.7 33.7 34.1
k=20 44.5 44.1 43.5 42.3 43.0 41.3 42.4 43.4 42.2 **35.9 35.7 35.9**
te ta mn uz
p0 p1 p2 p0 p1 p2 p0 p1 p2 p0 p1 p2
MAJ 33.3 33.3 33.3 33.3 33.3 33.3 33.3 33.3 33.3 33.3 33.3 33.3
Direct 31.9 33.0 33.2 32.4 34.1 32.9 **33.0** 32.7 32.6 **33.3** 33.3 32.9
Unlabeled
k=1 **34.1** 34.1 34.1 **34.5** 34.3 33.3 32.8 33.6 **34.7** 33.2 33.9 32.8 k=3 32.8 34.9 33.4 33.7 34.7 **34.2** 32.2 **34.5** 33.7 32.3 34.5 33.4 k=5 32.9 **35.1** 33.8 32.9 34.3 33.9 31.9 33.9 34.1 33.1 **34.5 33.9**
k=10 32.0 34.1 32.7 32.3 34.7 32.5 30.8 34.1 32.5 32.8 33.9 32.6
k=20 31.5 34.6 32.7 32.5 **34.8** 32.9 32.0 34.1 33.4 32.6 33.5 32.6
labeled
k=1 37.8 38.1 37.7 37.7 38.0 37.0 36.5 36.5 36.5 35.5 34.8 35.0
k=3 38.9 39.5 38.4 38.7 39.4 37.5 39.1 39.1 38.9 35.1 34.7 34.7
k=5 37.5 37.1 35.9 38.3 38.7 36.3 37.1 36.9 36.9 36.0 35.9 35.9 k=10 39.2 39.5 37.9 41.1 40.8 38.0 39.5 39.3 39.3 38.3 37.9 37.8
k=20 41.2 41.5 39.3 42.7 43.1 39.7 40.3 40.2 40.0 **40.0 39.9 39.6**
my jv **tl Avg**
p0 p1 p2 p0 p1 p2 p0 p1 p2 p0 p1 p2
MAJ 33.3 33.3 33.3 33.3 33.3 33.3 33.3 33.3 33.3 33.3 33.3 33.3
Direct **33.7** 33.6 33.7 **33.3** 33.3 33.6 33.3 33.5 32.3 33.1 33.3 33.1
Unlabeled
k=1 33.3 33.5 **33.8** 32.4 32.0 33.3 33.8 32.7 32.8 **33.4** 33.7 33.5 k=3 32.6 33.9 33.7 32.1 31.4 34.2 33.7 **33.9 33.3** 32.9 **33.7 33.7**
k=5 32.5 **34.3** 33.6 32.4 31.6 34.3 **34.1** 33.5 32.1 32.7 33.6 33.6
k=10 30.5 33.9 33.3 32.1 32.6 33.5 33.2 33.1 32.6 32.1 33.5 32.8
k=20 30.9 33.5 32.7 30.8 **33.6 34.7** 32.9 32.5 33.1 32.0 33.6 33.2
labeled
k=1 36.8 36.7 36.1 34.2 33.5 33.3 34.7 34.4 34.3 36.2 36.2 35.8
k=3 36.7 36.9 36.2 34.6 33.9 33.9 35.7 35.7 35.7 36.7 36.8 36.3
k=5 37.7 37.7 37.3 **35.2 34.8 34.6** 35.7 35.7 35.3 36.9 36.8 36.2
k=10 39.5 39.3 38.1 34.7 34.4 33.6 37.2 36.9 36.9 38.6 38.5 37.7
k=20 **41.7 41.3 39.6** 32.8 32.8 32.4 37.4 37.0 37.0 **39.7 39.8 38.7**
UN
LB
![16_image_0.png](16_image_0.png)
![16_image_1.png](16_image_1.png)
![16_image_2.png](16_image_2.png)
En Af Jv Mn My Sw Ta Te Tl Ur Uz Avg
MAJ 25.0 25.0 25.0 25.0 25.0 25.0 25.0 25.0 25.0 25.0 25.0 25.0
Direct 52.5 41.8 27.4 42.5 32.2 31.3 31.5 33.0 31.6 46.9 44.8 36.3
k=1 53.7 52.8 46.2 46.5 46.1 42.8 43.3 44.3 45.0 51.0 49.7 46.7
k=3 BoR 55.8 53.6 46.2 47.1 48.2 44.9 44.5 46.3 47.1 52.6 51.0 48.1
CONC 53.5 52.4 45.9 44.9 44.8 42.9 41.7 46.6 46.0 52.0 51.6 46.9
k=5 BoR 57.1 54.4 47.0 47.0 48.0 46.6 44.8 45.8 48.5 53.1 52.3 48.7
CONC 53.5 48.0 38.2 41.3 36.3 36.9 39.5 41.4 42.9 50.5 49.6 42.4
k=10 BoR 57.5 55.3 46.3 46.4 47.6 45.6 44.1 46.7 47.7 53.0 51.4 48.4
CONC 46.4 41.1 36.2 38.3 36.6 34.9 34.6 35.8 40.7 46.3 45.0 38.9
k=20 BoR 59.7 57.2 48.1 46.7 50.0 47.9 46.0 48.9 49.6 55.4 53.2 50.3
CONC 50.0 48.4 42.3 41.4 43.3 43.1 39.3 44.3 48.1 47.9 48.4 44.6
k=30 BoR 60.1 57.4 49.0 47.4 51.1 49.2 47.1 48.7 50.1 56.5 54.4 51.1
CONC 50.7 47.6 43.9 38.2 42.9 42.5 41.8 44.5 47.7 47.1 47.3 44.3
k=1 74.9 75.4 68.1 63.5 68.2 64.0 62.8 65.6 64.8 72.5 71.4 67.6
k=3 BoR 77.1 77.1 69.6 65.6 71.1 67.6 65.6 68.4 65.9 74.6 74.4 70.0
CONC 75.6 74.8 67.3 63.1 60.3 59.0 60.5 67.1 65.9 73.3 72.4 66.4
k=5 BoR 78.1 78.6 69.0 64.4 72.9 68.8 65.9 69.3 66.4 75.8 75.4 70.6
CONC 74.6 66.5 48.2 53.9 44.9 45.4 52.1 59.5 56.0 70.9 63.6 56.1
k=10 BoR 78.7 79.4 70.5 67.0 72.9 68.3 66.6 70.7 67.2 76.6 75.9 71.5
CONC 61.2 52.7 43.2 48.0 44.5 42.5 41.3 45.0 50.1 62.3 56.7 48.6
k=20 BoR 79.0 79.7 70.7 67.5 72.5 70.0 67.5 70.7 68.1 77.4 76.3 72.0
CONC 67.4 65.1 55.8 55.6 57.6 58.3 51.2 61.0 62.8 66.4 66.0 60.0
k=30 BoR 79.0 79.7 71.3 67.6 72.8 69.9 68.1 71.1 69.4 77.2 76.7 72.4
CONC 72.8 71.1 62.1 57.0 61.6 60.4 57.9 67.9 64.6 71.6 69.3 64.3
Performance Language Similarity **WikiSize**
Unlabeled labeled SYN PHO INV FAM GEO SIM source target
en-af 79.2 62.0 84.9 60.3 38.4 50.4 33.1 53.4 14 6 en-ur 80.6 63.4 50.2 72.0 47.1 12.6 62.5 48.9 14 7 en-sw 49.9 51.0 27.0 87.0 62.1 0.0 57.2 46.6 14 5
en-te 75.8 60.1 36.0 56.2 31.3 0.0 45.2 33.7 14 7
en-ta 75.4 60.2 28.9 60.3 51.5 0.0 72.7 42.7 14 7
en-mn 74.9 62.9 31.0 100.0 39.4 0.0 56.8 45.4 14 5
en-uz 64.7 54.9 39.8 75.6 24.1 0.0 73.7 42.6 14 6
en-my 73.8 60.3 17.4 80.3 100.0 0.0 37.6 47.1 14 5 en-jv 59.3 55.3 48.0 39.2 52.7 0.0 0.0 28.0 14 5
en-tl 55.4 53.5 35.0 70.5 26.7 0.0 38.8 34.2 14 6
de-af 71.6 56.5 87.1 33.1 90.3 77.2 43.1 66.2 12 6 de-ur 77.5 58.5 50.7 68.3 45.8 15.4 72.6 50.6 12 7 de-sw 50.6 48.9 29.5 33.1 36.2 0.0 66.7 33.1 12 5
de-te 71.2 55.7 45.6 29.4 5.2 0.0 56.5 27.3 12 7
de-ta 76.3 57.6 43.0 56.7 48.7 0.0 81.3 45.9 12 7 de-mn 74.7 59.1 44.4 68.3 42.8 0.0 61.8 43.4 12 5
de-uz 62.8 55.1 48.3 91.9 27.8 0.0 81.1 49.8 12 6
de-my 72.0 59.3 31.3 29.9 63.9 0.0 47.5 34.5 12 5 de-jv 60.0 50.9 41.5 14.4 32.5 0.0 10.3 19.8 12 5 de-tl 54.5 52.1 48.1 42.1 0.0 0.0 50.8 28.2 12 6 zh-af 70.4 58.6 53.9 9.5 25.2 0.0 12.1 20.1 11 6 zh-ur 75.1 62.8 59.0 43.5 36.3 0.0 82.6 44.3 11 7 zh-sw 53.9 51.5 5.7 33.1 27.0 0.0 27.6 18.7 11 5
zh-te 72.4 60.3 49.9 29.4 4.5 0.0 86.7 34.1 11 7
zh-ta 73.0 61.8 19.0 56.7 16.8 0.0 40.5 26.6 11 7 zh-mn 71.6 60.4 56.5 43.5 8.7 0.0 99.0 41.5 11 5
zh-uz 62.5 54.9 49.0 69.3 26.2 0.0 87.2 46.3 11 6
zh-my 69.6 59.3 42.5 71.8 32.7 37.8 95.7 56.1 11 5 zh-jv 59.8 54.3 41.1 42.1 31.4 0.0 85.1 39.9 11 5 zh-tl 54.7 52.4 44.7 14.4 6.9 0.0 83.4 29.9 11 6
hi-af 78.2 59.0 55.4 50.1 30.8 14.3 52.3 40.6 7 6
hi-ur 80.0 57.8 100.0 88.1 73.0 100.0 99.9 92.2 7 7 hi-sw 50.7 50.5 27.4 24.6 24.9 0.0 66.9 28.8 7 5
hi-te 72.7 58.4 74.7 74.4 67.2 0.0 100.0 63.3 7 7
hi-ta 74.2 57.0 48.9 50.1 36.8 0.0 75.8 42.3 7 7 hi-mn 74.6 57.7 57.9 61.3 31.2 0.0 89.4 48.0 7 5 hi-uz 64.0 50.8 57.8 64.8 45.6 0.0 97.2 53.1 7 6 hi-my 74.3 58.7 36.7 46.7 37.5 0.0 97.6 43.7 7 5 hi-jv 59.4 48.7 21.2 0.0 13.6 0.0 79.6 22.9 7 5 hi-tl 56.6 52.9 73.1 59.8 41.3 0.0 98.2 54.5 7 6
ceb-af 63.9 58.1 42.4 44.1 52.5 0.0 8.9 29.6 11 6
ceb-ur 68.7 57.1 29.3 84.3 22.5 0.0 62.9 39.8 11 7 ceb-sw 53.4 49.2 33.0 16.1 76.3 0.0 12.0 27.5 11 5
ceb-te 69.3 59.0 4.8 98.6 17.9 0.0 75.9 39.4 11 7
ceb-ta 66.3 55.8 22.4 72.1 63.0 0.0 16.6 34.8 11 7 ceb-mn 65.9 59.7 16.5 55.0 37.6 0.0 79.3 37.7 11 5 ceb-uz 56.2 52.6 26.2 61.3 17.9 0.0 60.6 33.2 11 6
ceb-my 64.8 56.3 3.0 43.5 57.7 0.0 88.1 38.4 11 5
ceb-jv 57.1 51.2 60.2 17.1 70.0 54.8 97.6 59.9 11 5 ceb-tl 53.0 56.2 0.0 82.7 50.0 0.0 76.2 41.8 11 6
Amazon Review
en af ur sw te ta mn uz my jv tl Avg
UN
mBERT+pooling 57.8 54.4 54.9 52.4 53.5 54.8 51.1 49.3 52.4 56.1 52.1 53.1
mBERT+distiluse 63.1 60.1 61.0 46.1 50.1 50.0 59.9 55.2 56.7 57.2 50.1 54.7
mBERT+paraphrase **69.3** 63.8 67.1 51.4 62.2 61.4 61.1 56.6 62.9 55.6 54.0 59.6
XLM-R+paraphrase 69.2 75.4 80.8 64.1 71.0 70.4 69.7 68.2 70.4 63.8 66.6 **70.1**
LB
mBERT+pooling 65.6 56.8 57.0 51.8 53.8 53.1 52.7 51.2 52.5 53.5 53.2 53.6
mBERT+distiluse 80.4 76.0 80.0 51.2 48.9 50.0 77.9 57.7 70.7 60.5 55.4 62.8
mBERT+paraphrase 87.2 82.9 **85.0** 54.4 79.0 80.5 **80.6** 66.0 **78.9** 61.5 60.2 72.9
XLM-R+paraphrase 77.6 81.7 82.2 **64.0** 74.2 73.9 75.1 **70.6** 76.4 66.3 66.1 **73.0**
AG News
en af ur sw te ta mn uz my jv tl Avg
UN
mBERT+pooling 37.9 37.3 34.8 37.7 32.9 38.0 36.0 33.7 37.4 42.0 38.8 36.9
mBERT+distiluse 43.3 43.5 38.8 40.6 25.4 29.1 39.7 39.6 42.7 42.0 42.9 38.4
mBERT+paraphrase 53.7 52.8 46.2 46.5 46.1 42.8 43.3 44.3 45.0 51.0 49.7 46.7
XLM-R+paraphrase 62.7 **61.9** 58.9 **52.2** 58.1 **55.8** 55.6 56.0 **58.6** 59.2 58.4 **57.4**
LB
mBERT+pooling 77.4 68.2 55.4 58.5 54.7 52.1 50.7 54.6 49.0 66.7 70.2 58.0
mBERT+distiluse 85.1 82.0 76.0 65.5 25.3 28.7 70.8 64.4 71.3 77.8 76.5 63.8
mBERT+paraphrase 74.9 75.4 68.1 63.5 68.2 64.0 62.8 65.6 64.8 72.5 71.4 67.6
XLM-R+paraphrase 83.8 82.9 78.8 70.4 75.1 71.7 72.7 73.2 77.4 79.2 79.0 **76.0**
XNLI
en af ur sw te ta mn uz my jv tl Avg
UN
mBERT+pooling 34.7 34.3 34.4 33.2 33.9 33.5 34.3 33.3 33.3 32.9 32.7 33.6
mBERT+distiluse 32.9 32.6 33.4 33.2 **36.1** 36.1 33.8 34.6 31.9 **34.0** 34.1 **34.0**
mBERT+paraphrase 34.1 32.9 **34.0** 33.0 34.9 34.7 34.5 34.5 33.9 31.4 33.9 33.7
XLM-R+paraphrase 35.5 **33.7** 34.0 32.3 35.0 36.5 38.1 34.7 **35.1** 33.5 34.1 **34.7**
LB
mBERT+pooling 35.5 34.1 34.0 35.3 33.3 34.1 35.7 32.8 33.1 33.5 32.3 33.8
mBERT+distiluse 34.5 35.6 33.6 **35.1** 31.3 31.4 38.5 35.6 34.8 35.7 34.3 34.6
mBERT+paraphrase 39.1 37.9 **37.8** 33.5 39.5 39.4 **39.1** 34.7 36.9 33.9 35.7 **36.8**
XLM-R+paraphrase 36.8 35.7 35.0 32.8 37.5 37.5 37.3 36.7 **37.5** 32.8 33.9 35.7
| Task | Dataset | Size | #Label | Languages |
|------------------------------|----------------|--------|----------|----------------|
| Sentiment Analysis | Amazon Reviews | 1000 | 2 | af, ur, jv, |
| Topic Categorization | AG News | 2000 | 4 | ta, mn, uz, |
| Sentence Pair Classification | XNLI | 1500 | 3 | tl, te, mn, sw |
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
✓ A1. Did you describe the limitations of your work?
The section after the conclustion and before the references.
✓ A2. Did you discuss any potential risks of your work?
In the limitation section.
✓ A3. Do the abstract and introduction summarize the paper's main claims?
Abstract and section 1 Introduction
✗ A4. Have you used AI writing assistants when working on this paper?
No AI writing assistants were used.
## B ✓ **Did You Use Or Create Scientific Artifacts?** Section 4
✓ B1. Did you cite the creators of artifacts you used?
Section 4
B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
Not applicable. Left blank.
B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
Not applicable. Left blank.
B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
Not applicable. Left blank.
✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
Appendix
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
Appendix
## C ✓ **Did You Run Computational Experiments?** Section 4
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
Models used are introduced in Section 4.
The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance.
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Experimental setup is discussed in Section 4. Hyperparameter search is not applicable.
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
Section 5 Results C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
Not applicable. Left blank.
D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
No response.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? No response.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
No response.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
No response.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
No response. |
ravaut-etal-2023-unsupervised | Unsupervised Summarization Re-ranking | https://aclanthology.org/2023.findings-acl.529 | With the rise of task-specific pre-training objectives, abstractive summarization models like PEGASUS offer appealing zero-shot performance on downstream summarization tasks. However, the performance of such unsupervised models still lags significantly behind their supervised counterparts. Similarly to the supervised setup, we notice a very high variance in quality among summary candidates from these models while only one candidate is kept as the summary output. In this paper, we propose to re-rank summary candidates in an unsupervised manner, aiming to close the performance gap between unsupervised and supervised models. Our approach improves the unsupervised PEGASUS by up to 7.27{\%} and ChatGPT by up to 6.86{\%} relative mean ROUGE across four widely-adopted summarization benchmarks ; and achieves relative gains of 7.51{\%} (up to 23.73{\%} from XSum to WikiHow) averaged over 30 zero-shot transfer setups (finetuning on a dataset, evaluating on another). |
## Unsupervised Summarization Re-Ranking
Mathieu Ravaut1,2, Shafiq Joty∗1,3 **Nancy F. Chen**2 1 Nanyang Technological University, Singapore 2Institute of Infocomm Research (I2R), A∗STAR, Singapore 3 Salesforce AI
{[email protected], srjoty@ntu}.edu.sg [email protected]
## Abstract
With the rise of task-specific pre-training objectives, abstractive summarization models like PEGASUS offer appealing zero-shot performance on downstream summarization tasks.
However, the performance of such unsupervised models still lags significantly behind their supervised counterparts. Similarly to the supervised setup, we notice a very high variance in quality among summary candidates from these models while only one candidate is kept as the summary output. In this paper, we propose to re-rank summary candidates in an *unsupervised* manner, aiming to close the performance gap between unsupervised and supervised models. Our approach improves the unsupervised PEGASUS by up to 7.27% and ChatGPT by up to 6.86% relative mean ROUGE across four widely-adopted summarization benchmarks ;
and achieves relative gains of 7.51% (up to 23.73% from XSum to WikiHow) averaged over 30 zero-shot transfer setups (finetuning on a dataset, evaluating on another).1
## 1 Introduction
Transformer-based encoder-decoder language models have achieved great success in abstractive summarization in the last few years, and produce fluent summaries which can be quite abstractive (Raffel et al., 2019; Lewis et al., 2020; Zhang et al., 2020).
These models follow the *pre-train then fine-tune* paradigm: they are first pre-trained with a selfsupervised objective on a large text corpus; then they are fine-tuned on the downstream dataset of interest, using the available supervision, which may be very scarce. Finding a better pre-training objective remains an active research area. Some models like T5 (Raffel et al., 2019) and BART (Lewis et al., 2020) adopt a more general language modeling objective (e.g., masked span generation), while
| Generation method | Summary candidate | R-1 | R-2 | R-L |
|---------------------|---------------------|-------|-------|-------|
| First (top beam) | 35.47 | 13.89 | 31.61 | |
| Random | 34.89 | 13.46 | 31.22 | |
| Beam search | Minimum | 26.64 | 7.68 | 23.18 |
| Maximum (oracle) | 42.62 | 19.76 | 38.75 | |
| First | 34.35 | 13.02 | 30.65 | |
| Random | 31.73 | 11.22 | 28.4 | |
| Diverse beam search | Minimum | 21.25 | 4.45 | 18.61 |
| Maximum (oracle) | 41.87 | 19.29 | 38.22 | |
| First | 32.14 | 11.29 | 28.66 | |
| Random | 32.12 | 11.29 | 28.64 | |
| Nucleus sampling | Minimum | 24.09 | 6.49 | 21.19 |
| Maximum (oracle) | 40.19 | 17.47 | 36.43 | |
Table 1: ROUGE results with PEGASUS (unsupervised) on CNN/DM test set, for three generation methods to produce 20 summary candidates, and four candidate selection strategies.
R-1, R-2, R-L stands for ROUGE-1/2/L.
others like PEGASUS (Zhang et al., 2020) or TED
(Yang et al., 2020) are pre-trained specifically for the task of summarizing a document. PEGASUS
uses salient sentences of the document as a proxy summary label, while TED leverages the lead bias to get the pseudo-summary target.
Despite the impressive success on supervised abstractive summarization tasks, unsupervised summarization remains very challenging. The LEAD-3
(extractive) baseline which simply takes the first three sentences of a document as its summary, remains far ahead of unsupervised approaches on several news summarization datasets (See et al., 2017), especially the popular CNN/DM dataset
(Hermann et al., 2015). In fact, it was only improved on by *supervised* abstractive models not more than five years ago (Narayan et al., 2018). It is expected that a model which has never seen any summarization example would struggle, as summarization is a task that is subjective and complex even for humans (Kryscinski et al., 2019). Since summarization labels are expensive to collect, it is essential to develop models with good zero-shot performance. Starting from instruction-tuned GPT3, LLMs are offering promising performance in zero-shot summarization (Goyal et al., 2022), but remain an unscalable solution as these models are rarely open-source, and extremely computationally intensive.
Recently, in the supervised setup, second-stage approaches have gathered interest in abstractive summarization research. While the base encoderdecoder model is trained with maximum-likelihood estimation (MLE) to predict each token of the ground-truth summary in an autoregressive manner, second-stage methods work with a global view at the whole sequence level. SimCLS (Liu and Liu, 2021) and SummaReranker (Ravaut et al., 2022a)
propose to train another neural model to rank summary candidates generated by decoding methods like beam search (Reddy, 1977) or diverse beam search (Vijayakumar et al., 2016). BRIO (Liu et al.,
2022a) bypasses the need for another model, and re-uses the fine-tuned model for another fine-tuning stage in which the model also learns to rank candidates in the correct order. SummaFusion (Ravaut et al., 2022b) encodes each summary candidate separately and decodes into a new, abstractive secondstage summary. Such second-stage methods have improved ROUGE-1 state-of-the-art on CNN/DM
by more than 3 points (Liu et al., 2022a).
In this paper, we propose to re-rank summary candidates in the *unsupervised* setup. Following observations made by second-stage summarization studies in the supervised setup (Liu et al., 2021; Ravaut et al., 2022a), we also observe large variance in performance among summary candidates in the unsupervised setup. In Table 1, the *oracle* for PEGASUS, which is the summary candidate maximizing the ROUGE score with the reference, reaches 42.62 when using beam search with 20 beams on CNN/DM (Hermann et al., 2015). This is in the same range (42-45 ROUGE-1) as the top beam of *supervised* leading models on this dataset
(Lewis et al., 2020; Zhang et al., 2020). This observation implies strong potential motivating our work: **with a perfect unsupervised summarization**
re-ranker, one could potentially by-pass supervised fine-tuning and just re-rank instead.
The main challenge lies in the fact that the reranker must also not access any supervision. Our proposed model does not train any neural model, but simply computes features indicative of summary quality to score each summary candidate, some of them which also leverage the source document. A weighted average of these features is used for candidate re-ranking, and we explore several methods to estimate the feature weights. Our method, named SummScore, is lightweight, fast and easy to use as it does not rely on a neural network. Since it is purely unsupervised, the re-ranked results can provide more refined self-supervision to the pre-trained models, complementing the pretraining with rounds of self-training.
Our contributions in this paper are threefold:
- We propose SummScore, the first system to rerank summarization candidates in an unsupervised setup and in an unsupervised manner.
- We demonstrate the strength of SummScore by consistent performance improvement: up to
+7.27% with PEGASUS and +6.86% with ChatGPT2 mean ROUGE gains over four unsupervised summarization datasets, +7.51% mean ROUGE
gains averaged over 30 zero-shot transfer setups.
- Using the re-ranker, we derive an original and effective self-training method which continuously improves the base unsupervised summarization model, pushing PEGASUS from 35.47 to 39.76 ROUGE-1 (+12.09%).
## 2 Related Work
Unsupervised abstractive summarization In unsupervised abstractive summarization, SummAE
(Liu et al., 2019a) proposes to auto-encode paragraphs with a sequence-to-sequence model and decode single-sentence summaries from the latent embeddings. SEQ3 (Baziotis et al., 2019) also uses an auto-encoder to compress the input then reconstruct it into a differentiable manner, the encoder output serving as a summary. However, both methods stick to unsupervised *sentence* summarization.
More recent approaches typically rely on language models being pre-trained, then used in a zero-shot fashion. PEGASUS (Zhang et al., 2020) treats salient sentences as pseudo abstractive targets to build a pre-training objective. TED (Yang et al.,
2020) exploits the lead bias in news articles and takes out the first sentences of the document as pseudo summary targets for pre-training. Due to their pre-training objective built for summary generation, these pre-trained models can be directly used for unsupervised summarization. The Summary Loop (Laban et al., 2020) uses reinforcement learning to train a model to fill-in deleted important words from the source document using the summary generated so far, then refines this summary.
2https://chat.openai.com/
Re-ranking in abstractive summarization
![2_image_0.png](2_image_0.png)
Second-stage or sequence-level methods are gaining traction recently in *supervised* summarization.
Among such methods, re-ranking consists in selecting a better summary candidate out of several of them produced by a base model (which has already been fine-tuned). RefSum (Liu et al., 2021)
uses a meta-learning approach to learn how to rank summaries coming from multiple systems.
SimCLS (Liu and Liu, 2021) trains a RoBERTa
(Liu et al., 2019b) model with a ranking loss to learn how to rank summary candidates generated by a base BART or PEGASUS in their target metric order. SummaReranker (Ravaut et al., 2022a) also trains a RoBERTa re-ranker, but this time in a multi-label binary classification manner to predict whether each summary candidate maximizes each of the metrics of interest. To avoid using another neural network for re-ranking, BRIO (Liu et al.,
2022b) performs a second fine-tuning stage with the re-ranking loss built in the base summarization system. Each of the four models above improves the SOTA on the CNN/DM benchmark, reaching 47.78 ROUGE-1 for BRIO.
To the best of our knowledge, there is no work on sequence-level unsupervised abstractive summarization. Concurrently to our work, MBRD (Suzgun et al., 2022) proposes to rank generated candidates in several generation tasks using majority voting based on BERTScore (Zhang et al., 2019).
## 3 Method 3.1 Unsupervised Summary Re-Ranking
As an unsupervised summarization re-ranking approach, our method assumes access to a zero-shot self-supervised summarization model. We refer to it as the base model Mbase. Given a source document D, Mbase will generate k *summary candidates* using a *generation method* to transform model predictions into a natural language summary.
A widely used such generation approach is beam search, which maintains k top summary candidates throughout decoding, ranking them with decreasing mean log-probability of the sequence. In the end, practitioners keep the candidate maximizing the log-probability and discard the remaining, whereas we propose to keep all k candidates and re-rank them, following (Ravaut et al., 2022a).
Let C = {C1*, . . . , C*k} be the pool of candidates.
Our goal in (re-)ranking the candidates is to assign to each of them a score S, such that S(Ci) > S(Cj )
if Ciis a better candidate than Cj (for 1 ≤ *i, j* ≤ k)
according to some summary quality measures. We can then select the candidate maximizing the score as the best output:
$$C_{S}^{*}=\operatorname*{arg\,max}_{C_{i}\in\mathbb{C}}\ \{S(C_{1}),\ldots,S(C_{k})\}\quad(1)$$
Unlike re-ranking in a supervised setup, where one can compute such scores by comparing with the ground truth summary or build models to optimize them (Liu and Liu, 2021; Ravaut et al., 2022a; Liu et al., 2022a), in our unspervised setup, we cannot assume access to the ground truth, which thus excludes scoring the candidate with regards to it (e.g., using ROUGE). In the following, we describe how we build our unsupervised scoring method (named *SummScore*) following principles assessing the quality of a summary.
## 3.2 Multi-Objective Re-Ranking Score
We design our candidate-level SummScore as an aggregation of features, each representing desired properties for a summary. Features either come from the comparison between the summary candidate and the source, or from the candidate itself. Fig. 1 synthesizes the overall SummScore re-ranking process.
Comparison with the source One evident property of a summary is that it should stick to the source content, and contain as much of the important content as possible. The most straightforward way to measure this consists in using n-gram overlap metrics between the source document and each candidate. We use ROUGE-1 (noted R-1) (Lin, 2004), ROUGE-2 (R-2), and BLEU (Papineni et al.,
2002), which form our first set of features:
Soverlap = {R-1, R-2, BLEU} (2)
The above metrics only evaluate n-gram overlap, which can be helpful penalizing summary candidates departing too much from the source, potentially hallucinating. However, they have been shown to not be well suited at evaluating semantic similarity, and might encourage too much copying.
Thus, our next batch of SummScore features consists in model-based metrics designed to capture semantic similarity between two text items. We explore three such metrics: BERTScore (Zhang et al., 2019), BARTScore (Yuan et al., 2021) and BLEURT (Sellam et al., 2020). BERTScore (noted BS) computes token-level cosine similarity between the contextual embeddings of the pre-trained BERT
(Devlin et al., 2019) of each text item to compare.
BARTScore (noted BaS) uses BART (Lewis et al.,
2020) token-level log-probabilities from the pretrained BART to score the generated text. BLEURT
(noted BRT) also leverages BERT but extends its pre-training with an additional multi-task pretraining on synthetic data. Our next features are:
Ssemantic = {BS, BaS, BRT} (3)
When each of these metrics is referred to, it is implicit that they are used to compare a summary candidate with the source document (in contrast to the supervised case, comparing with the target).
Summary quality A good summary should be diverse, meaning it should avoid repeated n-grams.
We build a summary-level diversity score which measures the proportion of unique n-grams.
$$F_{\mathrm{div}}={\frac{1}{N}}\Sigma_{n=1}^{N}{\frac{\mathrm{unique~}n\mathrm{-grams}}{\mathrm{total~}n\mathrm{-grams}}}\qquad(4)$$
We take N = 3 in practice. The summary should not be too short, nor too long. We penalize summaries which deviate a lot from the average summary length on the given dataset. To build a score with increasing values being desirable, we use a smooth inverse of the absolute length difference between the summary candidate and the mean length of summaries µlen.
$$F_{\mathrm{len}}={\frac{1}{\operatorname*{max}(1,|\mathrm{length}-\mu_{\mathrm{len}}|)}}\qquad\qquad(5)$$
Final Score Our final set of summary features is:
$S=S_{\rm overlap}\cup S_{\rm semantic}\cup S_{\rm quality}$ (6) $=\{F_{1},\ldots,F_{|S|}\}$
where Squality = {Fdiv, Flen}. For data point xi, SummScore simply outputs the summary candidate among the set Ci maximizing a weighted combination of all features above:
$\text{SummScore}_{\theta}(\mathbb{C}_{i})=\underset{C_{i}\in\mathbb{C}_{i}}{\arg\max}\sum_{j=1}^{|S|}\theta_{j}.F_{j}(C_{i})$ (7) where $\theta_{j}$ is a constant, $\sum_{i=1}^{|S|}\theta_{i}=1.0$.
$${\mathrm{where~we~enforce~coefficients~to~be}}\sum_{j=1}^{|S|}\theta_{j}=1.0$$
## 3.3 Coefficients Estimation
SummScore is simply a linear combination of eight features in total. Yet a last crucial question remains: how to estimate the coefficients to assign to each feature? We propose to bootstrap a pseudosummary using sentences from the source document. Coefficients are then tuned to maximize the mean of ROUGE-1/2/L between the summary candidate with the highest SummScore (e.g., SummScore output candidate), and the pseudo-target.
We compare three approaches to extract pseudotargets:
- **Random-3**: As a baseline, we randomly select three sentences from the source document to form a pseudo-target.
- **LEAD-3**: This consists in the first three sentences of the document. LEAD-3 is a strong baseline for lead-biased news summarization datasets
(Hermann et al., 2015; See et al., 2017), and it has even been used as a pseudo-target for summarization pre-training in TED (Yang et al., 2020).
- **Salient Sentences**: We follow the gap-sentences generation idea introduced by PEGASUS pretraining objective (Zhang et al., 2020), and also used by SUPERT (Gao et al., 2020) for unsupervised summarization evaluation. A pseudo-target is constructed with salient sentences, which are defined as the source sentences maximizing the ROUGE with the rest of the document. The
| Dataset | Domain | # Data points | # Words | # Tokens (PEGASUS) | New summary n-grams | | | | | |
|-----------------------------------------------------------------------------------------------------------|-----------|-----------------|-----------|----------------------|-----------------------|-------|-------------|-------------|-------|-------|
| Train | Val | Test | Doc. | Summ. | Doc. | Summ. | 1-grams (%) | 2-grams (%) | | |
| CNN/DM (Hermann et al., 2015) | News | 287113 | 13334 | 11490 | 786.68 | 55.06 | 851.53 | 64.57 | 12.07 | 51.05 |
| XSum (Narayan et al., 2018) | News | 204045 | 11332 | 11334 | 430.18 | 23.19 | 456.96 | 26.01 | 33.98 | 83.33 |
| WikiHow (Koupaee and Wang, 2018) | Wikipedia | 157304 | 5600 | 5580 | 588.06 | 62.10 | 620.52 | 71.82 | 29.79 | 77.45 |
| SAMSum (Gliwa et al., 2019) | Dialogue | 14732 | 818 | 819 | 124.07 | 23.42 | 133.07 | 25.66 | 33.88 | 79.02 |
| Table 2: Statistics on the datasets used for experiments. Doc. is the source document, Summ. the summary. | | | | | | | | | | |
top 30% such sentences are extracted to form a pseudo-summary. We experiment with all three standard versions ROUGE-1, ROUGE-2 and ROUGE-L for salient sentences definition, referred to as Salient-R1, **Salient-R2** and **SalientRL**, respectively.
We emphasize that none of these pseudo-targets definition makes any access to human supervision.
Training SummScore amounts to estimating the coefficients θ in Eq. (7) using the pseudo-targets:
## ˆΘ = Arg Max Θ X I R( ˜Yi, Summscoreθ(Ci)) (8)
where R is the mean of ROUGE-1, ROUGE-2 and ROUGE-L, Ciis the set of candidates predicted by the base model Mbase for data point xi, and y˜iis the pseudo-target. To optimize coefficients, we hill climb with randomness to maximize R between the SummScore selected summary candidate, and the pseudo-target. Specifically, we estimate coefficients with stochastic local search on the validation set in a hierarchical manner: we first tune coefficients for S*overlap* and S*semantic* separately, then estimate coefficients for Squality ∪ {Foverlap, F*semantic*}, where F*overlap* (resp. F*semantic*) is the set S*overlap*
(resp. S*semantic*) after reduction to a single feature. Such hierarchical estimation is natural given that S*overlap* (resp. S*semantic*) is made of features capturing similar properties, and dramatically reduces the search space.
## 4 Experiments 4.1 Setup
We experiment on four popular abstractive summarization datasets, from three different domains (see Table 2 for basic statistics on each dataset):
- **CNN-DailyMail** (Hermann et al., 2015; See et al., 2017) is made of 93k and 220k articles from the CNN and DailyMail newspapers, respectively. CNN/DM is the most extractive dataset among all the ones we consider and has the longest source documents.
- **XSum** (Narayan et al., 2018) has 227k articles from the BBC from 2010 to 2017. This is an extreme summarization task, compressing each article into a single, very abstractive sentence.
- **WikiHow** (Koupaee and Wang, 2018) contains 168k lists of short instructions from Wikipedia.
- **SAMSum** (Gliwa et al., 2019) is a dialogue summarization dataset containing 17k conversations. In this dataset, source length is significantly shorter than in the other datasets.
To estimate coefficients, we subsample randomly
(on datasets other than SAMSum) 1,000 data points from the validation set. To avoid coefficients optimization to overfit, we cap each random search at 1,000 trials. Evaluation of summaries selected by SummScore is done with the standard ROUGE1/2/L (Lin, 2004) (using summary-level ROUGELSUM variant for ROUGE-L) and BERTScore
(Zhang et al., 2019). We use *transformers* (Wolf et al., 2020) and *datasets* (Lhoest et al., 2021) for pre-trained checkpoints and datasets, respectively.
## 4.2 **Unsupervised Abstractive Summarization**
We first apply SummScore to unsupervised abstractive summarization, using as base model
(Mbase) two models of different capacity: the pretraind PEGASUS (Zhang et al., 2020) (loading the *google/pegasus-large* checkpoint from *transformers*), and the recently introduced, highlyperforming ChatGPT3, accessed through OpenAI
API (calling the *gpt-3.5-turbo* checkpoint). Due to its pre-training objective of generating gapsentences, PEGASUS can directly be applied to the summarization task after pre-training. This is not the case of comparable sequence-to-sequence Transformer-based models T5 (Raffel et al., 2019)
and BART (Lewis et al., 2020), which are pretrained with token spans generation and sequence de-noising, respectively. For ChatGPT, to lower costs, we subsample randomly 1,000 data points from the test set on datasets other than SAMSum.
3https://chat.openai.com/. There is a chance that this checkpoint has been trained on the dataset above.
| Backbone | Model | CNN/DM | XSum | WikiHow | SAMSum | | | | | | | | |
|-------------------------------|-------------------------------|---------------------|------------------------------|-------------------------|-----------------------------|------------------------|----------------------------|-------------------------------|-------------------|---------------------|-------------|-------|----------|
| Mbase | Candidate Selection | R-1/R-2/R-L | BS | Gain (%) | R-1/R-2/R-L | BS | Gain (%) | R-1/R-2/R-L | BS | Gain (%) | R-1/R-2/R-L | BS | Gain (%) |
| Top beam (Zhang et al., 2020) | 32.90/13.28/29.38 | _ | _ | 19.27/3.00/12.72 | _ | _ | 22.59/6.10/14.44 | _ | _ | _ | _ | _ | |
| Top beam | 35.47/13.89/31.61 | 86.29 | _ | 18.77/2.86/13.85 | 85.66 | _ | 25.49/5.91/17.99 84.98 | _ | 26.64/6.32/22.75 | 86.12 | _ | | |
| Random beam | 34.89/13.46/31.22 | 86.11 | -1.67 | 18.58/2.81/13.90 | 85.29 | -1.31 | 25.39/6.00/18.09 | 84.82 | -0.38 | 25.27/5.80/21.78 | 85.31 | -5.26 | |
| SummScore - Random-3 | 35.92† | /32.34† 86.28 | 1.96 | 19.37† | /14.52† 85.78† | 3.89 | 26.29† | /18.78† 84.98 | 3.89 | 28.09† | | | |
| /14.26† | /2.99† | /6.28† | /7.26† /24.42† 86.39† | 7.27 | | | | | | | | | |
| SummScore - LEAD-3 | 36.92† /15.03† /33.19† 86.54† | 5.19 | 19.62† /3.02† /14.71† 85.92† | 5.24 | 26.17† /6.19† /18.69† 84.96 | 3.16 | 28.22† /7.16/24.39† 86.41† | 7.27 | | | | | |
| SummScore - Salient-R1 | 35.54/14.05/32.04† | 86.22 | 0.85 | 18.96/2.88/14.19† | 85.65 | 1.52 | 26.37† | /18.81† 84.92 | 4.25 | 27.89† /7.08/24.08† | 86.25 | 5.98 | |
| SummScore - Salient-R2 | 35.65/14.12/32.14† | 86.24 | 1.19 | 19.13† /2.96/14.34† | 85.67 | 2.62 | 26.40† /6.32† | /7.04/24.14† | 86.24 | 6.09 | | | |
| SummScore - Salient-RL | 35.54/14.05/32.04† | 86.22 | 0.85 | 19.29† | /6.30† /18.83† 84.92 | 4.37 | 27.93† | | | | | | |
| /18.81† 84.92 | 4.31 | 28.01† /7.08/24.21† | 86.21 | 6.46 | | | | | | | | | |
| /2.99† /14.48† 85.79† | 3.63 | 26.37† /6.32† | | | | | | | | | | | |
| First | 40.79/16.61/36.92 | 87.93 | _ | 30.48/10.00/22.16 88.78 | _ | 29.61/7.28/22.14 86.28 | _ | 40.82/15.57/35.15 | 90.67 | _ | | | |
| Random | 40.79/16.61/36.92 | 87.93 | 0.00 | 30.53/10.20/22.20 88.77 | 0.48 | 29.99/7.57/22.32 | 86.32 | _ | 40.60/15.28/34.78 | 90.63 | -0.95 | | |
| SummScore - Random-3 | 41.82† | /37.88† 87.91 | 3.69 | 27.98/8.45/19.64 | 87.94 | -10.49 | 30.09/7.85/22.16 | 86.15 | 1.78 | 42.73† | | | |
| SummScore - LEAD-3 | 42.05† /18.11† /38.06† 87.97 | 4.23 | 27.97/8.42/19.76 | 88.05 | -10.34 | 30.14/7.78/22.22 86.21 | 1.88 | 42.57† /17.45† /37.63† 90.93† | 6.86 | | | | |
| /18.20† | /17.29† /37.54† 90.88† | 6.41 | | | | | | | | | | | |
| SummScore - Salient-R1 | 40.30/17.10/36.37 | 87.67 | -0.57 | 27.84/8.46/19.55 | 87.91 | -10.87 | 30.29/7.97† /22.20 | 86.12 | 2.41 | 42.59† | | | |
| SummScore - Salient-R2 | 40.20/17.06/36.23 | 87.65 | -0.88 | 27.79/8.47/19.57 | 87.90 | -10.87 | 30.38/8.00† | /17.26† /37.50† 90.86† | 6.36 | | | | |
| SummScore - Salient-RL | 40.24/17.06/36.29 | 87.66 | -0.76 | 27.82/8.51/19.58 | 87.90 | -10.73 | 30.29/7.97† /22.27 86.13 | 2.74 | 42.43† | | | | |
| /22.20 | 86.12 | 2.39 | 42.59† /17.00† /37.30† 90.84 | 5.67 | | | | | | | | | |
| /17.26† /37.50† 90.86† | 6.36 | | | | | | | | | | | | |
ChatGPT
![5_image_0.png](5_image_0.png)
First 40.79/16.61/36.92 87.93 _ **30.48**/10.00/**22.16** 88.78 _ 29.61/7.28/22.14 **86.28** _ 40.82/15.57/35.15 90.67 _
Random 40.79/16.61/36.92 **87.93** 0.00 30.53/10.20/**22.20** 88.77 0.48 29.99/7.57/**22.32 86.32** _ 40.60/15.28/34.78 90.63 -0.95
SummScore - Random-3 41.82†
/**18.11**†
/37.88† **87.91** 3.69 27.98/8.45/19.64 87.94 -10.49 30.09/7.85/22.16 86.15 1.78 **42.73**†
/**17.45**†
/37.63† 90.93† **6.86**
SummScore - LEAD-3 **42.05**†
/**18.20**†
/38.06† 87.97 **4.23** 27.97/8.42/19.76 88.05 -10.34 30.14/7.78/**22.22** 86.21 1.88 42.57†
/17.29†
/37.54† **90.88**† 6.41
SummScore - Salient-R1 40.30/17.10/36.37 87.67 -0.57 27.84/8.46/19.55 87.91 -10.87 30.29/**7.97**†
/22.20 86.12 2.41 42.59†
/17.26†
/37.50† **90.86**† 6.36
SummScore - Salient-R2 40.20/17.06/36.23 87.65 -0.88 27.79/8.47/19.57 87.90 -10.87 30.38/**8.00**†
/**22.27** 86.13 **2.74** 42.43†
/17.00†
/37.30† **90.84** 5.67
SummScore - Salient-RL 40.24/17.06/36.29 87.66 -0.76 27.82/8.51/19.58 87.90 -10.73 30.29/**7.97**†
/22.20 86.12 2.39 42.59†
/17.26†
/37.50† **90.86**† 6.36
Table 3: Unsupervised abstractive summarization results with SummScore re-ranking on the four datasets. Models are decoded
to produce 20 summary candidates. **R-1/2/L** denotes ROUGE-1/2/L and BS denotes BERTScore. **Gain** represents the mean
ROUGE relative gain compared to *our top beam or first candidate baseline*.
† marks indicate significantly better results (p-value
of paired t-test smaller than 0.05). Best results for each (backbone, dataset) pair within 0.1 are in bold.
We decode PEGASUS with beam search, and ChatGPT with top-p sampling with p = 0.9 and temperature 0.8 to enhance diversity, both models with 20 candidates. We report candidate selection baselines from Table 1: top beam or *first*, and *random* (a randomly sampled candidate).
We show unsupervised summarization results with PEGASUS and ChatGPT with 20 summary candidates in Table 3. SummScore improves the base PEGASUS by 4.37% to 7.27% across the four datasets. Notably, SummScore fails with ChatGPT
on XSum, which we hypothesize is due to the nature of XSum and the fact that pseudo-labels from XSum source documents are too different from the ground truth labels, an issue not affecting PEGASUS because its performance range is far lower than ChatGPT. However, SummScore improves ChatGPT by 2.74% to 6.86% on the other datasets.
We point out that SummScore gains are achieved without using any human supervision.
SummScore - LEAD-3 performs best for the news domain, which intuitively makes sense due to the lead bias and first sentences containing an overview of the article. On WikiHow, SummScore
- Salient-R2 works the best, yet gains are more moderate and SummScore fails to improve the BERTScore on this dataset. SummScore - Random3 is tied with SummScore - LEAD-3 on SAMSum:
we attribute it to the fact that SAMSum source documents are very short (Table 2), and the LEAD-3, Random-3, and entire source document all overlap a lot. Appendix A confirms that SummScore re-ranking always finds a non-trivial (e.g., longest)
candidate selection.
## 4.3 Zero-Shot Transfer
Next, we investigate SummScore performance in the transfer setup, with standard-size models (discarding ChatGPT or similar models). We perform zero-shot summarization inference followed by SummScore on a target dataset where the base model Mbase was fine-tuned on *another* source dataset. As Mbase, we use three high-performing summarization models: PEGASUS (Zhang et al.,
2020), BART (Lewis et al., 2020), and the recently introduced BRIO (Liu et al., 2022a), which achieves SOTA results on news summarization
(CNN/DM & XSum). We use publicly available fine-tuned checkpoints on CNN/DM and XSum, and PEGASUS on WikiHow. We fine-tune ourselves PEGASUS on SAMSum, and BART on WikiHow and SAMSum. Generation and fine-tuning hyper-parameters and results are in Appendix B.
Given the findings from §4.2, we use SummScore - LEAD-3 on CNN/DM, XSum, and SAMSum, and SummScore - Salient-R2 on WikiHow.
We tune coefficients in the same process described in §4.1. To stick to a **no supervision** scenario, we do not apply SummScore on a dataset on the which the base model was fine-tuned, which would fall into the supervised learning use case. We compare SummScore zero-shot transfer performance on CNN/DM with that of SOTA WikiTransfer (Fabbri et al., 2021), which fine-tunes BART on external data retrieved from Wikipedia before applying the model in zero-shot summarization.
Zero-shot transfer results are displayed in Table 4. SummScore consistently improves transfer performance, with ROUGE gains of 7.51% averaged over 30 setups: +9.43% on CNN/DM, +1.27%
on XSum, +9.20% on WikiHow (up to +17.64% average when transferring from XSum) and +9.61% on SAMSum. Notably, on CNN/DM, BART transferred from SAMSum with SummScore improves on the ROUGE-1 and ROUGE-L of SOTA transfer model WikiTransfer (also using a BART backbone), despite WikiTransfer being fine-tuned on data specifically crafted to transfer better to the downstream task. We notice that SummScore helps
![6_image_0.png](6_image_0.png)
more when the base model transfers less well, such as from single-sentence summaries XSum.
Appendix C evalutes re-ranking itself and shows that SummScore can also reach strong recall.
## 4.4 Self-Training With Unsupervised Paraphrasing
Using the selected summary candidate as a pseudotarget, one can naturally extend SummScore into a self-training summarization objective. Indeed, if γ parametrizes Mbase, we can further train Mbase through the objective:
γ˜ = arg max γ X
i log p(SummScore(Ci)|xi; γ)
(9)
This process can be repeated: if we denote new model weights by γ k, we can re-apply SummScore and perform another round of self-training, yielding new model weights γ k+1.
We notice that the unsupervised PEGASUS
beam search summary candidates, including the one selected by SummScore, are quite extractive
(see Appendix D). This could be because the selfsupervised gap-sentences are extracts from the source document. To make the pseudo-summaries more abstractive and diverse enough to mitigate the confirmation bias in self-training (Tarvainen and Valpola, 2017), we use the paraphrasing approach proposed in FAR-RW (Zhang et al., 2022).
On each dataset, we train a paraphrase model to generate the top n sentences maximizing the mean ROUGE with the top n most salient sentences, conditioning on these salient sentences. This yields an unsupervised, in-domain paraphrase model which we apply to the SummScore pseudo-labels on the training set to make them more abstractive and diverse. We refer to Appendix E for details on the paraphrasing model training, its performance and resulting abstractiveness and diversity levels on pseudo-labels. As the unsupervised process of paraphrasing may harm the pseudo-summary quality, in practice, we apply it to the x% most extractive training data points, where x is among {12.5%, 25%,
50%, 100%}. We use 25% for CNN/DM, 100% for XSum, 50% for WikiHow, and 12.5% on SAMSum, as these provide an ideal ROUGE/abstractiveness trade-off (see Appendix D).
For each dataset except SAMSum, we randomly subsample 50k data points from the training set and 1k from the validation set to self-train and validate the model, resulting in a self-training process much less computationally expensive than finetuning. We show self-training results on the test sets using PEGASUS as base model in Table 5.
Self-training improves unsupervised summarization performance on all datasets, resulting in a selftrained model better than the base model although not as performing as SummScore. Notably, reapplying SummScore on the new model after selftraining further improves performance drastically.
Besides, paraphrasing self-training pseudo-labels helps maintain some degree of abstractiveness, as seen in Appendix D. On CNN/DM, one round of self-training followed by SummScore brings PE-
| Dataset | Model | R-1 | R-2 | R-L | BS |
|-------------------------------------------|------------------|------------------|-------------|-------|-------|
| PEGASUS (Zhang et al., 2020) | 32.90 | 13.28 | 29.38 | _ | |
| Summary Loop 45 (Laban et al., 2020) | 37.70 | 14.80 | 34.70 | _ | |
| TED (Yang et al., 2020) | 38.73 | 16.84 | 35.40 | _ | |
| FAR-RW* (Zhang et al., 2022) (SOTA) 40.13 | 17.00 | 36.34 | _ | | |
| PEGASUS (ours) | 35.47 | 13.89 | 31.61 | 86.29 | |
| PEGASUS (ours) + SummScore | 36.92 | 15.03 | 33.19 | 86.54 | |
| CNN/DM Self-training (1 st round) | 36.68 | 14.52 | 32.72 | 86.49 | |
| Self-training (1 st round) + SummScore | 38.75 | 16.11 | 34.78 | 86.88 | |
| Self-training (2 nd round) | 38.17 | 15.77 | 34.25 | 86.87 | |
| Self-training (2 nd round) + SummScore | 39.49 | 16.69 | 35.61 | 87.07 | |
| Self-training (3 rd round) | 38.47 | 15.95 | 34.48 | 87.00 | |
| Self-training (3 rd round) + SummScore | 39.76 | 16.79 | 35.85 87.18 | | |
| PEGASUS (ours) | 18.77 | 2.86 | 13.85 | 85.66 | |
| PEGASUS (ours) + SummScore | 19.62 3.02 14.71 | 85.92 | | | |
| XSum | Self-training | 19.33 | 2.76 | 14.18 | 86.03 |
| Self-training + SummScore | 20.02 2.84 14.93 | 86.23 | | | |
| PEGASUS (ours) | 25.49 | 5.91 | 17.99 84.98 | | |
| PEGASUS (ours) + SummScore | 26.40 | 6.30 18.83 84.92 | | | |
| WikiHow Self-training | 26.08 | 6.08 | 18.59 | 84.89 | |
| Self-training + SummScore | 26.50 | 6.28 | 19.03 | 84.93 | |
| PEGASUS (ours) | 26.64 | 6.32 | 22.75 | 86.12 | |
| PEGASUS (ours) + SummScore | 28.22 | 7.16 | 24.39 | 86.41 | |
| SAMSum Self-training | 26.96 | 6.41 | 23.40 | 86.25 | |
| Self-training + SummScore | 28.91 | 7.55 | 25.54 | 86.58 | |
| Use case | Attribute | PEGASUS | SummScore | Tie |
|-----------------------------------------------|--------------------------|---------------------------|--------------|-------|
| Unsupervised abs. summ. | Informativeness | 11.33 (1.15) 20.67 (6.43) | 18.00 (6.93) | |
| Factual consistency 14.67 (4.04) 19.33 (5.03) | 16.00 (9.00) | | | |
| 0-shot transfer from XSum Informativeness | 5.67 (2.89) 24.00 (2.00) | 20.33 (1.53) | | |
| Factual consistency | 4.67 (4.51) | 18.67 (4.04) 26.67 (3.51) | | |
GASUS performance above the Summary Loop, two rounds above TED, and three rounds to 39.76 ROUGE-1, within 1% of SOTA model FAR-RW.
## 4.5 Human Evaluation
We conduct a human evaluation on 50 data points randomly sampled from CNN/DM test set. We show human participants the source news article, alongside the summary candidate from the base PEGASUS model, and the one re-ranked by SummScore. Participants are asked to pick which summary is more informative, and which is more factually consisteny, with the option of choosing a tie. We cover two use cases: unsupervised abstractive summarization, and zero-shot transfer from a model fine-tuned on XSum. In the former use case, both summaries are identical in 7/50 data points, and 4/50 data points in the latter. Human raters are three volunteer graduate students, with full professional command of English. Results are displayed in Table 6. Although both summaries often overlap significantly (rightmost column), resulting in a high Tie, SummScore is strongly preferred over PEGASUS across both use cases and attributes.
## 5 Analysis 5.1 Ablation
To better understand SummScore performance gains, we perform an ablation study where reranking is done with each feature taken individually. Results for PEGASUS in unsupervised summarization are shown in Table 7. N-gram overlap features are very strong re-ranking baselines on WikiHow and SAMSum. In fact, ROUGE-1 with the source is even slightly better than SummScore on WikiHow. On news datasets, semantic similarity features such as BERTScore are strong baselines. Interestingly, our hand-crafted feature diversity has a *negative* contribution when used as standalone re-ranker ; however it can help a lot when combined with the other features, acting as a regularizer by encouraging some diversity. On average, SummScore performs the best. We also report trivial feature aggregation baselines *Plain average* and *Random coefficients*, which SummScore outpeforms, confirming the efficiency of estimating coefficients through pseudo-labels.
In Appendix F, we show that SummScore unsupervised re-ranking is also robust to other decoding methods diverse beam search (Vijayakumar et al., 2016) and nucleus sampling (Holtzman et al., 2019), and a different number of beams (5 to 20). We confirm that our default setup of beam search with 20 beams yields optimal ROUGE results. Echoing SummaReranker findings (Ravaut et al., 2022a), gains further increase when mixing in several decoding methods.
| Candidate selection | Dataset | Average | | | |
|-----------------------|-----------|-----------|--------|-------|-------|
| CNN/DM | XSum | WikiHow | SAMSum | | |
| PEGASUS | 26.99 | 11.83 | 16.46 | 18.57 | 18.46 |
| ROUGE-1 with source | 26.90 | 12.03 | 17.21 | 19.89 | 19.01 |
| ROUGE-2 with source | 26.98 | 11.93 | 17.16 | 19.62 | 18.92 |
| BLEU with source | 26.90 | 11.99 | 17.19 | 19.94 | 19.01 |
| BERTScore with source | 28.19 | 12.42 | 17.11 | 19.43 | 19.29 |
| BARTScore with source | 28.11 | 12.23 | 16.60 | 19.70 | 19.16 |
| BLEURT with source | 27.45 | 12.12 | 16.79 | 19.69 | 19.01 |
| Diversity score | 25.33 | 11.36 | 14.52 | 15.67 | 16.72 |
| Length score | 27.07 | 11.67 | 16.66 | 18.60 | 18.50 |
| Plain average | 27.75 | 12.28 | 16.96 | 19.73 | 19.18 |
| Random coefficients | 27.75 | 12.25 | 16.84 | 19.72 | 19.14 |
| SummScore | 28.38 | 12.45 | 17.18 | 19.92 | 19.48 |
| Source document: Reports speak of at least four people injured. The city is at the heart of the conflict between the Turkish government and Kurdish separatists. Interior Minister Suleyman Soylu said the blast happened at a vehicle repair unit, and appeared to be an accident. He said "it seems there is no outside interference, and the explosion came from the vehicle under repair". Mr Soylu said one person was trapped under rubble, another was seriously injured, and others had minor injuries. The blast brought a roof down, left a huge crater and a pall of smoke drifted over part of the city. The cause remains unclear. The banned Kurdistan Workers' Party (PKK) is active in the area. Turkey is five days away from a key referendum on granting President Recep Tayyip Erdogan sweeping new powers [...] PEGASUS summary (ROUGE-1: 10.53): Interior Minister Suleyman Soylu said the blast happened at a vehicle repair unit, and appeared to be an accident. Self-training summary (ROUGE-1: 32.43): The blast happened at a vehicle repair unit in the city of Diyarbakir, near the border with Syria. Ground truth summary: A large explosion has struck a police headquarters in the mainly Kurdish city of Diyarbakir in south-eastern Turkey. |
|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
## 5.2 Qualitative Samples
We refer to Appendix H for full qualitative unsupervised re-ranking examples on all datasets, and to Table 8 for an example of summary generated by the self-trained PEGASUS model on XSum.
As seen, both re-ranking and self-training can improve dramatically from the unsupervised PEGASUS baseline, capturing entirely new phrases.
## 5.3 Factual Consistency
As noted in Table 6, SummScore summaries tend to be more factually consistent than the baseline.
There is strong intuition to this result: since SummScore is built to maximize features of n-gram overlap and semantic similarity with the source, it should yield summaries closer to the source, and more factually consistent as a result. We investigate this further, and use two popular models to evaluate summarization factuality: the established factCC (Kryscinski et al., 2020) and the recently introduced state-of-the-art *QAFactEval* (Fabbri et al.,
2022). factCC uses a BERT model to classify each summary sentence as consistent or inconsistent with regards to the source, and reports the average accuracy over 100%. QAFactEval improves each step of the QA evaluation pipeline (answer selection, question generation, etc) and combines entailment with QA-based metrics into a learned metric. In Table 9, we observe that SummScore QAFactEval is consistently above PEGASUS, and SummScore factCC is better on news datasets too.
## 5.4 Learned Coefficients
We analyze coefficients learned by SummScore from a high level perspective in Table 10, gathering features from a same group together. Semantic similarity features are dominating (except for WikiHow), encouraging further research using newer semantic similarity metrics for re-ranking.
A finer-grain analysis, covering all SummScore
| Dataset | Factual consistency model | PEGASUS | SummScore |
|----------------|-----------------------------|-----------|-------------|
| CNN/DM factCC | 92.45 | 93.66 | |
| QAFactEval | 4.53 | 4.55 | |
| XSum | factCC | 96.78 | 97.53 |
| QAFactEval | 4.54 | 4.64 | |
| WikiHow factCC | 96.48 | 95.85 | |
| QAFactEval | 4.33 | 4.36 | |
| SAMSum factCC | 98.35 | 96.28 | |
| QAFactEval | 3.26 | 3.50 | |
| Dataset | PEGASUS | ChatGPT | | | | |
|-----------|-----------|-----------|--------|----------|---------|-------|
| N-gram | Semantic | Quality | N-gram | Semantic | Quality | |
| CNN/DM | 0.025 | 0.900 | 0.075 | 0.100 | 0.775 | 0.125 |
| XSum | 0.050 | 0.950 | 0.000 | 0.250 | 0.725 | 0.025 |
| WikiHow | 0.875 | 0.100 | 0.025 | 0.900 | 0.100 | 0.000 |
| SAMSum | 0.000 | 1.000 | 0.000 | 0.000 | 1.000 | 0.000 |
Table 10: Coefficients learned by SummScore in unsupervised abstractive summarization. We sum weights assigned to all features of each category defined in §3.2.
pseudo-labeling techniques, can be viewed in Tables 19 and 20 of Appendix G. SummScore -
Salient-R1 and SummScore - Salient-RL place much more emphasis on n-gram overlap with the source. In contrast, SummScore - LEAD-3 (which we use for self-training on CNN/DM, XSum, SAMSum) uses relatively more semantic similarity features like BERTScore, suggesting that it is able to exploit key semantic content contained in initial sentences.
## 6 Conclusion
We introduced SummScore, the first unsupervised abstractive summarization re-ranking system.
SummScore does not rely on a neural network:
instead, it builds features for each summary candidate, some of them using the source as well, and aggregates them into a final re-ranking score. Feature coefficients are estimated through tuning against a pseudo-label derived from the source document.
It is a simple framework which easily supports the addition of new features.
SummScore significantly improves the performance of the base summarization model, in terms of ROUGE, BERTScore, factual consistency, and human preference ; in both unsupervised and zeroshot transfer scenarios. Moreover, SummScore selected summary candidate naturally extends into a self-training objective for abstractive summarization, which improves unsupervised summarization.
## Limitations
As a second-stage method, SummScore requires access to a base abstractive summarization model generating summary candidates. Generating up to 20 summary candidates per data point can take a long time, especially on training sets, which is needed for the self-training use case. Besides, even though SummScore does not need to train a new neural network, we also need to generate all eight features for each summary candidate once all candidates are generated. N-gram overlap features are very fast, but model-based semantic similarity features (e.g, BERTScore) can be time-consuming to extract, once again, especially on entire training sets.
While SummScore will significantly improve the quality of the base model across base models and datasets, ultimately, the performance of the final selected summary is bounded by the capacity of this base model: SummScore improves more PEGASUS than it does on ChatGPT ; but PEGASUS
performance drags ChatGPT.
Another limitation lays in the metric used to compare summary candidates with the pseudo-target.
We used mean ROUGE, although a model-based semantic similarity metric would make sense too, but at a much greater computational cost.
## Acknowledgements
This research was supported by the SINGA scholarship and partially supported by the National Research Foundation, Prime Minister's Office, Singapore under its Campus for Research Excellence and Technological Enterprise (CREATE) programme.
We thank anonymous reviewers for a fruitful discussion, especially with regards to evaluation of the factual consistency. We also thank Florian Le Bronnec and Jiajing Zhang for their proofreading.
## References
Christos Baziotis, Ion Androutsopoulos, Ioannis Konstas, and Alexandros Potamianos. 2019. SEQˆ3:
Differentiable sequence-to-sequence-to-sequence autoencoder for unsupervised abstractive sentence compression. In *Proceedings of the 2019 Conference of* the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 673–681, Minneapolis, Minnesota. Association for Computational Linguistics.
Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics.
Alexander Fabbri, Simeng Han, Haoyuan Li, Haoran Li, Marjan Ghazvininejad, Shafiq Joty, Dragomir Radev, and Yashar Mehdad. 2021. Improving zero and few-shot abstractive summarization with intermediate fine-tuning and data augmentation. In *Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational* Linguistics: Human Language Technologies, pages 704–717, Online. Association for Computational Linguistics.
Alexander Fabbri, Chien-Sheng Wu, Wenhao Liu, and Caiming Xiong. 2022. QAFactEval: Improved QAbased factual consistency evaluation for summarization. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 2587–2601, Seattle, United States. Association for Computational Linguistics.
Yang Gao, Wei Zhao, and Steffen Eger. 2020. SUPERT:
Towards new frontiers in unsupervised evaluation metrics for multi-document summarization. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*, pages 1347–
1354, Online. Association for Computational Linguistics.
Bogdan Gliwa, Iwona Mochol, Maciej Biesek, and Aleksander Wawer. 2019. SAMSum corpus: A humanannotated dialogue dataset for abstractive summarization. In *Proceedings of the 2nd Workshop on* New Frontiers in Summarization, pages 70–79, Hong Kong, China. Association for Computational Linguistics.
Tanya Goyal, Junyi Jessy Li, and Greg Durrett. 2022.
News summarization and evaluation in the era of gpt-3. *arXiv preprint arXiv:2209.12356*.
Karl Moritz Hermann, Tomas Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 2015. Teaching machines to read and comprehend. Advances in neural information processing systems, 28.
Ari Holtzman, Jan Buys, Li Du, Maxwell Forbes, and Yejin Choi. 2019. The curious case of neural text degeneration. *arXiv preprint arXiv:1904.09751*.
Mahnaz Koupaee and William Yang Wang. 2018. Wikihow: A large scale text summarization dataset. arXiv preprint arXiv:1810.09305.
Wojciech Kryscinski, Nitish Shirish Keskar, Bryan McCann, Caiming Xiong, and Richard Socher. 2019.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Neural text summarization: A critical evaluation. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 540–551, Hong Kong, China. Association for Computational Linguistics.
Wojciech Kryscinski, Bryan McCann, Caiming Xiong, and Richard Socher. 2020. Evaluating the factual consistency of abstractive text summarization. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP),
pages 9332–9346, Online. Association for Computational Linguistics.
Wojciech Krysci ´ nski, Romain Paulus, Caiming Xiong, ´
and Richard Socher. 2018. Improving abstraction in text summarization. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1808–1817, Brussels, Belgium.
Association for Computational Linguistics.
Philippe Laban, Andrew Hsi, John Canny, and Marti A.
Hearst. 2020. The summary loop: Learning to write abstractive summaries without examples. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5135–5150, Online. Association for Computational Linguistics.
Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020.
BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*,
pages 7871–7880, Online. Association for Computational Linguistics.
Quentin Lhoest, Albert Villanova del Moral, Yacine Jernite, Abhishek Thakur, Patrick von Platen, Suraj Patil, Julien Chaumond, Mariama Drame, Julien Plu, Lewis Tunstall, Joe Davison, Mario Šaško, Gunjan Chhablani, Bhavitvya Malik, Simon Brandeis, Teven Le Scao, Victor Sanh, Canwen Xu, Nicolas Patry, Angelina McMillan-Major, Philipp Schmid, Sylvain Gugger, Clément Delangue, Théo Matussière, Lysandre Debut, Stas Bekman, Pierric Cistac, Thibault Goehringer, Victor Mustar, François Lagunas, Alexander Rush, and Thomas Wolf. 2021. Datasets: A community library for natural language processing. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 175–184, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Chin-Yew Lin. 2004. ROUGE: A package for automatic evaluation of summaries. In *Text Summarization Branches Out*, pages 74–81, Barcelona, Spain.
Association for Computational Linguistics.
Peter J Liu, Yu-An Chung, and Jie Ren. 2019a. Summae: Zero-shot abstractive text summarization using length-agnostic auto-encoders. arXiv preprint arXiv:1910.00998.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019b.
Roberta: A robustly optimized bert pretraining approach. *arXiv preprint arXiv:1907.11692*.
Yixin Liu, Zi-Yi Dou, and Pengfei Liu. 2021. RefSum:
Refactoring neural summarization. In *Proceedings* of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics:
Human Language Technologies, pages 1437–1448, Online. Association for Computational Linguistics.
Yixin Liu and Pengfei Liu. 2021. SimCLS: A simple framework for contrastive learning of abstractive summarization. In *Proceedings of the 59th Annual* Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 1065–1072, Online. Association for Computational Linguistics.
Yixin Liu, Pengfei Liu, Dragomir Radev, and Graham Neubig. 2022a. BRIO: Bringing order to abstractive summarization. In *Proceedings of the 60th Annual* Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2890–2903, Dublin, Ireland. Association for Computational Linguistics.
Yixin Liu, Pengfei Liu, Dragomir Radev, and Graham Neubig. 2022b. Brio: Bringing order to abstractive summarization. *arXiv preprint arXiv:2203.16804*.
Shashi Narayan, Shay B. Cohen, and Mirella Lapata.
2018. Don't give me the details, just the summary!
topic-aware convolutional neural networks for extreme summarization. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1797–1807, Brussels, Belgium. Association for Computational Linguistics.
Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In *Proceedings of the* 40th Annual Meeting of the Association for Computational Linguistics, pages 311–318, Philadelphia, Pennsylvania, USA. Association for Computational Linguistics.
Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. 2019. Exploring the limits of transfer learning with a unified text-to-text transformer. *arXiv preprint arXiv:1910.10683*.
Mathieu Ravaut, Shafiq Joty, and Nancy Chen. 2022a.
SummaReranker: A multi-task mixture-of-experts re-ranking framework for abstractive summarization.
In *Proceedings of the 60th Annual Meeting of the* Association for Computational Linguistics (Volume 1: Long Papers), pages 4504–4524, Dublin, Ireland.
Association for Computational Linguistics.
Mathieu Ravaut, Shafiq Joty, and Nancy Chen. 2022b.
Towards summary candidates fusion. In *Proceedings of the 2022 Conference on Empirical Methods* in Natural Language Processing, pages 8488–8504, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
Raj Reddy. 1977. Speech understanding systems: A
summary of results of the five-year research effort at carnegie mellon university. *Pittsburgh, Pa*.
Abigail See, Peter J. Liu, and Christopher D. Manning.
2017. Get to the point: Summarization with pointergenerator networks. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1073–
1083, Vancouver, Canada. Association for Computational Linguistics.
Thibault Sellam, Dipanjan Das, and Ankur Parikh. 2020.
BLEURT: Learning robust metrics for text generation. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7881–7892, Online. Association for Computational Linguistics.
Noam Shazeer and Mitchell Stern. 2018. Adafactor:
Adaptive learning rates with sublinear memory cost.
In *International Conference on Machine Learning*,
pages 4596–4604. PMLR.
Mirac Suzgun, Luke Melas-Kyriazi, and Dan Jurafsky.
2022. Follow the wisdom of the crowd: Effective text generation via minimum bayes risk decoding.
arXiv preprint arXiv:2211.07634.
Antti Tarvainen and Harri Valpola. 2017. Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results. In *Proceedings of the 31st International Conference on Neural Information Processing Systems*,
NIPS'17, page 1195–1204, Red Hook, NY, USA.
Curran Associates Inc.
Ashwin K Vijayakumar, Michael Cogswell, Ramprasath R Selvaraju, Qing Sun, Stefan Lee, David Crandall, and Dhruv Batra. 2016. Diverse beam search: Decoding diverse solutions from neural sequence models. *arXiv preprint arXiv:1610.02424*.
Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Transformers: State-of-the-art natural language processing.
In *Proceedings of the 2020 Conference on Empirical* Methods in Natural Language Processing: System Demonstrations, pages 38–45, Online. Association for Computational Linguistics.
Ziyi Yang, Chenguang Zhu, Robert Gmyr, Michael Zeng, Xuedong Huang, and Eric Darve. 2020. TED:
A pretrained unsupervised summarization model with theme modeling and denoising. In Findings of the Association for Computational Linguistics: EMNLP
2020, pages 1865–1874, Online. Association for Computational Linguistics.
Weizhe Yuan, Graham Neubig, and Pengfei Liu. 2021.
Bartscore: Evaluating generated text as text generation. *arXiv preprint arXiv:2106.11520*.
Jingqing Zhang, Yao Zhao, Mohammad Saleh, and Peter Liu. 2020. Pegasus: Pre-training with extracted gap-sentences for abstractive summarization. In *International Conference on Machine Learning*, pages 11328–11339. PMLR.
Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q
Weinberger, and Yoav Artzi. 2019. Bertscore: Evaluating text generation with bert. arXiv preprint arXiv:1904.09675.
Zhihao Zhang, Xinnian Liang, Yuan Zuo, and Zhoujun Li. 2022. Unsupervised abstractive summarization via sentence rewriting. *Computer Speech & Language*, page 101467.
## A Overlap With Simple Baselines
Simple Candidate
Selection **CNN/DM XSum WikiHow SAMSum**
Max R-1 w. source 16.33 21.38 60.36 38.71 Max R-2 w. source 21.45 24.32 **66.94** 44.93 Max BLEU w. source 16.68 18.92 58.44 38.95 Max BS w. source 43.46 **69.98** 35.50 **58.61** Max BaS w. source **47.15** 46.06 13.85 52.50
Max BRT w. source 14.74 13.43 15.75 23.32
Max diversity feature 5.40 5.95 1.45 4.40 Max length feature 11.80 7.46 14.19 13.68 Top beam 15.05 12.85 9.18 30.28 Oracle candidate 15.24 12.27 10.73 18.44 Worst candidate 5.33 7.48 7.65 7.20 Longest candidate 20.58 22.74 64.43 51.28
Table 11: Overlap with simple re-reranking methods (%) in unsupervised abstractive summarization with PEGASUS. We report the fraction (in percentage) of test set data points on the which SummScore falls back to a trivial summary candidate selection: maximizing one of the input features, picking the top beam, one oracle or worst candidate, or the longest one.
All setups are with beam search with 20 candidates, thus a random baseline corresponds to 5% overlap.
We perform a sanity check counting the percentage of time that SummScore falls back to a *trivial* method of re-ranking summary candidates. For each feature described in §3.2, we report the overlap between SummScore and a re-ranking approach consisting in picking the summary candidate maximing this feature. We also report baselines consisting in picking the top beam, an oracle or a *worst* candidate, and the longest candidate. As seen in Tables 11 and 12, across both backbones PEGASUS and ChatGPT, SummScore never collapses
| Simple Candidate Selection | CNN/DM | XSum | WikiHow | SAMSum |
|------------------------------|----------|--------|-----------|----------|
| Max R-1 w. source | 16.00 | 32.10 | 58.70 | 14.53 |
| Max R-2 w. source | 33.50 | 50.30 | 79.80 | 17.34 |
| Max BLEU w. source | 17.80 | 31.20 | 57.10 | 12.58 |
| Max BS w. source | 54.50 | 75.50 | 44.80 | 24.05 |
| Max BaS w. source | 52.20 | 26.50 | 24.60 | 54.09 |
| Max BRT w. source | 10.20 | 14.60 | 14.20 | 29.79 |
| Max diversity feature | 9.60 | 1.90 | 1.00 | 3.30 |
| Max length feature | 3.50 | 0.80 | 2.10 | 11.48 |
| Oracle candidate | 9.00 | 1.80 | 9.00 | 12.21 |
| Worst candidate | 4.80 | 12.50 | 6.10 | 3.17 |
| Longest candidate | 10.90 | 22.70 | 39.60 | 6.47 |
to a trivial candidate selection, and we see similar patterns on the same dataset (e.g., highest overlap with a single feature selection is with BERTScore with source feature on CNN/DM).
## B Generation & Fine-Tuning Details
In Table 13, we show generation hyper-parameters used for each dataset to generate beam search summary candidates used in Table 3. For the transfer setup shown in Table 4, we use as generation hyper-parameters on each target dataset the parameters used on that dataset for Table 3. For instance, PEGASUS-XSum, PEGASUS-WikiHow and PEGASUS-SAMSum, when transferred to CNN/DM, are decoded with hyper-parameters of PEGASUS-CNN/DM shown in Table 13.
| Dataset | Model | Max source Max target | Length Trigram | |
|-----------------|---------|-------------------------|------------------|-----|
| length | length | penalty blocking | | |
| 1024 | 128 | 0.8 | Yes | |
| PEGASUS | | | | |
| CNN/DM BART | 1.0 | Yes | | |
| BRIO | 1.0 | Yes | | |
| PEGASUS | 512 | 64 | 0.8 | Yes |
| XSum | BART | 1.0 | Yes | |
| BRIO | 0.8 | Yes | | |
| WikiHow PEGASUS | 512 | 128 | 0.6 | No |
| BART | 1.0 | Yes | | |
| SAMSum PEGASUS | 512 | 64 | 0.8 | No |
| BART | 1.0 | Yes | | |
Table 13: Generation hyper-parameters for each dataset and model used to produce summary candidates.
For experiments shown in Table 4, we fine-tune ourselves BART on WikiHow dataset, and PEGASUS and BART on SAMSum dataset. Finetuning hyper-parameters are shown in Table 14.
We perform early stopping with regards to the mean ROUGE on the validation set. Our BART
reaches 44.21/19.31/34.67 ROUGE-1/2/L on WikiHow test set, our PEGASUS 52.33/27.97/44.02 ROUGE-1/2/L on SAMSum test set, and our BART
52.78/28.28/44.08 ROUGE-1/2/L.
Table 14: Fine-tuning hyper-parameters used to fine-tune BART on WikiHow and PEGASUS and BART on SAMSum.
## C Recall Analysis
Besides the quality of the selected summary, we also analyze re-ranking performance itself. In Fig. 2, Fig. 3, Fig. 4 and Fig. 5, we show recall curves on each dataset and for all unsupervised and zero-shot summarization setups. Recall@k is defined as the probability of outputting one of the oracle summary candidates (candidates maximizing the mean ROUGE with the target) among the first k candidates. We compare SummScore with the baseline beam search output, and a random candidate selection baseline.
In most cases, SummScore (green curves) provides higher recall, with the notable exception of XSum, where both beam search and SummScore and XSum can fail to improve the random baseline.
| Dataset | Model | Epochs | Optimizer | Scheduler | LR | BS | LS Eval steps | |
|----------------|---------|----------|-------------|-------------|------|------|-----------------|-----|
| WikiHow | BART | 15 | Adam | none | 1e-5 | 80 | 0.1 | 250 |
| SAMSum PEGASUS | 30 | Adam | none | 1e-4 | 256 | 0.1 | 50 | |
| BART | 30 | Adam | linear | 1e-5 | 80 | 0.1 | 50 | |
## D Abstractiveness Analysis
In Table 15, we show ROUGE results from Table 5 alongside abstractiveness results, as measured per the fraction of novel n-grams in output summaries, for re-ranking and self-training experiments. Maximizing both ROUGE and abstractiveness is notoriously difficult, as easy solutions for abstractiveness optimization can deviate a lot from the source, resulting in a harmed ROUGE score.
The unsupervised PEGASUS (first row of each block) is very extractive and only produces a small fraction of novel n-grams. SummScore selected summaries, despite maximizing a score which maximizes the mean ROUGE with pseudo-labels extracted from the source document, both improve the ROUGE and the abstractiveness level. However, SummScore re-ranking applied to self-trained models tends to reduce their abstractiveness level, although it stays above the level of the baseline PEGASUS. Paraphrased summaries drastically increase abstractiveness, at the expense of ROUGE -
![13_image_0.png](13_image_0.png) ![14_image_0.png](14_image_0.png) ![15_image_0.png](15_image_0.png) ![16_image_0.png](16_image_0.png)
| Dataset | Model | ROUGE | Abstractiveness (new n-grams) | | | | | |
|-----------------------------------------------------------------|---------|-------------------|---------------------------------|-------|-------|-------|-------|------|
| PEGASUS | 26.99 | 35.47 | 13.89 | 31.61 | 0.19 | 0.89 | 2.44 | |
| PEGASUS + SummScore LEAD-3 | 28.38 | 36.92 | 15.03 | 33.19 | 0.19 | 0.94 | 2.73 | |
| PEGASUS + SummScore LEAD-3 - paraphrasing 100% | 22.46 | 29.72 | 11.07 | 26.58 | 14.01 | 35.18 | 44.23 | |
| PEGASUS + SummScore LEAD-3 - paraphrasing 50% | 25.37 | 33.24 | 13.02 | 29.83 | 7.29 | 18.34 | 23.77 | |
| PEGASUS + SummScore LEAD-3 - paraphrasing 25% (pseudo-labels) | 26.85 | 35.06 13.99 31.49 | 3.73 | 9.71 | 13.36 | | | |
| PEGASUS + SummScore LEAD-3 - paraphrasing 12.5% | 27.61 | 35.99 | 14.50 | 32.35 | 1.95 | 5.29 | 7.98 | |
| CNN/DM PEGASUS self-trained (1st round) | 27.98 | 36.68 | 14.52 | 32.72 | 0.25 | 0.66 | 1.84 | |
| PEGASUS self-trained (1st round) + SummScore LEAD-3 | 29.88 | 38.75 | 16.11 | 34.78 | 0.10 | 0.43 | 1.60 | |
| PEGASUS self-trained (2nd round) | 29.40 | 38.17 | 15.77 | 34.25 | 0.66 | 1.49 | 2.61 | |
| PEGASUS self-trained (2nd round) + SummScore LEAD-3 | 30.59 | 39.49 16.69 35.61 | 0.21 | 0.93 | 2.15 | | | |
| PEGASUS self-trained (3rd round) | 29.63 | 38.47 | 15.95 | 34.48 | 0.68 | 1.72 | 2.74 | |
| PEGASUS self-trained (3rd round) + SummScore LEAD-3 | 30.80 | 39.76 | 16.79 | 35.85 | 0.11 | 0.99 | 2.25 | |
| PEGASUS | 11.83 | 18.77 | 2.86 | 13.85 | 0.20 | 0.44 | 1.16 | |
| PEGASUS + SummScore LEAD-3 | 12.45 | 19.62 | 3.02 | 14.71 | 0.19 | 0.60 | 2.04 | |
| PEGASUS + SummScore LEAD-3 - paraphrasing 100% (pseudo-labels) | 12.98 | 20.19 | 3.60 | 15.16 | 12.94 | 30.30 | 37.63 | |
| PEGASUS + SummScore LEAD-3 - paraphrasing 50% | 12.75 | 19.94 | 3.32 | 14.97 | 6.55 | 15.46 | 19.87 | |
| PEGASUS + SummScore LEAD-3 - paraphrasing 25% | 12.61 | 19.79 | 3.18 | 14.86 | 3.41 | 8.06 | 10.96 | |
| PEGASUS + SummScore LEAD-3 - paraphrasing 12.5% | 12.52 | 19.71 | 3.10 | 14.77 | 1.83 | 4.36 | 6.53 | |
| PEGASUS self-trained | 12.09 | 19.33 | 2.76 | 14.18 | 1.49 | 3.20 | 4.43 | |
| PEGASUS self-trained + SummScore LEAD-3 | 12.60 | 20.02 | 2.84 | 14.93 | 0.66 | 1.99 | 3.55 | |
| XSum | PEGASUS | 16.46 | 25.49 | 5.91 | 17.99 | 0.48 | 1.12 | 2.36 |
| PEGASUS + SummScore R-2 | 17.17 | 26.40 | 6.30 | 18.83 | 0.80 | 2.47 | 5.05 | |
| PEGASUS + SummScore R-2 - paraphrasing 100% | 16.75 | 25.59 | 6.19 | 18.47 | 4.65 | 17.13 | 26.14 | |
| PEGASUS + SummScore R-2 - paraphrasing 50% (pseudo-labels) | 16.97 | 26.01 | 6.26 | 18.62 | 2.79 | 9.82 | 15.55 | |
| WikiHow PEGASUS + SummScore R-2 - paraphrasing 25% | 17.08 | 26.24 | 6.27 | 18.73 | 1.81 | 6.14 | 10.28 | |
| PEGASUS + SummScore R-2 - paraphrasing 12.5% | 17.13 | 26.32 | 6.28 | 18.79 | 1.31 | 4.34 | 7.71 | |
| PEGASUS self-trained | 16.92 | 26.08 | 6.08 | 18.59 | 0.84 | 1.80 | 3.56 | |
| PEGASUS self-trained + SummScore R-2 | 17.27 | 26.50 | 6.28 | 19.03 | 0.61 | 1.71 | 4.02 | |
| PEGASUS | 18.57 | 26.64 | 6.32 | 22.75 | 0.30 | 1.35 | 2.81 | |
| PEGASUS + SummScore LEAD-3 | 19.92 | 28.22 | 7.16 | 24.39 | 0.54 | 1.73 | 3.85 | |
| PEGASUS + SummScore LEAD-3 - paraphrasing 100% | 15.95 | 22.84 | 4.14 | 20.88 | 15.08 | 37.45 | 50.66 | |
| PEGASUS + SummScore LEAD-3 - paraphrasing 50% | 17.77 | 25.34 | 5.55 | 22.43 | 7.45 | 18.83 | 26.22 | |
| SAMSum PEGASUS + SummScore LEAD-3 - paraphrasing 25% | 18.88 | 26.83 | 6.40 | 23.41 | 3.93 | 9.75 | 14.23 | |
| PEGASUS + SummScore LEAD-3 - paraphrasing 12.5% (pseudo-labels) | 19.33 | 27.41 | 6.73 | 23.84 | 2.28 | 5.85 | 9.29 | |
| PEGASUS self-trained | 18.92 | 26.96 | 6.41 | 23.40 | 0.36 | 1.51 | 3.35 | |
| PEGASUS self-trained + SummScore LEAD-3 | 20.67 | 28.91 | 7.55 | 25.54 | 0.60 | 2.18 | 4.93 | |
except on XSum where paraphrasing also improves ROUGE, motivating our choice to use 100% paraphrased summaries as pseudo-labels. We confirm that our pseudo-labels for self-training, made of a blend of SummScore selected summaries and selected summaries being paraphrased, maintains high ROUGE while being much more abstractive than the baseline PEGASUS.
## E Paraphrasing Model
For each dataset, we fine-tune BART-large (Lewis et al., 2020) (from the pre-training checkpoint facebook/bart-large in HuggingFace transformers
(Wolf et al., 2020)) for paraphrasing. The model is trained to paraphrase blocks of n = 3 sentences on CNN/DM, n = 1 sentence on XSum, and n = 2 sentences on WikiHow and SAMSum, in line with average summary lengths on these datasets. We train the model with Adafactor (Shazeer and Stern, 2018) for 5 epochs, with effective batch size 32, learning rate 2e-5, and no weight decay nor label smoothing. We evaluate every 500 optimization steps on CNN/DM, XSum, and WikiHow, and every 100 steps on SAMSum. At inference, we use beam search with beam width 5 and length penalty of 1.0, and block repeated trigrams like in (Krys-´
cinski et al. ´ , 2018).
Table 16: ROUGE results of the paraphrasing model, on the validation set of each dataset. We report the mean of ROUGE1/2/L.
| Dataset | CNN/DM | XSum | WikiHow | SAMSum |
|--------------------|----------|--------|-----------|----------|
| Paraphrasing model | 32.88 | 15.58 | 20.34 | 17.44 |
We track the mean of ROUGE-1, ROUGE-2 and ROUGE-L between the generated paraphrase and target paraphrase on the validation set during training, and perform early stopping. Best mean ROUGE
results are shown in Table 16.
Next, we study the impact of the paraphrasing model on the SummScore pseudo-targets. In Ta-
| Dataset | Mean R | New | New | New |
|-----------|----------|---------|-------|-------|
| 1-grams | 2-grams | 3-grams | | |
| CNN/DM | 55.80 | 17.28 | 34.58 | 39.61 |
| XSum | 62.13 | 20.93 | 34.60 | 38.59 |
| WikiHow | 81.26 | 7.96 | 20.14 | 25.60 |
| SAMSum | 50.64 | 22.52 | 41.29 | 52.02 |
ble 17, we compute the mean ROUGE between pseudo-targets and their paraphrase, and analyze the novel n-grams. We point out that the paraphrasing is only applied to the *training* pseudo-labels as the goal of paraphrasing is to encourage the model to learn diversity during self-training, hence Table 17 reporting results on *training sets*. On each dataset, the mean ROUGE is in the 50-80 range, indicating that the paraphrased pseudo-labels do not deviate too much from the original pseudo-labels and yet is able to re-write some content. Besides, there is a high proportion of new n-grams: more than 10% new 1-grams (with the exception of WikiHow on the which the paraphrasing model seems to struggle more to rephrase the input), and more than 20% 2-grams.
| Decoding | Candidate | # Candidates | | | |
|---------------------|-------------|----------------|-------|-------|-------|
| method | Selection | 5 | 10 | 15 | 20 |
| Beam search | PEGASUS | 26.74 | 27.00 | 27.00 | 26.99 |
| SummScore | 27.46 | 28.01 | 28.33 | 28.38 | |
| Diverse beam search | PEGASUS | 26.08 | 26.08 | 26.07 | 26.01 |
| SummScore | 26.98 | 27.48 | 27.76 | 27.87 | |
| Nucleus sampling | PEGASUS | 23.92 | 23.95 | 24.04 | 24.03 |
| SummScore | 26.13 | 26.57 | 26.85 | 27.11 | |
| All three methods | SummScore | 15 | 30 | 45 | 60 |
| 27.87 | 28.35 | 28.34 | 28.59 | | |
## F Other Summary Candidates Setups
In Table 18, we apply SummScore outside of the standard beam search with 20 beams setup. Results show that SummScore performance continuously improves with more summary candidates, whereas the top beam stays around the same level. Besides, SummScore relative gains are stronger with lower quality decoding methods diverse beam search and nucleus sampling. Lastly, combining 20 summary candidates from each of the three decoding methods yields a pool of 60 summary candidates, out of the which SummScore re-ranking can improve by an extra +0.21 mean ROUGE the performance compared to re-ranking 20 beam search candidates
(28.59 mean ROUGE vs 28.38). Overall, we recommend our default setup of beam search with 20 beams to apply SummScore re-ranking. A greater number of beams becomes difficult to fit into a standard GPU with 16 GB memory.
## G Learned Coefficients
In Table 19 (PEGASUS backbone) and Table 20
(ChatGPT backbone), we show coefficients found by SummScore (for each of the five methods to select pseudo-labels which we studided), and on each dataset, including when applying SummScore again on top the self-trained models. For the sake of conciseness, we do not include SummScore coefficients obtained in zero-shot setups. BERTScore with source appears as the feature which consistently receives the highest weight for SummScore
- Random-3 and SummScore - LEAD-3 ; while ROUGE-2 with source dominates for SummScore -
Salient-R1/R2/RL. *Diversity* and *Length* features are significantly less used.
## H Re-Ranking Examples
In the following, beam search output (for PEGASUS) or the first candidate from top-p sampling
(for ChatGPT) is in orange, SummScore selected summary candidate in blue, and oracle candidate(s)
in teal. On each dataset, we show one re-ranking example on the unsupervised PEGASUS and/or ChatGPT (Table 3), one zero-shot re-ranking example selected from Table 4, and one re-ranking example applied on top of the self-trained PEGASUS (Table 5).
Dataset Model ROUGE-1 ROUGE-2 BLEU BERTScore BARTScore BleuRT Diversity Length
CNN/DM
SummScore - Random-3 0.0000 **0.5700** 0.0300 0.2681 0.0000 0.0069 0.1250 0.0000
SummScore - LEAD-3 (**selected SummScore version**) 0.0000 0.0250 0.0000 **0.4275** 0.3375 0.1350 0.0500 0.0250
SummScore - Salient-R1 0.0850 **0.7650** 0.0000 0.1000 0.0031 0.0219 0.0000 0.0250 SummScore - Salient-R2 0.1444 0.1856 **0.4950** 0.1050 0.0000 0.0450 0.0000 0.0250 SummScore - Salient-RL 0.1062 **0.7438** 0.0000 0.1000 0.0031 0.0219 0.0000 0.0250 Self-training (1st round) + SummScore - LEAD-3 0.0000 0.0000 0.0000 **0.4500** 0.4275 0.0225 0.1000 0.0000
Self-training (2nd round) + SummScore - LEAD-3 0.0000 0.0000 0.0000 **0.6338** 0.2925 0.0488 0.0250 0.0000
Self-training (3rd round) + SummScore - LEAD-3 0.0000 0.0500 0.0000 **0.8075** 0.1425 0.0000 0.0000 0.0000
| CNN/DM XSum WikiHow SAMSum |
|------------------------------|
SummScore - Random-3 0.0287 **0.5462** 0.0000 0.1200 0.0900 0.1900 0.0250 0.0000
SummScore - LEAD-3 (**selected SummScore version**) 0.0500 0.0000 0.0000 **0.7837** 0.1425 0.0238 0.0000 0.0000
SummScore - Salient-R1 0.1275 **0.7225** 0.0000 0.0338 0.0000 0.0413 0.0000 0.0750 SummScore - Salient-R2 **0.8000** 0.0000 0.0000 0.0000 0.0000 0.2000 0.0000 0.0000 SummScore - Salient-RL 0.1200 0.1600 **0.5200** 0.1550 0.0000 0.0450 0.0000 0.0000
Self-training (1st round) + SummScore - LEAD-3 0.0000 0.0000 0.0000 **0.5550** 0.3700 0.0000 0.0250 0.0500
WikiHow
SummScore - Random-3 0.0100 0.0400 0.0000 **0.9025** 0.0238 0.0238 0.0000 0.0000 SummScore - LEAD-3 0.0000 0.0000 0.0000 **0.7312** 0.2437 0.0000 0.0250 0.0000 SummScore - Salient-R1 0.1094 **0.7656** 0.0000 0.0825 0.0000 0.0175 0.0250 0.0000
SummScore - Salient-R2 (selected SummScore version) **0.8750** 0.0000 0.0000 0.0825 0.0000 0.0175 0.0250 0.0000
SummScore - Salient-RL 0.2625 **0.6125** 0.0000 0.0825 0.0000 0.0175 0.0250 0.0000
Self-training (1st round) + SummScore - Salient-R2 **0.5031** 0.1750 0.1969 0.0625 0.0050 0.0325 0.0250 0.0000
SAMSum
SummScore - Random-3 0.0300 0.2625 0.0075 **0.4900** 0.2100 0.0000 0.0000 0.0000
SummScore - LEAD-3 (**selected SummScore version**) 0.0000 0.0000 0.0000 **0.7750** 0.2250 0.0000 0.0000 0.0000
SummScore - Salient-R1 0.1650 **0.6600** 0.0000 0.0000 0.0000 0.0000 0.1250 0.0500 SummScore - Salient-R2 0.0731 **0.8044** 0.0975 0.0000 0.0000 0.0000 0.0000 0.0250 SummScore - Salient-RL 0.1950 **0.7800** 0.0000 0.0000 0.0000 0.0000 0.0000 0.0250 Self-training (1st round) + SummScore - LEAD-3 0.0000 0.0000 0.0000 **0.8500** 0.1500 0.0000 0.0000 0.0000
Dataset Model ROUGE-1 ROUGE-2 BLEU BERTScore BARTScore BleuRT Diversity Length
CNN/DM
SummScore - Random-3 0.0600 0.2400 0.0000 **0.3881** 0.1437 0.0431 0.1250 0.0000
SummScore - LEAD-3 (**selected SummScore version**) 0.0000 0.0975 0.0025 **0.5038** 0.2712 0.0000 0.1250 0.0000
SummScore - Salient-R1 0.2925 **0.6075** 0.0000 0.0925 0.0025 0.0050 0.0000 0.0000 SummScore - Salient-R2 **0.3825** 0.3375 0.1800 0.0850 0.0075 0.0075 0.0000 0.0000 SummScore - Salient-RL 0.2925 **0.6075** 0.0000 0.0825 0.0000 0.0175 0.0000 0.0000
XSum
SummScore - Random-3 0.0581 **0.4844** 0.2325 0.1350 0.0150 0.0500 0.0250 0.0000
SummScore - LEAD-3 (**selected SummScore version**) 0.0250 0.2250 0.0000 **0.6525** 0.0544 0.0181 0.0250 0.0000
SummScore - Salient-R1 0.1575 **0.6525** 0.0900 0.0775 0.0025 0.0200 0.0000 0.0000 SummScore - Salient-R2 0.2700 **0.4950** 0.1350 0.0800 0.0050 0.0150 0.0000 0.0000
SummScore - Salient-RL 0.3600 **0.5400** 0.0000 0.0750 0.0050 0.0200 0.0000 0.0000
WikiHow
SummScore - Random-3 0.0600 **0.4800** 0.0600 0.3000 0.0281 0.0469 0.0250 0.0000 SummScore - LEAD-3 0.0000 0.1187 0.0063 **0.7200** 0.0800 0.0000 0.0750 0.0000 SummScore - Salient-R1 **0.4950** 0.3150 0.0900 0.0850 0.0050 0.0100 0.0000 0.0000
SummScore - Salient-R2 (**selected SummScore version**) 0.3825 **0.4950** 0.0225 0.0875 0.0050 0.0075 0.0000 0.0000
SummScore - Salient-RL **0.4950** 0.3150 0.0900 0.0850 0.0050 0.0100 0.0000 0.0000
SAMSum
SummScore - Random-3 (**selected SummScore version**) 0.0000 0.0000 0.0000 0.0925 0.3006 **0.5319** 0.0000 0.0750
SummScore - LEAD-3 0.0000 0.0000 0.0000 0.0750 0.3250 **0.6000** 0.0000 0.0000 SummScore - Salient-R1 0.0000 0.0000 0.0000 0.1500 0.2250 **0.6250** 0.0000 0.0000 SummScore - Salient-R2 0.0000 0.0000 0.0000 0.0250 0.2250 **0.7500** 0.0000 0.0000 SummScore - Salient-RL 0.0000 0.0000 0.0000 0.1500 0.2250 **0.6250** 0.0000 0.0000
| CNN/DM: re-ranking from the unsupervised PEGASUS | |
|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| Royal Dutch Shell Plc said it . has filed a complaint in federal court in Alaska seeking an . order to remove Greenpeace activists who climbed aboard an oil . rig in the Pacific Ocean bound for the Arctic on Monday in a . protest against Arctic drilling. The environmental group said in a statement its team would . occupy the underside of the main deck of the Polar Pioneer, which is under contract to Shell, and plans to unfurl a banner . with the names of millions of people opposed to Arctic drilling. The group said the activists would not interfere with the . vessel's navigation. Scroll down for video . On the rig: Greenpeace activists scale the Polar Pioneer drill rig in the Pacific Ocean . Map: The activists boarded the rig just 750 miles northwest of Hawaii as it makes its journey to the Arctic . At dawn on Monday, the six, from the USA, Germany, New Zealand, Australia, Sweden and Austria, sped towards the Polar Pioneer in inflatable boats launched from the Greenpeace ship Esperanza . Climbers: All Greenpeace activists aboard the rig are experienced climbers and say they don't plan to interfere with the ship's course . 'We're here to highlight that in less than 100 days Shell is . going to the Arctic to drill for oil,' 32-year-old Johno Smith, . one of the six to board the Blue Marlin, the ship carrying the . rig, said in the statement. 'Shell's actions are exploiting the melting ice to increase . a man-made disaster. Climate change is real,' he added. Shell said in an emailed statement that it has met with .groups against oil drilling off Alaska's shores and 'respect . their views' but condemned the boarding. 'We can confirm that protesters from Greenpeace have . illegally boarded the Polar Pioneer, under contract to Shell, jeopardizing not only the safety of the crew on board, but the . protesters themselves,' Shell said. The move comes just days after the U.S. Interior Department . upheld a 2008 lease sale in the Chukchi Sea off Alaska, moving. Shell a step closer to returning to oil and gas exploration in . the Arctic since it suffered mishaps in the region in 2012. The people vs shell: The activists hope they will draw media attention to oil drilling in the Arctic . Reveal a list: Greenpeace activists scale the Polar Pioneer drill rig in the Pacific Ocean to unfurl a banner with the names of millions of people opposed to Arctic drilling . Long haul: The activists used ropes and climbing equipment to scale the 38,000-tonne platform . Many environmentalists oppose offshore energy exploration in . the Arctic, saying that once production begins any oil spill . would be extremely difficult to clean up. Oil industry interests say the Arctic will be important to . the United States' energy security in coming decades when output . from shale formations is expected to wane. Images published by Greenpeace showed the activists using . climbing gear to move from an inflatable boat onto the Blue . Marlin heavy-lift vessel towing the Pioneer, one of two drill . rigs heading to the region, as it cruised some 750 miles (1,207 . km) northwest of Hawaii. The six activists planned to camp on the 38,000-tonne Polar Pioneer platform, which they boarded using inflatable boats from he Greenpeace vessel 'Esperanza.' Tweeting from the rig: Aliyah Field tweeted she'd love some coffee but that the sunrise over the Pacific is gorgeous even from the side of the oil rig . Many names: Aliyah maybe referring to the list of names the activists will hang showing all the people who are opposed to oil drilling in the arctic . The six - from the United States, Germany, New Zealand, Australia, Sweden and Austria - have supplies for several days and can communicate with the outside world, Greenpeace said. 'We made it! We're on Shell's platform. And we're not alone. Everyone can help turn this into a platform for people power!' tweeted one of the six, Aliyah Field. Johno Smith from New Zealand added: 'We're here to highlight that in less than 100 days Shell is going to the Arctic to drill for oil. 'This pristine environment needs protecting for future generations and all life that will call it home. But instead Shell's actions are exploiting the melting ice to increase a man-made disaster.' A Shell spokeswoman, Kelly op de Weegh, blasted the action. 'We can confirm that protestors from Greenpeace have illegally boarded the 'Polar Pioneer,'under contract to Shell, jeopardizing not only the safety of the crew on board, but the protestors themselves,' she said on Monday. She added: 'Shell has met with organizations and individuals who oppose energy exploration offshore Alaska. We respect their views and value the dialogue. 'We will not, however, condone the illegal tactics employed by Greenpeace. Nor will we allow these stunts to distract from preparations underway to execute a safe and responsible exploration program,' she said in a statement. | |
| Beam #1 | Summary 'We're here to highlight that in less than 100 days Shell is. Scores Mean ROUGE: 6.55 (rank 11) || SummScore rank: 20 'We're here to highlight that in less than 100 days Shell is.' Greenpeace activists boarded the Polar Pioneer drill rig in the Pacific Ocean to unfurl a banner with the names of millions of people opposed to oil drilling in the Arctic. Greenpeace activists climbed aboard an oil drilling rig off the coast of Alaska on Monday in a |
| Beam #2 | Summary protest against Shell's plans to drill for oil in Arctic waters, the environmental group said in a statement. The group said the activists would not interfere with the vessel's navigation, but the company said it had filed a complaint in federal court in Alaska seeking an order to remove the activists. Scores Mean ROUGE: 43.17 (rank 3) || SummScore rank: 2 'We're here to highlight that in less than 100 days Shell is.' Greenpeace activists boarded the Polar Pioneer drill rig in the Pacific Ocean to unfurl a banner with the names of millions of people opposed to oil drilling in the Arctic. Greenpeace activists climbed aboard an oil drilling rig off the coast of Alaska on Monday in a protest |
| Beam #3 | Summary against Shell's plans to drill for oil in Arctic waters, the environmental group said in a statement. The group said the activists would not interfere with the ship's navigation, but the company said it had filed a complaint in federal court in Alaska seeking an order to remove the activists. Scores Mean ROUGE: 42.85 (rank 4) || SummScore rank: 1 (SummScore output) 'We're here to highlight that in less than 100 days Shell is.' Greenpeace activists boarded the Polar Pioneer drill rig in the Pacific Ocean to unfurl a banner with the names of millions of people opposed to oil drilling in the Arctic. Greenpeace activists climbed aboard an oil drilling rig off the coast of Alaska on Monday in a protest against Shell's |
| Beam #4 | Summary plans to drill for oil in Arctic waters, the environmental group said in a statement. The group said the activists would not interfere with the vessel's navigation, but the company said it had filed a complaint in federal court in Alaska seeking an order to remove the activists from Scores Mean ROUGE: 43.59 (rank 2) || SummScore rank: 12 |
| ... | 'We're here to highlight that in less than 100 days Shell is.' Greenpeace activists boarded the Polar Pioneer drill rig in the Pacific Ocean to unfurl a banner with the names of millions of people opposed to oil drilling in the Arctic. Greenpeace activists climbed aboard an oil drilling rig off the coast of Alaska on Monday in a protest against |
| Beam #10 | Summary Shell's plans to drill for oil in Arctic waters, the environmental group said in a statement. The group said the activists would not interfere with the vessel's navigation, but the company said it had filed a complaint in federal court in Alaska seeking an order to remove them from the Scores Mean ROUGE: 43.91 (rank 1) || SummScore rank: 6 |
| ... Source | Shell has filed a complaint in federal court in Alaska seeking an order to remove Greenpeace activists who climbed aboard an oil rig in the Pacific . The environmental group said in a statement its team would occupy the underside of the main deck of the Polar Pioneer . |
Reference
Shell has filed a complaint in federal court in Alaska seeking an order to remove Greenpeace activists who climbed aboard an oil rig in the Pacific .
The environmental group said in a statement its team would occupy the underside of the main deck of the Polar Pioneer .
The six activists are camping on the 38,000-tonne Polar Pioneer platform, which they boarded using inflatable boats from the Greenpeace vessel 'Esperanza'
'We made it! We're on Shell's platform. And we're not alone. Everyone can help turn this into a platform for people power!' tweeted Aliyah Field .
Table 21: SummScore re-ranking applied to the unsupervised PEGASUS with beam search on CNN/DM.
| CNN/DM: re-ranking from ChatGPT | |
|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| Although Hillary Clinton boasts a robust 3.6 million Twitter followers, not even a vast right-wing conspiracy would be able to interact with 2 million of them. According to two popular online measuring tools, no more than 44 per cent of her Twitter fan base consists of real people who are active in using the social media platform. And at least 15 per cent - more than 544,000 - are completely fake. StatusPeople.com, the oldest publicly available Twitter-auditing tool, reports that 44 per cent of the former secretary of state's followers are 'good'; 15 per cent are 'fake'; and 41 per cent are 'inactive,' meaning that they never tweet or reply to any tweets. SCROLL DOWN FOR VIDEO . FAKERS: According to one popular online audit tool, only 44 per cent of Hillary Clinton's Twitter followers are real people who participate on the social media platform . 'I'M RUNNING FOR PRESIDENT': Clinton has cast herself as a champion of 'everyday Americans' Another Twitter sleuthing website sampled more than 320,000 of Clinton's followers and found that a much larger number of them were 'fake' Just 4 per cent of President Barack Obama's Twitter followers, by comparison, are considered fake. The White House worked overtime to purge most of them after a September 2013 report found that more than half of his followers didn't really exist. Michelle Obama's Twitter audience is 25 per cent fake, according to StatusPeople, along with 21 per cent of Vice President Joe Biden's. Another tool, TwitterAudit.com, sampled 320,000 of Mrs. Clinton's followers and found that 18 per cent were fake. The new measurements will add to the Clinton presidential campaign's embarrassment following news on Tuesday that a large number of her Facebook fans may represent 'likes' that were purchased rather than earned. REALLY? Hillary Clinton's Twitter follower-count appears to be significantly inflated . FACEBOOK FAKERY: Clinton boasts two-thirds of a million Facebook 'likes,' but more than 46,000 of them list 'Baghdad' as their hometown . Vocativ reported that at least 7 per cent of them listed Baghdad, Iraq as their hometown, a larger number than any U.S. city. That would represent more than 46,000 people. Additional evidence of digital astroturfing is that while most of her U.S. Facebook fans are older than 55, most of the Baghdad contingent is in the 18-34 age range. While Clinton was America's top diplomat, her State Department was buying Facebook 'likes,'according to an Associated Press report from last year. 'In 2013, the State Department, which has more than 400,000 likes and was recently most popular in Cairo, said it would stop buying Facebook fans after its inspector general criticized the agency for spending $630,000 to boost the numbers,' the wire service reported. Clinton left the agency while the inspector general was investigating. ASTROTURF? A woman cast as a kindly, gardening grandma in Clinton's campaign launch video is actually a former abortion lobbyist who once helmed a campaign for Texas state Senator Wendy Davis . WHERE'S SCOOBY? Clinton's caravan was spotted in Maumee, Ohio on Monday when she lunched at Chipotle, but no one recognized her . Facebook says it has entire teams of employees tasked with rooting out schemes to artificially boost statistics. 'When we catch fraudulent activity, we work to counter and prevent it - including blocking accounts and removing fake likes,' the company reports on its website. Clinton is in Monticello, Iowa on Tuesday for the first stop on her nascent presidential 'listening' tour. She was driven from New York in a van that her campaign aides have dubbed 'Scooby,' after the 'Mystery Machine' van in the Scooby-Doo cartoon series. Daily Mail Online visited Kirkwood Community College on Monday, the site of the event, and heard from students that the former first lady is a 'control freak' who may be pursuing legal status for illegal immigrants because the Democratic Party needs a new pool of loyal voters. Online measuring tools show that only 44% of Hillary Clinton's Twitter followers are real people who actively use the platform, with 15% completely fake. | |
| Candidate #1 | Summary Comparatively, just 4% of President Obama's Twitter followers are fake. Clinton's Facebook audience has also come under scrutiny, with reports of purchased likes and a large number of followers from Baghdad, Iraq. Scores Mean ROUGE: 23.88 (rank 9) || SummScore rank: 18 At least 15% of Hillary Clinton's Twitter followers are fake, according to two online measuring tools, with more than 544,000 completely fake. In comparison, just 4% of |
| Candidate #2 | Summary Barack Obama's Twitter followers are fake. Clinton's recent embarrassment of having a large number of Facebook likes purchased rather than earned further adds to the digital astroturfing speculation. Scores Mean ROUGE: 19.60 (rank 26) || SummScore rank: 15 An investigation has revealed that at least 15% of Hillary Clinton's Twitter followers are completely fake. The same investigation also found that only 44% of her Twitter |
| Candidate #3 | Summary followers are real people who actively use the platform. This adds to the Clinton presidential campaign's embarrassment following news on Tuesday that a large number of her Facebook fans may represent 'likes' that were purchased rather than earned. Scores Mean ROUGE: 23.88 (rank 8) || SummScore rank: 5 According to two online measuring tools, only 44% of Hillary Clinton's 3.6 million Twitter followers are real people who actively use the platform, with at least 15% being |
| Candidate #4 | Summary fake. In comparison, only 4% of President Barack Obama's Twitter followers are considered fake. Clinton's Facebook page has also been called into question, with evidence of 'likes' being purchased rather than earned. Scores Mean ROUGE: 27.09 (rank 3) || SummScore rank: 10 According to online measuring tools, over half of Hillary Clinton's Twitter followers are either fake or inactive. One tool reported that only 44% of her followers are real |
| Candidate #5 | Summary people who are active on the platform. Additionally, a large number of Clinton's Facebook fans may have been purchased rather than earned, with over 46,000 of them listing Baghdad, Iraq as their hometown. Scores Mean ROUGE: 24.82 (rank 7) || SummScore rank: 7 |
| ... | According to online measuring tools, only 44% of Hillary Clinton's 3.6 million Twitter followers consist of real people who are active on the social media platform. At least |
| Candidate #17 Summary 15% of her followers, or more than 544,000, are completely fake. This comes after news that a large number of her Facebook fans may represent 'likes' that were purchased rather than earned. Scores Mean ROUGE: 31.34 (rank 1) || SummScore rank: 1 (SummScore output) ... Two different online audit tools say no more than 44 per cent of Hillary's 3.6 million Twitter fans are real people who participate in the platform . The newly minted presidential candidate is fending off accusations that her Facebook page is full of fake 'likes' Reference Her Facebook fan base includes more people from Baghdad, Iraq than any US city . When she was secretary of state, her agency paid $630,000 to bulk up its Facebook likes, but pledged to stop after she left . Source Table 22: SummScore re-ranking applied to ChatGPT with top-p sampling on CNN/DM. | |
| CNN/DM: re-ranking from the PEGASUS trained on WikiHow | |
|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| Assault: Dr Sahar Hussain attacked two Tube workers because she didn't want to miss the last train home . A GP attacked two Tube workers while screaming 'I'm a doctor' because she did not want to miss the last train home on a Friday night. Dr Sahar Hussain, 53, panicked when she was unable to get through the gates at Leicester Square station, and started ranting at staff. She denied assaulting the two workers, saying she was worried about being stranded on her own in central London because she is a Muslim woman. But Hussain has now been found guilty and ordered to pay a total of £2,250 in fines, compensation and court costs - and she could face disciplinary action from the General Medical Council. In video footage captured on her own mobile phone, Hussain could be heard to shout: 'I'm a doctor actually, I work for the NHS. I'm a doctor. Get me through the gate, I'm going to miss my train.' City of London Magistrates' Court heard Hussain arrived at the station around 11.30pm on June 20 last year, trying to get home to Woodford Green after socialising with friends in the West End. When she was refused entry by the automatic gates, she demanded that ticket seller Malcolm Shaw let her through before lashing out at his colleague Indira Ramsaroop, who was trying to help. Hussain, originally from Iraq, screamed and shouted at Mrs Ramsaroop as she thrust a camera phone into her face before grabbing her by the arm. The 24-year-old Transport for London worker was then chased by the doctor as she tried to flee to the control room, bumping her head on the way. In the video on Hussain's phone she was heard shouting: 'This woman is on something, she's not sober is she? You're in work and you're not sober. Get me through the gate.' During the scuffle Hussain, a mother of one who helps train GPs at two universities, also grabbed Mr Shaw by the arms, leaving him with scratches. Mrs Ramsaroop was close to tears in court as she told how she had to take almost two weeks off work following the incident, adding: 'I had a lot of sleepless nights. It had an impact on myself with customers when I came back to work. 'I have felt very let down to have been threatened and been running away in my place of work. It actually affected me for a very long time and I got quite ill just at the worrying and fear.' Row: The assault took place on a Friday night at Leicester Square station in central London . Hussain admitted losing her temper, telling the court: 'I'm very sorry about the way I expressed myself with my agitation and frustration.' District Judge Quentin Purdy found her guilty of two counts of assault by beating, saying: 'The evidence is overwhelming. You completely lost your self-control. 'Unusually for the sort of incident this court regularly deals with, there is no hint of intoxication in any way. 'But the explanation is there. You were anxious about missing the last train, concerned about being late for work or unable to work the next day, cultural embarrassment by being left in London as a married Muslim woman. 'It was completely out of character, but as we have all seen, you acted in an unpleasant and hysterical fashion towards numerous members of staff.' He said she had subjected Mrs Ramsaroop to 'completely unjustified violence', as well as 'humiliation' for falsely suggesting she was intoxicated at work. The judge added: 'You have lost a great deal by your lack of self-control. Your good name is now lost with these two convictions for assault.' Hussain was fined £500 for each assault and ordered her to pay £500 compensation to Mrs Ramsaroop, £200 to Mr Shaw, £500 prosecution costs and a £50 victim surcharge. | |
| Beam #1 | Summary Try to get through the gate because you didn't want to miss the last train home. Attack the workers because you didn't want to miss the last train home. Scores Mean ROUGE: 13.80 (rank 6) || SummScore rank: 17 |
| Beam #2 | Summary Tell the court she was worried about being stranded on her own in central London because she is a Muslim woman. Describe to the court how she screamed: 'I'm a doctor, get me through the gate.' Scores Mean ROUGE: 40.97 (rank 2) || SummScore rank: 2 |
| Beam #3 | Summary Tell the court she was worried about being stranded on her own in central London because she is a Muslim woman. Describe to the court how she screamed: 'Get me through the gate, get me through the gate.' Scores Mean ROUGE: 39.29 (rank 4) || SummScore rank: 10 |
| Beam #4 | Summary Try to get through the gate because you don't want to miss the last train home. Attack the workers because you didn't want to miss the last train home. Scores Mean ROUGE: 13.80 (rank 6) || SummScore rank: 18 |
| Beam #5 | Summary Try to get through a gate because you don't want to miss the last train home. Attack the workers because you don't want to miss the last train home. Scores Mean ROUGE: 13.09 (rank 7) || SummScore rank: 20 |
| Beam #6 | Summary Try to get through the gate because you don't want to miss the last train home. Attack the workers because you don't want to miss the last train home. Scores Mean ROUGE: 13.80 (rank 6) || SummScore rank: 19 |
| Beam #7 | Summary Tell the court she was worried about being stranded on her own in central London because she is a Muslim woman. Describe to the court how she yelled: 'Get me through the gate, get me through the gate.' Scores Mean ROUGE: 39.29 (rank 3) || SummScore rank: 8 |
| ... Beam #10 Summary Tell the court she was worried about being stranded on her own in central London because she is a Muslim woman. Describe to the court how she screamed: 'I'm a doctor, get me through the gate!' Scores Mean ROUGE: 40.97 (rank 2) || SummScore rank: 1 (SummScore output) ... Beam #14 Summary Tell the court she was worried about being stranded on her own in central London because she is a Muslim woman. Describe to the court how she screamed: 'I'm a doctor, get through the gate.' Scores Mean ROUGE: 42.04 (rank 1) || SummScore rank: 9 ... Source Shell has filed a complaint in federal court in Alaska seeking an order to remove Greenpeace activists who climbed aboard an oil rig in the Pacific . The environmental group said in a statement its team would occupy the underside of the main deck of the Polar Pioneer . Reference The six activists are camping on the 38,000-tonne Polar Pioneer platform, which they boarded using inflatable boats from the Greenpeace vessel 'Esperanza' 'We made it! We're on Shell's platform. And we're not alone. Everyone can help turn this into a platform for people power!' tweeted Aliyah Field . Table 23: SummScore re-ranking applied to the PEGASUS fine-tuned on WikiHow with beam search on CNN/DM. | |
| CNN/DM: re-ranking from the self-trained PEGASUS | |
|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| Grandparents have pleaded for the safe return to Australia of two young children whose mother took them from Melbourne to the Islamic State capital in Syria. Former Melbourne woman Dullel Kassab fled to Raqqa in Syria with her children last year, and she regularly boasts on Twitter that her four-year-old daughter and two-year-old son sleep with toy guns next to their beds and her daughter likes watching IS videos of 'Muslims killing bad ppl.' The children's paternal grandparents say they are worried Kassab, 28, is 'brainwashing' the children, after their father was killed near the Syria-Turkey border last year, The Herald Sun reported. Former Melbourne woman Dullel Kassab fled to Raqqa in Syria from Melbourne with her children last year . Kassab posts pictures to Twitter of airstrikes hitting blocks away from their Raqqa apartment . 'We miss the children a lot. Their safety and religion has been compromised and we are deeply worried but unable to do anything about it,' a family spokesman told the Herald Sun. 'We pray they come back but it does not look good.' Kassab's Twitter paints a picture of their life in the city the terrorist group IS have made their headquarters, where the children cannot go to school and airstrikes hit blocks away from their apartment. The 28-year-old has a new husband, as the Islamic State does not permit unmarried foreign women to stay in Raqqa. In social media posts she boasts about her children's distaste for Kuffar (non-believers). A photo of another airstrike a day later. The children's paternal grandparents say they are worried Kassab, 28, is 'brainwashing' the children, after their father was killed near the Syria-Turkey border last year . On her Twitter account she boasts about her children's distaste for Kuffar (non-believers) 'My 4y/o encouraging her little bro to eat his eggs - "C'mon eat ur eggs so u can be big & strong & fight the Kuffar!" Allah yehmikum! [sic]' she wrote in December. '#Awkward Just asked my 4yo wat she wants 2 watch.. "Muslims killing bad ppl" (i.e. #IS vids obv not beheading ones) LOL [sic],' she wrote in October. Kassab has also complained the 12 to 17-year-olds are now regarded as children when 'in the past they were warriors'. And during the Sydney Lindt café siege in December last year she sent a series of tweets joking that it was exciting. 'This is the most excitement Sydney has seen since the 2000 Olympics!' she posted. Kassab also posts pictures of the Islamic State capital - including this of a 'double rainbow' And during the Sydney Lindt café siege last year Kassab sent a series of tweets joking that it was 'exciting' 'I guess attack the coffee shop wasn't a bad idea, It's a long night. . . One needs caffeine and chocolate!! [sic]' Kassab also posts pictures of the Islamic State capital, and of Nutella and Twix and Snickers chocolate bars with the caption: 'Im really appreciating #globalization right about now! #SimplePleasures Another reason to love #IS [sic].' The 28-year-old's father Jalal Kassab said he was worried about his grandchildren living in a war zone, but said the threat of imprisonment made it difficult for his daughter to return to Australia. 'I know she wants to come back and we are trying everything we can to bring her back,' Mr Kassab told the Herald Sun. Another former Melbourne woman Zehra Duman last month shared a series of propaganda pictures she says shows her 'five star jihad' lifestyle . In photographs posted to a Twitter several women are pictured standing under an Islamic State flag, reclining against a clean white BMW M5 and wielding machine guns . In one tweet, Duman said: 'US + Australia, how does it feel that all 5 of us were born n raised in your lands, & now here thirsty for ur blood?' The children's paternal grandparents say they are worried Kassab, 28, is 'brainwashing' the children, after their father was killed near the Syria-Turkey border | |
| Beam #1 | Summary last year. 'We pray they come back but it does not look good.' Kassab's Twitter paints a picture of their life in the city the terrorist group IS have made their headquarters, where the children cannot go to school and airstrikes hit blocks away from their apartment. Scores Mean ROUGE: 14.89 (rank 4) || SummScore rank: 6 'We pray they come back but it does not look good.' Kassab's Twitter paints a picture of their life in the city the terrorist group IS have made their headquarters, |
| Beam #2 | Summary where the children cannot go to school and airstrikes hit blocks away from their apartment. The children's paternal grandparents say they are worried Kassab, 28, is 'brainwashing' the children, after their father was killed near the Syria-Turkey border last year. Scores Mean ROUGE: 14.89 (rank 4) || SummScore rank: 11 The children's paternal grandparents say they are worried Kassab, 28, is 'brainwashing' the children, after their father was killed near the Syria-Turkey border last |
| Beam #3 | Summary year, The Herald Sun reported. 'We pray they come back but it does not look good.' Kassab's Twitter paints a picture of their life in the city the terrorist group IS have made their headquarters, where the children cannot go to school and airstrikes hit blocks away from their apartment. Scores Mean ROUGE: 14.41 (rank 6) || SummScore rank: 5 'We pray they come back but it does not look good.' Kassab's Twitter paints a picture of their life in the city the terrorist group IS have made their headquarters, |
| Beam #4 | Summary where the children cannot go to school and airstrikes hit blocks away from their apartment. 'My 4y/o encouraging her little bro to eat his eggs ˘ 'C'mon eat ur eggs so u can be big & strong & fight the Kuffar!' Allah yehmikum! Scores Mean ROUGE: 9.92 (rank 10) || SummScore rank: 13 |
| ... | Former Melbourne woman Dullel Kassab fled to Raqqa in Syria with her children last year, and she regularly boasts on Twitter that her four-year-old daughter and |
| Beam #9 | Summary two-year-old son sleep with toy guns next to their beds and her daughter likes watching IS videos of 'Muslims killing bad ppl.' The children's paternal grandparents say they are worried Kassab, 28, is 'brainwashing' the children, after their father was killed near the Syria-Turkey border last year. Scores Mean ROUGE: 57.48 (rank 1) || SummScore rank: 1 (SummScore output) |
| ... Reference | Grandparents have pleaded for the safe return of two children in Syria . Former Melbourne woman Dullel Kassab fled to Raqqa in Syria with her four-year-old daughter and two-year-old son last year . She said her daughter likes watching IS videos of 'Muslims killing bad ppl' |
| Source | Table 24: Self-trained PEGASUS with beam search on CNN/DM. |
| XSum: re-ranking from the unsupervised PEGASUS | |
|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| Acting Taoiseach Enda Kenny of Fine Gael and Micheál Martin of Fianna Fáil hope to avoid a second election. Mr Martin has said his party will facilitate a minority government, but will not support a programme for government. Water charges are one of the main sticking points to reaching agreement. A commission to consider the future of national water utility Irish Water is one of the proposals being considered. Fianna Fáil want to see the immediate removal of water charges, but Fine Gael | |
| Source | see a role for them. Following the election, almost two months ago, Fine Gael had 50 seats, Fianna Fáil 44, Sinn Féin 23 and the Labour Party got seven. But no party was able to form a majority government and TDs have so far failed to elect a taoiseach. |
| Beam #1 | Summary Fianna Fil want to see the immediate removal of water charges, but Fine Gael see a role for them. Scores Mean ROUGE: 8.77 (rank 5) || SummScore rank: 14 |
| Beam #2 | Summary Following the election, almost two months ago, Fine Gael had 50 seats, Fianna Fil 44, Sinn Féin 23 and the Labour Party got seven. Scores Mean ROUGE: 6.06 (rank 9) || SummScore rank: 6 |
| Beam #3 | Summary Acting Taoiseach Enda Kenny of Fine Gael and Michel Martin of Fianna Fil hope to avoid a second election. Scores Mean ROUGE: 7.02 (rank 7) || SummScore rank: 15 |
| Beam #4 | Summary After the election, almost two months ago, Fine Gael had 50 seats, Fianna Fil 44, Sinn Féin 23 and the Labour Party got seven. Scores Mean ROUGE: 6.06 (rank 9) || SummScore rank: 7 |
| Beam #5 | Summary The election, almost two months ago, Fine Gael had 50 seats, Fianna Fil 44, Sinn Féin 23 and the Labour Party got seven. Scores Mean ROUGE: 6.20 (rank 8) || SummScore rank: 12 |
| Beam #6 | Summary A commission to consider the future of national water utility Irish Water is one of the proposals being considered. Fianna Fil want to see the immediate removal of water charges, but Fine Gael see a role for them. Scores Mean ROUGE: 10.53 (rank 4) || SummScore rank: 4 |
| Beam #7 | Summary Mr Martin has said his party will facilitate a minority government, but will not support a programme for government. Fianna Fil want to see the immediate removal of water charges, but Fine Gael see a role for them. Scores Mean ROUGE: 17.63 (rank 3) || SummScore rank: 2 |
| Beam #8 | Summary following the election, almost two months ago, Fine Gael had 50 seats, Fianna Fil 44, Sinn Féin 23 and the Labour Party got seven. Scores Mean ROUGE: 6.06 (rank 9) || SummScore rank: 8 |
| Beam #9 | Summary Follow the election, almost two months ago, Fine Gael had 50 seats, Fianna Fil 44, Sinn Féin 23 and the Labour Party got seven. Scores Mean ROUGE: 6.06 (rank 9) || SummScore rank: 13 |
| Beam #10 | Summary During the election, almost two months ago, Fine Gael had 50 seats, Fianna Fil 44, Sinn Féin 23 and the Labour Party got seven. Scores Mean ROUGE: 6.06 (rank 9) || SummScore rank: 9 |
| Beam #11 | Summary acting Taoiseach Enda Kenny of Fine Gael and Michel Martin of Fianna Fil hope to avoid a second election. Scores Mean ROUGE: 7.02 (rank 7) || SummScore rank: 20 |
| Beam #12 | Summary Fianna Fil wants to see the immediate removal of water charges, but Fine Gael see a role for them. Scores Mean ROUGE: 8.77 (rank 5) || SummScore rank: 16 |
| Beam #13 | Summary Mr Martin has said his party will facilitate a minority government, but will not support a programme for government. Fianna Fil want to see the immediate removal of water charges, but Fine Gael see a role for them. However, no party was able to form a majority government and TDs have so far failed Scores Mean ROUGE: 19.28 (rank 2) || SummScore rank: 1 (SummScore output) |
| Beam #14 | Summary While Fianna Fil want to see the immediate removal of water charges, but Fine Gael see a role for them. Scores Mean ROUGE: 8.55 (rank 6) || SummScore rank: 19 |
| Beam #15 | Summary Fianna Fil wanted to see the immediate removal of water charges, but Fine Gael see a role for them. Scores Mean ROUGE: 8.77 (rank 5) || SummScore rank: 17 |
| Beam #16 | Summary Mr Martin has said his party will facilitate a minority government, but will not support a programme for government. Scores Mean ROUGE: 21.25 (rank 1) || SummScore rank: 10 |
| Beam #17 | Summary Fianna Fil hope to see the immediate removal of water charges, but Fine Gael see a role for them. Scores Mean ROUGE: 8.77 (rank 5) || SummScore rank: 18 |
| Beam #18 | Summary Mr Martin has said his party will facilitate a minority government, but will not support a programme for government. Fianna Fil want to see the immediate removal of water charges, but Fine Gael see a role for them. However, no party was able to form a majority government and TDs so far failed to Scores Mean ROUGE: 19.28 (rank 2) || SummScore rank: 3 |
| Beam #19 | Summary Following the election, almost two months ago, Fine Gael had 50 seats, Fianna Fil 44, Sinn Féin 23 and the Labour Party got seven.. Scores Mean ROUGE: 6.06 (rank xx) || SummScore rank: 11 |
| Beam #20 | Summary Mr Martin has said his party will facilitate a minority government, but will not support a programme for government. Fianna Fil want to see the immediate removal of water charges, but Fine Gael see a role for them. However, no party was able to form a majority government and TDs will so far failed Scores Mean ROUGE: 19.28 (rank 2) || SummScore rank: 5 |
| Reference | Negotiators from Ireland's two biggest parties are to meet for a final effort to form a minority government. |
| Table 25: SummScore re-ranking applied to the unsupervised PEGASUS with beam search on XSum. | |
| XSum: re-ranking from the BART trained on WikiHow | |
|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| Cardiff narrowly missed out on the Championship play-offs last term, finishing eighth in the table. But under new head coach Paul Trollope, Pilkington thinks the Welsh side are capable of better. "We fell just short [last season] and we were all really disappointed with that. We felt we had the squad to get in the play-offs definitely," he said. "It's going to be a tough season again, just like it was last year, but hopefully we can be right up there come the end of the season." Pilkington, 28, scored Cardiff's first goal in their 4-0 friendly win at Shrewsbury on Saturday. The Republic of Ireland international played in attack, having moved from the wing to a central forward role last season. Pilkington made the transition under former Cardiff boss Russell Slade, who was replaced by Trollope in May. Trollope had been working under Slade before then, and Pilkington believes that smooth transition has helped the Bluebirds in their preparations for the new campaign. "The gaffer was working with us last season so we know how he works," he added. "We know what he's about and we know we have to be on our toes all the time. "He's come in and changed a few things. It's been good up to now so hopefully we can carry on working really well." | |
| Beam #1 | Summary Aim to improve on last season's finish., Expect a tough season. Scores Mean ROUGE: 5.13 (rank 11) || SummScore rank: 12 |
| Beam #2 | Summary Aim to improve on last season's finish.,Pilkington believes the Bluebirds are capable of reaching the play-offs. Scores Mean ROUGE: 24.63 (rank 2) || SummScore rank: 5 |
| Beam #3 | Summary Aim to improve on last season's finish., Believe the Bluebirds have the squad to get back up the table. Scores Mean ROUGE: 17.89 (rank 7) || SummScore rank: 6 |
| Beam #4 | Summary Reach the play-offs again. Scores Mean ROUGE: 7.02 (rank 10) || SummScore rank: 18 |
| Beam #5 | Summary Aim to improve on last season's finish.,Pilkington believes the Bluebirds have the squad to get back up the table. Scores Mean ROUGE: 23.20 (rank 4) || SummScore rank: 1 (SummScore output) |
| Beam #6 | Summary Aim to improve on last season's finish.,Pilkington believes the Bluebirds have the squad to reach the play-offs. Scores Mean ROUGE: 23.90 (rank 3) || SummScore rank: 2 |
| Beam #7 | Summary Expect to improve on last season's finish.,Pilkington believes the Bluebirds have the squad to get back up the table. Scores Mean ROUGE: 23.20 (rank 4) || SummScore rank: 3 |
| Beam #8 | Summary Aim to improve on last season's finish.,Pilkington believes the Bluebirds have the squad to challenge for promotion. Scores Mean ROUGE: 41.06 (rank 1) || SummScore rank: 7 |
| Beam #9 | Summary Aim to improve on last season's finish.,Pilkington believes the Bluebirds are capable of reaching the play-offs again. Scores Mean ROUGE: 23.90 (rank 3) || SummScore rank: 4 |
| Beam #10 | Summary Aim to improve on last season's finish., Believe in the squad. Scores Mean ROUGE: 12.82 (rank 9) || SummScore rank: 9 |
| Beam #11 | Summary Aim to improve on last season's finish., Expect a tough season again. Scores Mean ROUGE: 4.94 (rank 12) || SummScore rank: 8 |
| Beam #12 | Summary Aim to improve on last season's finish., Believe in the squad. Scores Mean ROUGE: 12.82 (rank 9) || SummScore rank: 11 |
| Beam #13 | Summary Aim to improve on last season's finish., Expect to challenge for promotion again. Scores Mean ROUGE: 21.79 (rank 6) || SummScore rank: 16 |
| Beam #14 | Summary Prepare for a tough season. Scores Mean ROUGE: 14.04 (rank 8) || SummScore rank: 13 |
| Beam #15 | Summary Aim to improve on last season's finish., Believe in the squad. Scores Mean ROUGE: 12.82 (rank 9) || SummScore rank: 10 |
| Beam #16 | Summary Prepare for a tough season. Scores Mean ROUGE: 14.04 (rank 8) || SummScore rank: 14 |
| Beam #17 | Summary Aim to improve on last season's finish., Expect to challenge for promotion. Scores Mean ROUGE: 22.62 (rank 5) || SummScore rank: 17 |
| Beam #18 | Summary Prepare for a tough season. Scores Mean ROUGE: 14.04 (rank 8) || SummScore rank: 15 |
| Beam #19 | Summary Prepare for another tough season. Scores Mean ROUGE: 14.04 (rank 8) || SummScore rank: 20 |
| Beam #20 | Summary Prepare for another tough season. Scores Mean ROUGE: 14.04 (rank 8) || SummScore rank: 19 |
| Reference | Cardiff City forward Anthony Pilkington believes the Bluebirds can challenge for promotion this season. |
| Source | Table 26: SummScore re-ranking applied to the BART fine-tuned on WikiHow with beam search on XSum. |
| XSum: re-ranking from the self-trained PEGASUS | |
|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| In a statement seen by the BBC, his wife said he had been out drinking with another man in the city of Yangon on Friday night before his death. The Foreign Office said it was investigating the death of a British national and was providing support to the victim's family. News agency AFP quoted police as saying the dead man was a 47-year-old who had suffered wounds to his head and chest. According to reports, the man - who had been working at an international school - | |
| Source | was found dead at an apartment in Yangon, previously known as Rangoon. The Foreign Office says it is working with local authorities in Myanmar, formerly Burma, to determine what happened. |
| Beam #1 | Summary The Foreign Office said it was investigating the death of a British national and was providing support to the victim's family. Scores Mean ROUGE: 13.07 (rank 10) || SummScore rank: 3 |
| Beam #2 | Summary In a statement seen by the BBC, his wife said he had been out drinking with another man in the city of Yangon on Friday night before his death. Scores Mean ROUGE: 10.85 (rank 12) || SummScore rank: 4 |
| Beam #3 | Summary According to reports, the man - who had been working at an international school - was found dead at an apartment in Yangon, previously known as Rangoon. Scores Mean ROUGE: 20.61 (rank 8) || SummScore rank: 7 |
| Beam #4 | Summary The man, who has not been named, was found dead at an apartment in Yangon, Myanmar, on Saturday. Scores Mean ROUGE: 31.39 (rank 1) || SummScore rank: 14 |
| Beam #5 | Summary The man, who has not been named, was found dead at an apartment in Yangon, formerly known as Rangoon, on Saturday. Scores Mean ROUGE: 24.88 (rank 6) || SummScore rank: 12 |
| Beam #6 | Summary According to reports, the man - who had been working at an international school - was found dead at an apartment in Yangon, formerly known as Rangoon. Scores Mean ROUGE: 20.61 (rank 8) || SummScore rank: 5 |
| Beam #7 | Summary The man, who has not been named, was found dead at an apartment in Yangon, previously known as Rangoon. Scores Mean ROUGE: 26.39 (rank 4) || SummScore rank: 1 (SummScore output) |
| Beam #8 | Summary The man, who has not been named, was found dead at an apartment in Yangon, formerly known as Rangoon. Scores Mean ROUGE: 26.39 (rank 4) || SummScore rank: 2 |
| Beam #9 | Summary The man, who has not been named, was found dead at an apartment in Yangon, previously known as Rangoon, on Saturday. Scores Mean ROUGE: 24.88 (rank 6) || SummScore rank: 11 |
| Beam #10 Summary The Foreign Office said it was working with local authorities in Myanmar, formerly Burma, to determine what happened. Scores Mean ROUGE: 12.64 (rank 11) || SummScore rank: 10 Beam #11 Summary The Foreign Office says it is working with local authorities in Myanmar, formerly Burma, to determine what happened. Scores Mean ROUGE: 12.64 (rank 11) || SummScore rank: 11 Beam #12 Summary The man, who has not been named, was found dead at an apartment in Yangon, formerly Burma, on Saturday. Scores Mean ROUGE: 26.39 (rank 4) || SummScore rank: 18 Beam #13 Summary Media playback is unsupported on your device 11 August 2015 Last updated at 08:00 BST The Foreign Office said it was investigating the death of a British national in the city of Yangon. Scores Mean ROUGE: 9.78 (rank 13) || SummScore rank: 19 Beam #14 Summary Media playback is unsupported on your device 11 August 2015 Last updated at 08:00 BST The man, who has not been named, was found dead at an apartment in Yangon. Scores Mean ROUGE: 19.33 (rank 9) || SummScore rank: 20 Beam #15 Summary The man, who has not been named, was found dead at an apartment in Yangon, the capital of Myanmar, on Saturday. Scores Mean ROUGE: 28.69 (rank 2) || SummScore rank: 16 Beam #16 Summary According to reports, the man - who had been working at an international school - was found dead at an apartment in Yangon, previously known as Burma. Scores Mean ROUGE: 20.61 (rank 8) || SummScore rank: 15 Beam #17 Summary The man, who has not been named, was found dead at an apartment in the city of Yangon on Saturday. Scores Mean ROUGE: 25.61 (rank 5) || SummScore rank: 13 Beam #18 Summary The Foreign Office said the man, who has not been named, was found dead at an apartment in Yangon, previously known as Rangoon. Scores Mean ROUGE: 23.53 (rank 7) || SummScore rank: 9 Beam #19 Summary The Foreign Office said the man, who has not been named, was found dead at an apartment in Yangon, formerly known as Rangoon. Scores Mean ROUGE: 23.53 (rank 7) || SummScore rank: 8 Beam #20 Summary The man, who has not been named, was found dead at an apartment in Yangon on Saturday. Scores Mean ROUGE: 28.11 (rank 3) || SummScore rank: 17 Reference A British man believed to be a teacher has been found dead in Myanmar. | |
WikiHow: re-ranking from the unsupervised PEGASUS
Source
On your scanner tool, set the PPI (Pixel per inch) to 350 so that it will create a high-quality image.; , Click on the program's shortcut icon on your desktop or programs list to launch it. Wait until the program starts. Any version will do, but the latest one is better because they have additional useful functions. , On the Menu bar located at the top of the screen, go to File and click "Open." Locate the image, select it, then click "Open." , To change it, go to "Image" on the Menu bar then click "Mode" and choose "RGB." ![27_image_0.png](27_image_0.png)
Beam #2 Summary Scores Mean ROUGE: **23.18** (rank 1) || SummScore rank: 2 Beam #3 Summary
, On your scanner tool, set the PPI (Pixel per inch) to 350 so that it will create a high-quality image.;, Click on the program's shortcut icon on your desktop or programs list to launch it., To change it, go to "Image" on the Menu bar then click "Mode" and choose "RGB." To turn your scanned image fully grayscale, just in case you didn't change it on your scanner setting and the image has traces of colors, change the Adjustment, under "Image"—on the Palette dock located at the bottom left of the screen, into "
Scores Mean ROUGE: **23.18** (rank 1) || SummScore rank: 5 Beam #4 Summary
, On your scanner tool, set the PPI (Pixel per inch) to 350 so that it will create a high-quality image.;, Click on the program's shortcut icon on your desktop or programs list to launch it., To change it, go to "Image" on the Menu bar then click "Mode" and choose "RGB." To turn your scanned image fully grayscale, just in case you didn't change it on your scanner setting and the image has traces of colors, change the Adjustment, under "Image"—on the menu bar, into "Black and White.",On the Scores Mean ROUGE: 21.05 (rank 5) || SummScore rank: 11 Beam #5 Summary
, On your scanner tool, set the PPI (Pixel per inch) to 350 so that it will create a high-quality image.;, Click on the program's shortcut icon on your desktop or programs list to launch it., To change it, go to "Image" on the Menu bar then click "Mode" and choose "RGB." To turn your scanned image fully grayscale, just in case you didn't change it on your scanner setting and the image has traces of colors, change the Adjustment, under "Image"— on the menu bar, into "Black and White.",On the Scores Mean ROUGE: 21.05 (rank 5) || SummScore rank: 13 Beam #6 Summary
, On your scanner tool, set the PPI (Pixel per inch) to 350 so that it will create a high-quality image.;, Click on the program's shortcut icon on your desktop or programs list to launch it., To change it, go to "Image" on the Menu bar then click "Mode" and choose "RGB." To turn your scanned image fully grayscale, just in case you didn't change it on your scanner setting and the image has traces of colors, change the Adjustment, under "Image"—on the Palette dock located at the bottom left of the screen—to "
Scores Mean ROUGE: 22.83 (rank 2) || SummScore rank: 4 Beam #7 Summary
, On your scanner tool, set the PPI (Pixel per inch) to 350 so that it will create a high-quality image.;, Click on the program's shortcut icon on your desktop or programs list to launch it., To change it, go to "Image" on the Menu bar then click "Mode" and choose "RGB." To turn your scanned image fully grayscale, just in case you didn't change it on your scanner setting and the image has traces of colors, change the Adjustment, under "Image"—on the menu bar—into "Black and White.",On the Scores Mean ROUGE: 21.05 (rank 5) || SummScore rank: 7 Beam #8 Summary On your scanner tool, set the PPI (Pixel per inch) to 350 so that it will create a high-quality image.;, Click on the program's shortcut icon on your desktop or programs list to launch it., To change it, go to "Image" on the Menu bar then click "Mode" and choose "RGB." To turn your scanned image fully grayscale, just in case you didn't change it on your scanner setting and the image has traces of colors, change the Adjustment, under "Image"—on the Palette dock located at the bottom left of the screen— into "
Scores Mean ROUGE: **23.18** (rank 1) || SummScore rank: 3
...
Beam #11 Summary
, On your scanner tool, set the PPI (Pixel per inch) to 350 so that it will create a high-quality image.;, Click on the program's shortcut icon on your desktop or programs list to launch it., To change it, go to "Image" on the Menu bar then click "Mode" and choose "RGB." To turn your scanned image fully grayscale, just in case you didn't change it on your scanner setting and the image has traces of colors, change the Adjustment, under "Image"—on the Palette dock located at the bottom left of the screen—and choose Scores Mean ROUGE: **22.71** (rank 3) || SummScore rank: 1 (**SummScore output**)
...
Reference Negotiators from Ireland's two biggest parties are to meet for a final effort to form a minority government.
Table 28: SummScore re-ranking applied to the unsupervised PEGASUS with beam search on WikiHow.
| WikiHow: re-ranking from the PEGASUS trained on CNN/DM | |
|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| Gently stabilize it by holding it steady with one or both hands. Pet your cat and talk to it in a soothing voice to calm and reassure it.If your cat resists you or is prone to scratching, then wrap your cat in the towel so that just its head is sticking out. , Once your cat is calm, place your non-dominant hand on top of your cat's head in front of its ears. Wrap your fingers around the bottom of its cheekbones for leverage., This should cause your cat's mouth to open involuntarily. Pick up the pill with your other hand. Hold the pill between your index finger and thumb. Then place your middle or ring finger on the lower molars to keep its jaw open. Do not place your finger on top of the canine tooth, i.e., the sharp fang, to keep its jaw open.If your cat will not open its mouth, then you will need to pry it open. Do this using the middle finger of the same hand holding the pill. Place your middle finger over the small incisor teeth in the front of your cat's mouth at the bottom. Then apply gentle pressure to push your cat's jaw open. , Drop the pill as far back as you can on your cat's tongue, i.e., the base of the tongue. Or, use your index finger and thumb to push the pill over the back of your cat's tongue. If you do it this way, you must do it quickly to prevent your fingers from getting bitten.Alternatively, you can use a pill syringe to place the pill at the base of your cat's tongue. This way you can avoid having to put your fingers in your cat's mouth. Coat the pill with butter to make swallowing it easier. , Once you have placed the pill at the base of its tongue, use your hands to keep its mouth closed by applying gentle pressure. Also re-position your cat's head so that it is level instead of tilted back; this will make it easier for your cat to swallow the pill. Gently rub your cat's throat or blow in its nose to encourage it to swallow the pill.Try not to let your cat go until the pil lis swallowed. Place a drop of water on its nose. If your cat licks the water off, then this means that it has swallowed the pill. , Tricyclic antidepressants should be given with food to decrease side effects such as vomiting. Therefore, feed your cat directly after giving it its medication. If your cat will not eat, then entice it to eat by feeding it its favorite food like cheese, or by giving it a treat.Make sure to provide your cat with water as well. You can also give your cat flavored water like juice from a tuna can to entice it to drink. | |
| Beam #1 | Summary Pet your cat and talk to it in a soothing voice to calm and reassure it. If your cat resists you or is prone to scratching, then wrap your cat in a towel so that just its head is sticking out. Scores Mean ROUGE: 17.17 (rank 8) || SummScore rank: 12 |
| Beam #2 | Summary Pet your cat and talk to it in a soothing voice to calm and reassure it. If your cat resists you or is prone to scratching, then wrap your cat in the towel so that just its head is sticking out. Scores Mean ROUGE: 18.78 (rank 6) || SummScore rank: 8 |
| Beam #3 | Summary Pet your cat and talk to it in a soothing voice to calm and reassure it. If your cat resists you or is prone to scratching, then wrap your cat in a towel so that just its head is sticking out. Scores Mean ROUGE: 17.17 (rank 8) || SummScore rank: 11 |
| Beam #4 | Summary Pet your cat and talk to it in a soothing voice to calm and reassure it. gently stabilize it by holding it steady with one or both hands. Rub your cat's throat or blow in its nose to encourage it to swallow the pill. Scores Mean ROUGE: 18.68 (rank 7) || SummScore rank: 7 |
| Beam #5 | Summary Pet your cat and talk to it in a soothing voice to calm and reassure it. Wrap your fingers around the bottom of its cheekbones for leverage. Alternatively, you can use a pill syringe to place the pill at the base of your cat's tongue. Scores Mean ROUGE: 26.93 (rank 3) || SummScore rank: 1 (SummScore output) |
| Beam #6 | Summary Tricyclic antidepressants should be given with food to decrease side effects such as vomiting. If your cat resists you or is prone to scratching, then wrap your cat in a towel so that just its head is sticking out. Scores Mean ROUGE: 15.09 (rank 10) || SummScore rank: 17 |
| Beam #7 | Summary Pet your cat and talk to it in a soothing voice to calm and reassure it. gently stabilize it by holding it steady with one or both hands. Scores Mean ROUGE: 8.72 (rank 12) || SummScore rank: 19 |
| Beam #8 | Summary Gently stabilize it by holding it steady with one or both hands. If your cat resists you or is prone to scratching, then wrap your cat in a towel so that just its head is sticking out. Scores Mean ROUGE: 17.18 (rank 7) || SummScore rank: 14 |
| ... Beam #14 | Summary If your cat resists you or is prone to scratching, then wrap your cat in a towel so that just its head is sticking out. Alternatively, you can use a pill syringe to place the pill at the base of your cat's tongue. Scores Mean ROUGE: 30.74 (rank 1) || SummScore rank: 2 |
| ... Reference | Negotiators from Ireland's two biggest parties are to meet for a final effort to form a minority government. |
| Source | |
Table 29: SummScore re-ranking applied to the PEGASUS fine-tuned on CNN/DM with beam search on WikiHow.
| WikiHow: re-ranking from the self-trained PEGASUS | |
|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| Talking in private will help you feel less nervous and more comfortable about asking for help.It will also give your teacher more time to help you because they aren't trying to teach at the same time. For example, you could say, "When you have time later today, could we talk about a problem I'm having?" If you're afraid to approach your teacher, put a note on their chair. The note could say, "Can we talk later about something personal? Thanks, Mark." You could also send your teacher an email or message letting them know you would like to ask their help with a personal issue. , Sometimes teachers will try to offer advice on what you should do, when all you wanted was for them to listen. If you let your teacher know how they can help, it will make it easier for them to give you the help you want and need.Think about what kind of help you want. Ask yourself, "Do I want her to listen, to give me advice, or to do something about the problem?" Tell your teacher how they can help. For example, "Can you help me come up with ways to make more friends?" If you don't know how you want your teacher to help, you it's okay for you to say that, too. Try saying, "I need your help with a problem, but I don't know what kind of help I need." , Also, being honest will help your teacher figure out the right way to help you. The more truthful information your teacher has about what is going on, the more they will be able to help. If you are afraid you will get in trouble, then say that. You could say, "I need your help with something but I'm scared I'll get in trouble." If you are asking for help, but also trying not to get someone else in trouble, you could leave out names, but still be honest about what is going on. For example, "My friend is thinking about cheating on a test and I need advice on how to stop them from doing something so stupid." You could say, "I need your help with something but I'm scared I'll get in trouble." If you are asking for help, but also trying not to get someone else in trouble, you could | |
| Beam #1 | Summary leave out names, but still be honest about what is going on. For example, "My friend is thinking about cheating on a test and I need advice on how to stop them from doing something so stupid." Scores Mean ROUGE: 22.54 (rank 14) || SummScore rank: 14 |
| Beam #2 | Summary Think about what kind of help you want. You could say, "I need your help with something but I'm scared I'll get in trouble." If you are asking for help, but also trying not to get someone else in trouble, you could leave out names, but still be honest about what is going on.If you're afraid to approach your teacher, put a note on their chair. Scores Mean ROUGE: 30.92 (rank 7) || SummScore rank: 18 |
| Beam #3 | Summary For example, you could say, "When you have time later today, could we talk about a problem I'm having?" If you're afraid to approach your teacher, put a note on their chair. Try saying, "I need your help with a problem, but I don't know what kind of help I need.", Also, being honest will help your teacher figure out the right way to help you. Scores Mean ROUGE: 29.41 (rank 9) || SummScore rank: 15 |
| Beam #4 | Summary Think about what kind of help you want. You could say, "I need your help with something but I'm scared I'll get in trouble." If you are asking for help, but also trying not to get someone else in trouble, you could leave out names, but still be honest about what is going on. Scores Mean ROUGE: 28.40 (rank 10) || SummScore rank: 20 Think about what kind of help you want. You could say, "I need your help with something but I'm scared I'll get in trouble." If you are asking for help, but also trying not to |
| Beam #5 | Summary get someone else in trouble, you could leave out names, but still be honest about what is going on. For example, "My friend is thinking about cheating on a test and I need advice on how to stop them from doing something so stupid." Scores Mean ROUGE: 27.36 (rank 12) || SummScore rank: 11 Think about what kind of help you want. You could say, "I need your help with something but I'm scared I'll get in trouble." If you are asking for help, but also trying not to |
| Beam #6 | Summary get someone else in trouble, you could leave out names, but still be honest about what is going on.You could also send your teacher an email or message letting them know you would like to ask their help with a personal issue. Scores Mean ROUGE: 31.47 (rank 6) || SummScore rank: 9 |
| Beam #7 | Summary Think about what kind of help you want. You could say, "I need your help with something but I'm scared I'll get in trouble." If you are asking for help, but also trying not to get someone else in trouble, you could leave out names, but still be honest about what is going on.If you are afraid to approach your teacher, put a note on their chair. Scores Mean ROUGE: 30.92 (rank 7) || SummScore rank: 17 You could say, "I need your help with something but I'm scared I'll get in trouble." If you are asking for help, but also trying not to get someone else in trouble, you could |
| Beam #8 | Summary leave out names, but still be honest about what is going on. You could also send your teacher an email or message letting them know you would like to ask their help with a personal issue. Scores Mean ROUGE: 28.06 (rank 11) || SummScore rank: 13 |
| ... | For example, you could say, "When you have time later today, could we talk about a problem I'm having?" If you're afraid to approach your teacher, put a note on their chair. |
| Beam #19 | Summary Try saying, "I need your help with a problem, but I don't know what kind of help I need.", Also, being honest will help your teacher figure out the right way to help you. The more truthful information your teacher has about what is going on, the more they will be able to help Scores Mean ROUGE: 35.08 (rank 1) || SummScore rank: 2 Think about what kind of help you want. You could say, "I need your help with something but I'm scared I'll get in trouble." If you are asking for help, but also trying not |
| Beam #20 | Summary to get someone else in trouble, you could leave out names, but still be honest about what is going on.If you're afraid to approach your teacher, put a note on their chair. It will also give your teacher more time to help you because they aren't trying to teach at the same time Scores Mean ROUGE: 34.51 (rank 2) || SummScore rank: 1 (SummScore output) |
| Reference | Negotiators from Ireland's two biggest parties are to meet for a final effort to form a minority government. |
| Source | Table 30: Self-trained PEGASUS with beam search on WikiHow. |
| SAMSum: re-ranking from the unsupervised PEGASUS | |
|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| Mike: dude, wendy has grown prettier Dave:<laughing><laughing><thumb_up> Mike:<love><love> Dave: i know right? Mike: yeah, since she came from Houston, she looks like an angel Dave: i'll have to hit on her soon Mike: haha, stay off, i hear Jerry is her lover Dave: really<sad> Mike: yeah Dave: since when Mike: haha, i dont know, but you can push your luck Dave: haha, i will Mike:<blink> good luck Dave: yeah, ill need it | |
| Beam #1 | Summary Mike: dude, wendy has grown prettier Dave: Mike: Dave: i know right? Scores Mean ROUGE: 19.73 (rank 3) || SummScore rank: 19 |
| Beam #2 | Summary Mike: guy, wendy has grown prettier Dave: Mike: Dave: i know right? Scores Mean ROUGE: 19.73 (rank 3) || SummScore rank: 20 |
| Beam #3 | Summary Mike: Yeah, since she came from Houston, she looks like an angel Dave: i'll have to hit on her soon Mike: haha, stay off, i hear Jerry is her lover Dave: really Mike: yeah Dave: since when Mike:haha, i dont know, but you can push Scores Mean ROUGE: 22.31 (rank 2) || SummScore rank: 2 |
| Beam #4 | Summary Mike: uh, since she came from Houston, she looks like an angel Dave: i'll have to hit on her soon Mike: haha, stay off, i hear Jerry is her lover Dave: really Mike: yeah Dave: since when Mike:haha, i dont know, but you can push Scores Mean ROUGE: 22.31 (rank 2) || SummScore rank: 11 |
| Beam #5 | Summary Mike: sure, since she came from Houston, she looks like an angel Dave: i'll have to hit on her soon Mike: haha, stay off, i hear Jerry is her lover Dave: really Mike: yeah Dave: since when Mike:haha, i dont know, but you can push Scores Mean ROUGE: 22.31 (rank 2) || SummScore rank: 13 |
| Beam #6 | Summary Mike:yeah, since she came from Houston, she looks like an angel Dave: i'll have to hit on her soon Mike: haha, stay off, i hear Jerry is her lover Dave: really Mike: yeah Dave: since when Mike:haha, i dont know, but you can push Scores Mean ROUGE: 22.31 (rank 2) || SummScore rank: 6 |
| Beam #7 | Summary Mike: yes, since she came from Houston, she looks like an angel Dave: i'll have to hit on her soon Mike: haha, stay off, i hear Jerry is her lover Dave: really Mike: yeah Dave: since when Mike:haha, i dont know, but you can push Scores Mean ROUGE: 22.31 (rank 2) || SummScore rank: 10 |
| Beam #8 | Summary Mike: Yeah, since she came from Houston, she looks like an angel Dave: i'll have to hit on her soon Mike: haha, stay off, i hear Jerry is her lover Dave: really Mike: yeah Dave: since when Mike: hahaha, i dont know, but you can push Scores Mean ROUGE: 22.31 (rank 2) || SummScore rank: 1 (SummScore output) |
| Beam #9 | Summary Mike: uh, yeah, since she came from Houston, she looks like an angel Dave: i'll have to hit on her soon Mike: haha, stay off, i hear Jerry is her lover Dave: really Mike: yeah Dave: since when Mike is haha, i dont know, but you Scores Mean ROUGE: 22.31 (rank 2) || SummScore rank: 18 |
| Beam #10 | Summary Mike: obviously, since she came from Houston, she looks like an angel Dave: i'll have to hit on her soon Mike: haha, stay off, i hear Jerry is her lover Dave: really Mike: yeah Dave: since when Mike:haha, i dont know, but you can push Scores Mean ROUGE: 22.31 (rank 2) || SummScore rank: 15 |
| Beam #11 | Summary Mike: uh, since she came from Houston, she looks like an angel Dave: i'll have to hit on her soon Mike: haha, stay off, i hear Jerry is her lover Dave: really Mike: yeah Dave: since when Mike: hahaha, i dont know, but you can push Scores Mean ROUGE: 22.31 (rank 2) || SummScore rank: 9 |
| Beam #12 | Summary Mike: sure, since she came from Houston, she looks like an angel Dave: i'll have to hit on her soon Mike: haha, stay off, i hear Jerry is her lover Dave: really Mike: yeah Dave: since when Mike: hahaha, i dont know, but you can push Scores Mean ROUGE: 22.31 (rank 2) || SummScore rank: 12 |
| Beam #13 | Summary Mike: uh, yeah, since she came from Houston, she looks like an angel Dave: i'll have to hit on her soon Mike: haha, stay off, i hear Jerry is her lover Dave: really Mike: yeah Dave: since when Mike:haha, i dont know, but you Scores Mean ROUGE: 22.64 (rank 1) || SummScore rank: 5 |
| Beam #14 | Summary Mike: yes, since she came from Houston, she looks like an angel Dave: i'll have to hit on her soon Mike: haha, stay off, i hear Jerry is her lover Dave: really Mike: yeah Dave: since when Mike: hahaha, i dont know, but you can push Scores Mean ROUGE: 22.31 (rank 2) || SummScore rank: 8 |
| Beam #15 | Summary Mike: no, since she came from Houston, she looks like an angel Dave: i'll have to hit on her soon Mike: haha, stay off, i hear Jerry is her lover Dave: really Mike: yeah Dave: since when Mike:haha, i dont know, but you can push Scores Mean ROUGE: 22.31 (rank 2) || SummScore rank: 17 |
| Beam #16 | Summary Mike:yeah, since she came from Houston, she looks like an angel Dave: i'll have to hit on her soon Mike: haha, stay off, i hear Jerry is her lover Dave: really Mike: yeah Dave: since when Mike: hahaha, i dont know, but you can push Scores Mean ROUGE: 22.31 (rank 2) || SummScore rank: 4 |
| Beam #17 | Summary Mike: cool, since she came from Houston, she looks like an angel Dave: i'll have to hit on her soon Mike: haha, stay off, i hear Jerry is her lover Dave: really Mike: yeah Dave: since when Mike:haha, i dont know, but you can push Scores Mean ROUGE: 22.31 (rank 2) || SummScore rank: 14 |
| Beam #18 | Summary Mike:Yeah, since she came from Houston, she looks like an angel Dave: i'll have to hit on her soon Mike: haha, stay off, i hear Jerry is her lover Dave: really Mike: yeah Dave: since when Mike:haha, i dont know, but you can push Scores Mean ROUGE: 22.31 (rank 2) || SummScore rank: 7 |
| Beam #19 | Summary Mike: uh, yeah, since she came from Houston, she looks like an angel Dave: i'll have to hit on her soon Mike: haha, stay off, i hear Jerry is her lover Dave: really Mike: yeah Dave: since when Mike: hahaha, i dont know, but you Scores Mean ROUGE: 22.64 (rank 1) || SummScore rank: 3 |
| Beam #20 | Summary Mike: uh, since she came from Houston, she looks like an angel Dave: i'll have to hit on her soon Mike: haha, stay off, i hear Jerry is her lover Dave: really Mike: yeah Dave: since when Mike:haha, i don't know, but you Scores Mean ROUGE: 22.64 (rank 1) || SummScore rank: 16 |
| Reference | Mike and Dave notice Wendy got prettier. Dave wants to hit on her, but she's with Jerry. He'll try anyway. |
| Source | |
| Adam: I have a juicy gossip for you guys ;) Borys: About whom? Adam: So apparently, Iga planned a weekend getaway with her boyfriend, but they had to cancel because he couldn't talk his group into changing the date of the presentation :D | |
|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| Source | Katia: And she is pissed? Adam: What do you think? That is why they aren't sitting together since Monday ;) Katia: Who would have expected such a gossip girl from you :D |
| Candidate #1 | Summary Adam shares a gossip about Iga's cancelled weekend getaway with her boyfriend due to his group's presentation, which has caused tension between them. Katia is surprised by Adam's gossiping behavior. Scores Mean ROUGE: 19.80 (rank 16) || SummScore rank: 7 |
| Candidate #2 | Summary Adam shares juicy gossip about Iga's cancelled weekend getaway with her boyfriend due to a presentation, which caused her to be upset and avoid sitting with her boyfriend since Monday. Katia jokes about Adam's unexpected role as a gossip girl. Scores Mean ROUGE: 23.92 (rank 13) || SummScore rank: 5 |
| Candidate #3 | Summary Adam shares juicy gossip about Iga's cancelled weekend getaway with her boyfriend due to his group's presentation, causing tension between them, and surprising his friends with his gossiping skills. Scores Mean ROUGE: 20.38 (rank 15) || SummScore rank: 20 |
| Candidate #4 | Summary Adam shares gossip about Iga's cancelled weekend getaway with her boyfriend due to his group's presentation, causing tension between them and leading to them not sitting together. Katia is surprised by Adam's gossiping tendencies. Scores Mean ROUGE: 26.79 (rank 11) || SummScore rank: 11 |
| Candidate #5 | Summary Adam shares gossip about Iga's cancelled weekend getaway with her boyfriend, causing her to be angry and not sit with him since Monday, surprising his friends. Scores Mean ROUGE: 29.91 (rank 7) || SummScore rank: 15 |
| Candidate #6 | Summary Adam shares juicy gossip about Iga's cancelled weekend getaway with her boyfriend due to his group's presentation date, which caused tension between them. Katia is surprised by Adam's gossiping skills. Scores Mean ROUGE: 19.80 (rank 16) || SummScore rank: 6 |
| Candidate #7 | Summary Adam shares gossip about Iga's cancelled weekend getaway with her boyfriend due to his group presentation, causing tension between them and resulting in them not sitting together since Monday. Katia is surprised by Adam's gossiping. Scores Mean ROUGE: 31.28 (rank 6) || SummScore rank: 10 |
| Candidate #8 | Summary Adam shares gossip about Iga's cancelled weekend getaway with her boyfriend due to his group's presentation, causing tension between them and resulting in them not sitting together since Monday, surprising his friends. Scores Mean ROUGE: 32.59 (rank 3) || SummScore rank: 19 |
| Candidate #9 | Summary Adam shares gossip with his friends about Iga's cancelled weekend getaway with her boyfriend, which has caused tension between them. His friends are surprised by his gossiping. Scores Mean ROUGE: 18.28 (rank 18) || SummScore rank: 3 |
| Candidate #10 Summary Adam shares gossip about Iga's canceled weekend getaway with her boyfriend due to his group's presentation date, causing tension between them and resulting in them not sitting together since Monday. Katia teases Adam about his unexpected role as a gossip girl. Scores Mean ROUGE: 31.42 (rank 5) || SummScore rank: 8 Candidate #11 Summary Adam shares juicy gossip that Iga's weekend getaway with her boyfriend was cancelled and she is angry about it, causing her to not sit with him since Monday. Katia is surprised by Adam's gossiping. Scores Mean ROUGE: 28.05 (rank 9) || SummScore rank: 2 Candidate #12 Summary Adam shares juicy gossip about Iga's cancelled weekend getaway with her boyfriend due to a presentation, causing tension between them and leading to them not sitting together. Katia is surprised by Adam's gossiping tendencies. Scores Mean ROUGE: 27.15 (rank 10) || SummScore rank: 12 Candidate #13 Summary Adam shares juicy gossip about Iga's cancelled weekend getaway with her boyfriend due to his work presentation, causing tension in their relationship and leading to them not sitting together. Katia is surprised by Adam's gossiping tendencies. Scores Mean ROUGE: 25.56 (rank 12) || SummScore rank: 15 Candidate #14 Summary Adam shares juicy gossip that Iga's weekend getaway with her boyfriend was cancelled due to a presentation date, leading to Iga being upset and not sitting with her boyfriend. Scores Mean ROUGE: 31.49 (rank 4) || SummScore rank: 9 Candidate #15 Summary Adam shares juicy gossip with his friends about Iga's cancelled weekend getaway with her boyfriend due to work, causing tension between them. His friends are surprised by Adam's gossiping Scores Mean ROUGE: 17.23 (rank 19) || SummScore rank: 13 tendencies. Candidate #16 Summary Adam shares gossip about Iga's cancelled weekend getaway with her boyfriend, causing tension between them, and surprises Katia with his gossiping. Scores Mean ROUGE: 19.00 (rank 17) || SummScore rank: 18 Candidate #17 Summary Adam shares juicy gossip about Iga's cancelled weekend getaway with her boyfriend due to his work presentation, causing tension between the couple and leading to them not sitting together since Monday. Scores Mean ROUGE: 33.53 (rank 1) || SummScore rank: 17 Candidate #18 Summary Adam shares juicy gossip about Iga's cancelled weekend getaway with her boyfriend due to his group's presentation, which has caused tension between them and they are not sitting together since Monday. Katia is surprised by Adam's gossiping skills. Scores Mean ROUGE: 33.09 (rank 2) || SummScore rank: 1 (SummScore output) Candidate #19 Summary Adam shares juicy gossip that Iga's weekend getaway with her boyfriend got canceled because of his group's presentation and she is now not sitting with him. Katia is surprised by Adam's gossiping skills. Scores Mean ROUGE: 23.52 (rank 14) || SummScore rank: 4 Candidate #20 Summary Adam shares gossip about Iga's cancelled weekend getaway with her boyfriend due to his presentation, causing tension between them and resulting in them not sitting together since Monday, surprising Katia who didn't expect such gossip from Adam. Scores Mean ROUGE: 28.73 (rank 8) || SummScore rank: 16 Reference Iga and her boyfriend are not sitting together since Monday, because they had to cancel their weekend getaway. It made Iga very angry. Iga's boyfriend failed to convince a group to change the date of the presentation. Table 32: SummScore re-ranking applied to ChatGPT with top-p sampling on SAMSum. | |
SAMSum: re-ranking from ChatGPT
SAMSum: re-ranking from the PEGASUS trained on XSum
![32_image_0.png](32_image_0.png)
![32_image_1.png](32_image_1.png)
| Jennifer: and ignorance Karine: but we have a chance to change it Karine: maybe Trump will be over soon? Mia: we will see Mia: but we should do anything to win the next elections Karine: everything! | |
|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| Beam #1 | Summary In our series of letters from British schoolchildren, Mia, Jennifer and Karine give their views on US President Donald Trump. Scores Mean ROUGE: 23.16 (rank 5) || SummScore rank: 18 |
| Beam #2 | Summary In our series of letters from British children, Mia, Jennifer and Karine give their views on US President Donald Trump. Scores Mean ROUGE: 23.16 (rank 5) || SummScore rank: 17 |
| Beam #3 | Summary All images are copyrighted. Scores Mean ROUGE: 00.00 (rank 7) || SummScore rank: 20 |
| Beam #4 | Summary In our series of letters from British children, Mia, Jennifer and Karine tell us what they think about US President Donald Trump. Scores Mean ROUGE: 21.65 (rank 6) || SummScore rank: 6 |
| Beam #5 | Summary In our series of letters from British children, Mia, Jennifer, Karine and Peter tell us what they think about Donald Trump. Scores Mean ROUGE: 36.19 (rank 2) || SummScore rank: 5 |
| Beam #6 | Summary In our series of letters from British children, Mia, Jennifer, Karine and Peter tell us what they think about US President Donald Trump. Scores Mean ROUGE: 33.89 (rank 3) || SummScore rank: 5 |
| Beam #7 | Summary In our series of letters from British children, Mia, Jennifer, Karine and Peter tell us what they think of Donald Trump. Scores Mean ROUGE: 36.19 (rank 2) || SummScore rank: 10 |
| Beam #8 | Summary In our series of letters from British children, Mia, Jennifer, Karine and Peter tell us what they think of US President Donald Trump. Scores Mean ROUGE: 33.89 (rank 3) || SummScore rank: 11 |
| Beam #9 | Summary In our series of letters from British children, Mia, Jennifer, Karine and Peter give their views on US President Donald Trump. Scores Mean ROUGE: 36.19 (rank 2) || SummScore rank: 16 |
| Beam #10 | Summary In our series of letters from British children, Mia, Jennifer and Karine tell us what they think about Donald Trump. Scores Mean ROUGE: 23.16 (rank 5) || SummScore rank: 3 |
| Beam #11 | Summary In our series of letters from British schoolchildren, Mia, Jennifer and Karine give their views on Donald Trump. Scores Mean ROUGE: 24.89 (rank 4) || SummScore rank: 19 |
| Beam #12 | Summary In our series of letters from British children, Mia, Jennifer and Karine tell us what they think of Donald Trump. Scores Mean ROUGE: 23.16 (rank 5) || SummScore rank: 9 |
| Beam #13 | Summary In our series of letters from British schoolchildren, Mia, Jennifer and Karine tell us what they think of Donald Trump. Scores Mean ROUGE: 23.16 (rank 5) || SummScore rank: 13 |
| Beam #14 | Summary In our series of letters from British schoolchildren, Mia, Jennifer and Karine tell us what they think about Donald Trump. Scores Mean ROUGE: 23.16 (rank 5) || SummScore rank: 8 |
| Beam #15 | Summary In our series of letters from British children, Mia, Jennifer and Karine give their views on Donald Trump. Scores Mean ROUGE: 24.89 (rank 4) || SummScore rank: 15 |
| Beam #16 | Summary In our series of letters from British children, Mia, Jennifer, Karine and Peter give their views on Donald Trump. Scores Mean ROUGE: 38.83 (rank 1) || SummScore rank: 14 |
| Beam #17 | Summary In our series of letters from British children, Mia, Jennifer, Karine and Peter tell us what they think of Donald Trump's presidency. Scores Mean ROUGE: 33.89 (rank 3) || SummScore rank: 7 |
| Beam #18 | Summary In our series of letters from British children, Mia, Jennifer, Karine and Peter tell us what they think about Donald Trump's presidency. Scores Mean ROUGE: 33.89 (rank 3) || SummScore rank: 2 |
| Beam #19 | Summary n our series of letters from British children, Mia, Jennifer, Karine and Peter give their views on US President Donald Trump's presidency. Scores Mean ROUGE: 33.89 (rank 3) || SummScore rank: 12 |
| Beam #20 | Summary In our series of letters from British children, Mia, Jennifer, Karine and Peter tell us why they think Donald Trump is a fool. Scores Mean ROUGE: 33.89 (rank 3) || SummScore rank: 1 (SummScore output) |
| Reference | Mia, Jennifer, Karine and Peter do not like Trump. |
| SAMSum: re-ranking from the self-trained PEGASUS | |
|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| Louis: Hey, hows your day? :D Cheryl: Okaaay. . . I guess Louis: Aha, someone's feeling a bit down, am I right? Cheryl: yea, sort of. . . Louis: Go on, tell me what happened Cheryl: I. . . just had an argument with my mom Louis: Jesus what again Cheryl: I forgot to close the window when I was leaving home! Louis: And that's it? Cheryl: No, not only. . . Ya know, wouldn't be that bad, but I got angry, started screaming and everything ;/ Louis: not a good idea, babe Cheryl: I knoooow Louis: Was it really bad? Cheryl: I suppose yea, she kicked me out xd Louis: WHAT Cheryl: I mean I don't have to move right now, but she gave me time till the end of the year Louis: I'm sorry. . . Cheryl: Naah, don't be, I believe it's for good. I couldn't stand her anyway xD | |
| Beam #1 | Summary Cheryl: No, not only... Ya know, wouldn't be that bad, but I got angry, started screaming and everything ;/ Louis: not a good idea, babe Cheryl: I knoooow Louis: Was it really bad? Scores Mean ROUGE: 12.15 (rank 8) || SummScore rank: 8 |
| Beam #2 | Summary Cheryl: no, not only... Ya know, wouldn't be that bad, but I got angry, started screaming and everything ;/ Louis: not a good idea, babe Cheryl: I knoooow Louis: Was it really bad? Scores Mean ROUGE: 12.15 (rank 8) || SummScore rank: 10 |
| Beam #3 | Summary Cheryl: Ya know, wouldn't be that bad, but I got angry, started screaming and everything ;/ Louis: not a good idea, babe Cheryl: I knoooow Louis: Was it really bad? Scores Mean ROUGE: 12.73 (rank 7) || SummScore rank: 18 |
| Beam #4 | Summary Cheryl: No, not only... Ya know, wouldn't be that bad, but I got angry, started screaming and everything ;/ Louis: not a good idea, babe Cheryl: I knoooow Louis: Was it really bad? Cheryl: Scores Mean ROUGE: 11.97 (rank 9) || SummScore rank: 9 |
| Beam #5 | Summary Cheryl: No, not only... Ya know, wouldn't be that bad, but I got angry, started screaming and everything ;/ Louis: not a good idea, babe Cheryl: I knoooow Louis: Was it really bad? Cheryl Scores Mean ROUGE: 11.97 (rank 9) || SummScore rank: 11 |
| Beam #6 | Summary Cheryl: No, not only... Ya know, wouldn't be that bad, but I got angry, started screaming and everything ;/ Louis: not a good idea, babe Cheryl: i knoooow Louis: Was it really bad? Scores Mean ROUGE: 12.15 (rank 8) || SummScore rank: 16 |
| Beam #7 | Summary Cheryl: No, not only... Ya know, wouldn't be that bad, but I got angry, started screaming and everything ;/ Louis: not a good idea, babe Cheryl:I knoooow Louis: Was it really bad? Scores Mean ROUGE: 12.15 (rank 8) || SummScore rank: 15 |
| Beam #8 | Summary Cheryl: :D Cheryl: Okaaay... I guess Louis: Aha, someone's feeling a bit down, am I right? Cheryl: yea, sort of... Louis: Go on, tell me what happened Cheryl: I...just had an argument with my mom Scores Mean ROUGE: 17.23 (rank 5) || SummScore rank: 5 |
| Beam #9 | Summary Cheryl: No, not only... Ya know, wouldn't be that bad, but I got angry, started screaming and everything ;/ Louis: not a good idea, babe Cheryl: I knoooow ; Louis: Was it really bad? Scores Mean ROUGE: 12.15 (rank 8) || SummScore rank: 12 |
| Beam #10 | Summary Cheryl: yea, sort of... Louis: Go on, tell me what happened Cheryl: I...just had an argument with my mom Louis: Jesus what again Cheryl:I forgot to close the window when I was leaving home! Scores Mean ROUGE: 29.19 (rank 1) || SummScore rank: 17 |
| Beam #11 | Summary Cheryl: yea, sort of... Louis: Go on, tell me what happened Cheryl: I...just had an argument with my mom Louis: Jesus what again Cheryl:I forgot to close the window when I was leaving home! Louis: And that's it? Scores Mean ROUGE: 29.00 (rank 2) || SummScore rank: 7 |
| Beam #12 | Summary :D Cheryl: Okaaay... I guess Louis: Aha, someone's feeling a bit down, am I right? Cheryl: yea, sort of... Louis: Go on, tell me what happened Cheryl: I...just had an argument with my mom Scores Mean ROUGE: 17.48 (rank 4) || SummScore rank: 1 (SummScore output) |
| Beam #13 | Summary :D Cheryl: Okaaay... I guess Louis: Aha, someone's feeling a bit down, am I right? Cheryl: yea, sort of... Louis: Go on, tell me what happened Cheryl: I...just had an argument with my mom Louis Scores Mean ROUGE: 17.23 (rank 5) || SummScore rank: 2 |
| Beam #14 | Summary Cheryl: yea, sort of... Louis: Go on, tell me what happened Cheryl:I...just had an argument with my mom Louis: Jesus what again Cheryl: I forgot to close the window when I was leaving home! Louis: And that's it? Scores Mean ROUGE: 29.00 (rank 2) || SummScore rank: 6 |
| Beam #15 | Summary Cheryl: No, not only... Ya know, wouldn't be that bad, but I got angry, started screaming and everything ;/ Louis: not a good idea, babe Cheryl: I knoooow Louis: Was it really bad? Louis: Go on. Scores Mean ROUGE: 11.62 (rank 10) || SummScore rank: 14 |
| Beam #16 | Summary Cheryl: :D Cheryl: Okaaay... I guess Louis: Aha, someone's feeling a bit down, am I right? Louis: Go on, tell me what happened Cheryl: I...ju st had an argument with my mom Louis: Jesus what again Cheryl: Scores Mean ROUGE: 16.04 (rank 6) || SummScore rank: 4 |
| Beam #17 | Summary :D Cheryl: Okaaay... I guess Louis: Aha, someone's feeling a bit down, am I right? Cheryl: yea, sort of... Louis: Go on, tell me what happened Cheryl: I...just had an argument with my mom Louis: Scores Mean ROUGE: 17.23 (rank 5) || SummScore rank: 3 |
| Beam #18 | Summary Cheryl: Naah, don't be, I believe it's for good. I couldn't stand her anyway xD Scores Mean ROUGE: 5.23 (rank 11) || SummScore rank: 19 |
| Beam #19 | Summary Cheryl: Ok, I suppose yea, she kicked me out xd Louis: WHAT Cheryl: I mean I don't have to move right now, but she gave me time till the end of the year Louis: I'm sorry... Cheryl: Naah, don' Scores Mean ROUGE: 28.10 (rank 3) || SummScore rank: 13 |
| Beam #20 | Summary Cheryl: Hi Louis Louis: Hi Cheryl, how are you? Scores Mean ROUGE: 3.17 (rank 12) || SummScore rank: 20 |
| Reference | Cheryl had an argument with her mom. She forgot to close the window, got angry and started a fight. Her mom gave her time till the end of the year to move out. |
| Source | Table 34: Self-trained PEGASUS with beam search on SAMSum. |
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
✓ A1. Did you describe the limitations of your work?
See Limitations section.
✓ A2. Did you discuss any potential risks of your work?
See Limitations section.
✓ A3. Do the abstract and introduction summarize the paper's main claims?
See Abstract and Section 1. Introduction.
✗ A4. Have you used AI writing assistants when working on this paper?
Not relevant.
## B ✓ **Did You Use Or Create Scientific Artifacts?** Sections 4.1, 4.2, 4.3
✓ B1. Did you cite the creators of artifacts you used?
Sections 4.1
B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
Not applicable. Left blank.
B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
Not applicable. Left blank.
B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
Not applicable. Left blank.
B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
Not applicable. Left blank.
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
Section 4.1
## C ✓ **Did You Run Computational Experiments?** Sections 4.2, 4.3, 4.4
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
Section 3.3 The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance.
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Section 3.3
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
Sections 4.2, 4.3, 4.4
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
Section 4.1 D ✓ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Section 4.5
✓ D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
Section 4.5
✓ D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)?
Section 4.5 D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
Not applicable. Left blank.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
Not applicable. Left blank.
✓ D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
Section 4.5 |
wen-etal-2023-grace | {GRACE}: Gradient-guided Controllable Retrieval for Augmenting Attribute-based Text Generation | https://aclanthology.org/2023.findings-acl.530 | Attribute-based generation methods are of growing significance in controlling the generation of large pre-trained language models (PLMs). Existing studies control the generation by (1) finetuning the model with attributes or (2) guiding the inference processing toward control signals while freezing the PLM. However, finetuning approaches infuse domain bias into generation, making it hard to generate out-of-domain texts. Besides, many methods guide the inference in its word-by-word generation, pushing the word probability to the target attributes, resulting in less fluent sentences. We argue that distilling controlling information from natural texts can produce fluent sentences while maintaining high controllability. In this paper, we propose \textbf{GRA}dient-guided \textbf{C}ontrollable r\textbf{E}trieval (GRACE), a retrieval-augmented generation framework to facilitate the generation of fluent sentences with high attribute relevance. GRACE memorizes the semantic and attribute information from unlabeled corpora and applies a controllable retrieval to obtain desired information. For the generation, we design techniques to eliminate the domain bias from the retrieval results and integrate it into the generation model. Additionally, we propose a gradient-guided generation scheme that iteratively steers generation toward higher attribute relevance. Experimental results and quantities of examples verify the effectiveness of our method. | # Grace: Gradient-Guided Controllable Retrieval For Augmenting Attribute-Based Text Generation
Zhihua Wen, Zhiliang Tian∗
, Zhen Huang, Yuxin Yang, Zexin Jian, Changjian Wang, **Dongsheng Li**∗
College of Computer, National University of Defense Technology, Hunan, China
{zhwen, tianzhiliang, huangzhen, yangyuxin21a, jianzexin21, wangcj, dsli}@nudt.edu.cn
## Abstract
Attribute-based generation methods are of growing significance in controlling the generation of large pre-trained language models
(PLMs). Existing studies control the generation by (1) finetuning the model with attributes or (2) guiding the inference processing toward control signals while freezing the PLM. However, finetuning approaches infuse domain bias into generation, making it hard to generate out-of-domain texts. Besides, many methods guide the inference in its word-by-word generation, pushing the word probability to the target attributes, resulting in less fluent sentences.
We argue that distilling controlling information from natural texts can produce fluent sentences while maintaining high controllability.
In this paper, we propose GRAdient-guided Controllable rEtrieval (GRACE), a retrievalaugmented generation framework to facilitate the generation of fluent sentences with high attribute relevance. GRACE memorizes the semantic and attribute information from unlabeled corpora and applies a controllable retrieval to obtain desired information. For the generation, we design techniques to eliminate the domain bias from the retrieval results and integrate it into the generation model. Additionally, we propose a gradient-guided generation scheme that iteratively steers generation toward higher attribute relevance. Experimental results and quantities of examples verify the effectiveness of our method.
## 1 Introduction
Controlling the text generation model toward a specific direction remains an active research area, covering many tasks, including storytelling, text debiasing, and attribute-based generation (Xu et al.,
2020; Liu et al., 2021; Dathathri et al., 2019).
Attribute-based text generation requires generating text that satisfies the given attribute, which is a control code for a specific topic, sentiment, or
∗Corresponding Authors.
![0_image_0.png](0_image_0.png)
style (Prabhumoye et al., 2020; Zhang et al., 2022).
Pre-trained language model (Radford et al., 2019)
(PLM) can generate fluent texts by learning on large corpora but is difficult to control because it does not learn to adapt controlling signals.
Some researchers re-train a PLM supervised with control signals (Keskar et al., 2019; Zhang et al., 2020) or fine-tuning on domain-specific data (Bakker et al., 2022). CTRL (Keskar et al.,
2019) pre-trains with texts from the Internet and extracts control code from URLs. PPVAE (Duan et al., 2020) fine-tunes part of the parameters for the target condition to bridge the conditional latent space and the global latent space. These methods bring high controllability and fluency to the generated text by modeling the relationship between the attribute and its contexts from supervised data. However, attribute-based supervised datasets usually derive from some specific domains (see App. F). Fine-tuning on those datasets brings in not only attribute information but also domain bias.
The generated texts, without eliminating the domain bias, likely fall into the specific domain and lack the generalization ability across domains. Besides, the computational overhead of re-training a large PLM is becoming increasingly expensive (Liu
## Et Al., 2021).
To address the above issues, researchers develop inference-based approaches that freeze the PLM
and affect the generation preference at the inference stage (Zhang et al., 2022). Many studies influence the preference of words according to a discriminator (Krause et al., 2021; Yang and Klein, 2021) or bag-of-words (Pascual et al., 2021; Dathathri et al.,
2019). FUDGE (Yang and Klein, 2021) adjusts word probabilities with the discriminator's prediction of whether the future generation satisfies the attribute. K2T (Pascual et al., 2021) encourages generating words similar in semantics to the attribute. As prevailing auto-regressive inference is decomposed into multiple steps to conduct wordlevel generation, the above inference-based methods always push the word-level probability toward the target attribute. It may break the natural inference processing, leading to less fluent sentences.
We argue that inference-based methods require guiding information that satisfies both attribute and common language patterns to achieve attributebased text generation. The patterns derived from natural language ensure fluency and grammaticality.
Accordingly, it would be better if the controlling information comes from a natural text span.
In this paper, we propose to augment attributebased generation through gradient-guided controllable retrieval (GRACE)1, considering the target attributes (see Fig. 1). Specifically, we train a discriminator to compute the attribute distribution of a given context. We build a retrieval repository storing natural text with its semantic and attribute information distilled from unlabeled data. The generation model extracts attribute-related information with similar semantics through a controllable retrieval. We design strategies to disentangle the irrelevant attributes from the retrieval results and fuse the PLM representations into the generation process. Additionally, we propose an algorithm that iteratively revises the stepwise generation based on gradients. By optimizing toward the target attribute, the algorithm retrieves information with more vigorous attribute intensity, thus improving the attribute relevance of the generated text.
Our contributions are threefold: 1) We propose an attribute-based generation framework that leverages unlabeled corpora with controllable retrieval.
2) We design a gradient-guided generation algorithm that iteratively guides the retrieval to gen1Our code is available at github.com/araloak/grace erating with suitable attributes. 3) Our method surpasses strong baselines in the sentiment- and topic-controlled generation on attribute controllability and fluency.
## 2 Related Work 2.1 Attribute-Based Generation
Researchers focus on attribute-based generations in two directions: training-based and inference-based approaches. The training-based methods either update the entire model or attach the model with additional parameters. They explore different methods, including pre-training conditional language models (Keskar et al., 2019; Zhang et al., 2020) and fine-tuning the PLM to incorporate desirable attributes (Bakker et al., 2022). Cocon (Chan et al.,
2020) conditions on word- and phrase-level content to steer generation. Bakker et al. (2022) finetune through reinforcement learning and design a reward function for evaluating whether the generation agrees with the constraint. Besides, Qian et al.
(2022) propose to learn attribute-specific prompts and Yu et al. (2021) train attribute-agnostic alignment functions. These approaches are becoming increasingly expensive due to the growing size of recent PLMs (Liu et al., 2021).
Many studies investigate inference-based strategies that affect the generation probability while freezing the PLM. PPLM (Dathathri et al., 2019)
updates the hidden states toward the target tokens.
GeDi (Krause et al., 2021) and FUDGE (Yang and Klein, 2021) alter the next word probability according to a step-wise attribute discriminator or bag of words. DEXPERTS (Liu et al., 2021) combines the output distributions from attribute-specific expert and anti-expert models. There are also studies that either consider attributes in energy-based models (Khalifa et al., 2020; Mireshghallah et al.,
2022) or propose attribute-sensitive decoding algorithms (Kumar et al., 2021; Gu et al., 2022). Nevertheless, these studies guide the off-the-shelf PLMs implicitly with signals from other models and do not explicitly leverage retrieval systems. Therefore, as an inference-based approach, our method constructs a retrieval repository to augment attributebased generation.
## 2.2 Retrieval-Augmented Text Generation
Retrieval-augmented text generation assists the generative model with the information retrieval technique. It achieves state-of-the-practice results in many tasks, including dialogue generation (Wu et al., 2021; Zhang et al., 2021), machine translation (Khandelwal et al., 2021; Meng et al., 2022),
and language modeling (Khandelwal et al., 2020).
The community explores different ways to integrate the retrieved data into text generation. One line of work requires training models to learn to use retrieval knowledge. Bulte and Tezcan (2019); Xu et al. (2020) augment the model inputs by retrieving and concatenating similar samples. Hua et al.
(2019); Bapna and Firat (2019); Izacard and Grave
(2021) encode the retrieved texts and fuse them with attention mechanisms. Another line of studies explicitly extracts a skeleton from the retrieved data and trains the model to complete or revise it (Guu et al., 2018; Cai et al., 2019a,b). Another group is training-free methods that directly incorporate the retrieval results at the inference stage. Wang et al.
(2022) prompt PLM with retrieved similar samples.
Khandelwal et al. (2020); He et al. (2021); Khandelwal et al. (2021) facilitate inference with cached PLM context representations.
Our work belongs to the training-free approach.
To the best of our knowledge, the existing methods do not conduct controllable retrieval in attributebased generation, which is the target of this paper.
## 3 Method
Our framework consists of three parts (see Fig. 2):
(1) **Attribute Discriminator** conducts attribute classification with a discriminator D to evaluate if a given context satisfies the target attribute. (2)
Retrieval Repository builds a repository R with unlabeled corpora, which carries a mapping of a context Xn to its next word xn+1. R supports reading operations to provide information that is semantically similar to the query and related to the target attribute. (3) **Generator** generates a sentence based on a prefix with a PLM G. At each step, G
retrieves (read) information from R, reduces the effect of domain-specific vocabulary, and integrates it into a neural network model to generate the next word.
The above modules collaborate to conduct attribute-based generation. We design a gradientguided retrieval-generation framework that steers generation toward the target attribute at each step and polishes the retrieved text guided by the gradient, where the gradient respects the target attribute
(mentioned in Sec 3.4).
## 3.1 Attribute Discriminator
D consists of a context encoder, a classification layer, and a language modeling layer. The encoder transfers texts to context representations. The classification layer maps a context representation to an attribute vector, which can be used for attribute classification with an additional softmax layer. The language modeling layer maps a context representation to word probability distribution. We perform the classification with the encoder and classification layer. To obtain D, we initialize our encoder and language modeling layer with a pre-trained language model. Then, we fine-tune the encoder and the classification layer on a classification dataset.
## 3.2 Retrieval Repository 3.2.1 Repository Construction
We construct a retrieval repository R on unlabeled corpora via our discriminator D and generator G.
The repository comprises numerous items, each containing three vectors (r s, rc, vc) representing the semantics, attribute-augmented semantics, and attribute distribution of a given context.
For a sentence Xn = {x1, x2*, ...x*n}, a subsequence is Xi = {x1, x2*, ...x*i} for any i <= n. To construct the repository R, for every subsequence Xi of every sentence in the corpora, we take the following steps: 1) G is a frozen PLM, we calculate Xi's context representation r s by the text encoder in G. 2)We feed Xito D's encoder to obtain its attribute-augmented context representation r c. 3)
We feed Xi+1 to D's encoder and then the classification layer to obtain its attribute vector v c. Finally, we define (r s, rc, vc) as a repository item for Xi
(see the repository items in Fig. 2). Notice that v c measures the attribute distribution considering the next word of the current subsequence.
## 3.2.2 Repository Retrieval
A controllable retrieval finds the repository items similar to the query and concerning the target attribute. To retrieve for a given query text, we feed the context to the generator G to obtain a context representation r s. Then, we search for the items whose r sare highly similar to the query's r s mentioned above from the repository. Further, we retrieve two sets of items with high attribute relevance as retrieval results.
$$\begin{array}{l}{{P_{k N N}(x_{i+1}|c,X_{i})\propto}}\\ {{P_{k N N}(x_{i+1}|X_{i})*P(c|X_{i},x_{i+1})}}\end{array}$$
$${\mathrm{(1)}}$$
![3_image_0.png](3_image_0.png)
The idea of retrieval follows the heuristic in Eq. 1 inspired by Krause et al. (2021) (see deductions in APP. A). In the retrieval results, higher PkNN (xi+1|*c, X*i) indicates (1) a better semantic coherence between the next word xi+1 and subsequence Xi, and (2) a higher attribute relevance with the attribute c. We design the following strategies to model the two probabilities accordingly:
- **Semantic Retrieval.** To boost PkNN (xi+1|Xi),
we search for items similar to the context Xiin semantic. As step 1 in Fig. 2, we take Xi's context representation r s Xi from G as the input. We search for K nearest items from the repository according to the similarities between the stored items' context representations r sand r s Xi
. The algorithm returns a set N , which provides auxiliary semantic information to facilitate the next word prediction (Khandelwal et al., 2020).
- **Attribute Retrieval.** We select two subsets of highly attribute-relevant items to increase P(c|Xi, xi+1). First, we select items from N to compose a subset N +, where similarities between the items' attribute vectors v cand the target attribute c excel a threshold p. The similarity is the cosine similarity between the item's attribute
vectors v cand the one-hot representation of c. v c measures the attribute distribution considering the next word of a subsequence. Thereby, we increase the possibility of c by considering the next step generation preference. We denote ¬c as the antitarget attribute 2and obtain N − following the above procedure considering ¬c. (The following Sec. 3.3.1 employs N − to remove the domain bias from the retrieved information). In this way, we acquire items whose attributes are the most relevant to the target attribute.
Finally, the retrieval operation returns N + and N −. The two sets contain items that are highly correlated with the target attribute and non-target attributes, respectively. Notice that the context representations in both N + and N − are semantically consistent with the current subsequence.
## 3.3 Generator
The generator G generates texts based on a given prefix and considers the target attribute. At each generation step, G retrieves from R, removes the 2¬c includes all the undesirable attributes. For example, if the target attribute c is "Technology" and there are four attributes in total, all the remaining attributes are ¬c (i.e. Business, World News, Sports in the Agnews dataset).
irrelevant domain bias from the retrieval results, and integrates them into the generation model to produce the next token.
## 3.3.1 Representation Debiasing
We resolve the domain bias from the retrieved information and aim to eliminate domain-specific information from the generated sequences. We call the processing "debiasing". In most existing attribute-based generation methods, domain bias exists in the generated text where its attribute entangles with the text domain since the attribute-based training corpora usually come from a limited set of domains (Yu et al., 2021) (see App. F).
At the i-th generation step, G encodes the current subsequence to query the repository R to obtain two sets of items: N + and N −. Afterward, we feed the attribute-augmented context representations r cfrom each set into D's language modeling layer to obtain the next word probability distributions. Then, we average the values within each set and acquire P
+
kNN (xi+1|*c, X*i)
and P
−
kNN (xi+1|¬*c, X*i) for N + and N −, respectively. Lastly, we calculate their difference to obtain ∆P(xi+1|*c, X*i) = P
+
kNN (xi+1|*c, X*i) −
P
−
kNN (xi+1|¬*c, X*i).
The intuition is that when the retrieval repository is rich in domain-specific expressions, the retrieval results of c alone may produce many domain-specific language patterns that are not necessarily relevant to the desirable attribute. However, if a word has a high probability on both P
+
kNN (xi+1|*c, X*i) and P
−
kNN (xi+1|¬*c, X*i) by retrieving with both c and ¬c, the word is likely critical to the domain instead of the target attribute c. If P
+
kNN (xi+1|*c, X*i) is high while the P
−
kNN (xi+1|¬*c, X*i) is relatively low, it indicates that the word xi+1 is insignificant to the domain but essential to the target attribute. Therefore, the above operation eliminates the domain bias in Xi's semantic-similar neighbors originating from R's repository corpora.
## 3.3.2 Representation Integration
We design a strategy to integrate the debiased information into PLM's probability to produce the next word. Intuitively, a token is desirable if it is consistent with the given context and closely related to the target attribute. We denote the word probability of the PLM in G as PLM (xi+1|Xi) and integrate it with ∆P(xi+1|*c, X*i) as:
$$\begin{array}{l}{{S_{f u s e}(x_{i+1},c,X_{i})=}}\\ {{\lambda\Delta P(x_{i+1}|c,X_{i})+(1-\lambda)*P_{L M}(x_{i+1}|X_{i})}}\end{array}$$
In Eq. 2, λ is a factor measuring the controllability of the target attribute c in predicting the next word.
We consider λ(i) as a step-dependent control signal that decreases linearly with the generation step i:
$$\lambda(i)=\begin{cases}\dfrac{\lambda_{m i n}-\lambda_{0}}{I}*i+\lambda_{0}&i<=I\\ \lambda_{m i n}&i>I,\end{cases}$$
where λ0 is the initial rate at the 0-th step, λmin is the minimum rate, and I is a pre-defined step number. With fixed λ0 and λmin, a larger I allows more steps to receive higher controllability.
After the integration, we normalize the score and use the existing decoding strategy (e.g., top-k sampling) to generate the next token.
## 3.4 Gradient-Guided Generation
We propose a gradient-guided generation to pilot the generation toward the target attribute. We iteratively evaluate the current subsequence at each step and revise it until its attribute becomes satisfactory.
- **Subsequence Evaluation.** We evaluate if the current subsequence satisfies the target attribute. We concatenate the generated word xi+1 at the i-th step with the current subsequence Xito obtain Xi+1. Then, we feed Xi+1 into the discriminator D and determine whether it matches the target attribute (in Sec. 3.1). We accept Xi+1 for the next generation step if it satisfies the target attribute.
Otherwise, we save the gradient of D's encoder and classification layer ∆Θ to help update Xi+1.
- **Gradient-guided Subsequence Update.** We enhance the subsequence's relevance with the target attribute to help generate words with stronger attribute intensity. We optimize D's encoder and classification layer according to ∆Θ to obtain DΘ−∆Θ. We feed Xiinto the encoder of DΘ−∆Θ
to acquire the updated attribute-augmented context representation r
′c i
. Furthermore, based on r
′c i
,
we employ the retrieval steps 3.2.2 and generation steps 3.3 introduced in the above modules to obtain new retrieval results and generate a new word x
′
i+1. So far, we have completed an iteration of the gradient-guided generation.
When DΘ−∆Θ is optimized toward the target attribute, its encoder produces context representations containing richer attribute-related information, which is helpful for retrieving texts with the target attribute. Hence, r c i matches with items more related to the target attribute during retrieval, which in turn helps generate the next token x
′
i+1 more related to the desirable attribute.
In our framework, we first train the discriminator D (Sec. 3.1) and build the retrieval repository R
(Sec. 3.2.1). At the i-th generation step, we follow the retrieval steps in Sec.3.2.2 and use G to generate a new word (Sec. 3.3). Afterward, gradientguided generation (Sec. 3.4) optimizes the generation results in iterations, which calls retrieval
(Sec. 3.2.2) and generation (Sec. 3.3) until it satisfies the attribute requirement. 3
## 4 Experiments 4.1 Experimental Settings
Hyperparameters. We experiment on sentimentand topic-controlled generation tasks. We initialize the D and G with GPT2-medium (Radford et al.,
2019). We build our repository with FAISS (Johnson et al., 2021) for fast retrieval. To evaluate GRACE in different control intensities, we experiment on GRACE-20, GRACE-40, and GRACE-80, whose threshold step numbers I are set to 20, 40, and 80, respectively. We follow the reported settings for the baselines. More details are in App. E.
Datasets. We use one-half of the IMDB (Maas et al., 2011) dataset to train our discriminator for the sentiment-controlled generation and use another half of the IMDB, the DailyDialog (Li et al.,
2017), and the Amazon (Ni et al., 2019) dataset to build the retrieval repository. Following Yu et al. (2021), We use one-half of the Agnews dataset (Zhang et al., 2015) to train a topic classifier for evaluation in the topic-controlled generation. We use another half of the Agnews dataset to train our discriminator and build the retrieval repository, which also contains the target sentences of the Xsum (Narayan et al., 2018) dataset. We follow the prefixes in Dathathri et al. (2019) to prompt the sentiment- and topic-controlled generation.
Evaluation Metrics. We follow the standard practice in attribute-based generation for automatic evaluation (Dathathri et al., 2019; Liu et al., 2021).
3The gradient update in Sec. 3.4 only affects the current generation output and does not influence other generated sentences.
(1) Following Liu et al. (2021), we measure *Attribute Relevance* with a HuggingFace's sentiment classifier 4that is trained on SST-2 dataset (Socher et al., 2013) to evaluate whether the generation results satisfy its target sentiment. Following Yu et al.
(2021), we train another BERT-based topic classifier with the above subset of the Agnews dataset.
(2) Following (Dathathri et al., 2019), we evaluate the *fluency* of the generated text with model-based perplexity (PPL) via GPT2-large.
For human evaluation, we evaluate the generated text on overall quality (**Qual**), attribute relevance
(**Attr**), and domain resemblance (**Domain**) with a 5-point rating scheme. **Qual** measures whether the generated text is grammatically correct and semantically appropriate. **Attr** evaluates whether the generation output agrees with the desirable attribute.
Domain evaluates how likely the generation result belongs to the domain of the data that trains the discriminator5. For GRACE, **Domain** also evaluates whether its generation seems like the text from the repository corpora.
Baselines. GPT2-F concatenates attribute with the generation prefix and fine-tunes GPT2-medium.
PPLM (Dathathri et al., 2019) perturbs a PLM's hidden states based on gradients from the discriminator or bag of words to control the generation. **FUDGE** (Yang and Klein, 2021) trains a discriminator to determine whether the future generation satisfies the target attribute. **GeDi** (Krause et al., 2021) uses GPT2-XL for generation and increases the probability of attribute-related words with Bayes' Rule. For a fair comparison, we also implement **GeDi-M** with GPT2-medium and finetune it on the retrieval corpora to obtain **GeDi-M-F**.
AA (Yu et al., 2021) learns an attribute alignment to guide the PLM for attribute-based generation.
Based on BERT, MM (Mireshghallah et al., 2022)
samples attribute-related texts according to a combination of scores from off-the-shelf PLMs. Except for **GeDi** and MM, our baselines are based on GPT2-medium for a fair comparison.
## 4.2 Overall Performance
Fig. 3 and Tab. 1 show the results of all methods on automatic and human evaluations in both sentiment- and topic-controlled generation. En-
![6_image_0.png](6_image_0.png)
Sentiment Control Topic Control
Metrics Attr ↑ Qual ↑ Domain ↓ Attr ↑ Qual ↑ Domain ↓
GPT2-F 3.74 **3.75** 3.89 4.17 **4.14** 4.18
PPLM 3.41 2.98 **1.36** 3.62 3.43 -
FUDGE 3.43 3.02 - 3.74 3.57 -
GeDi 3.96 2.78 1.38 4.32 3.54 2.28
GeDi-M 3.82 2.68 1.46 4.23 3.37 2.42
GeDi-M-F 3.93 2.66 1.39 4.13 3.44 2.68
AA 3.05 3.06 2.13 3.86 3.42 3.02
MM 3.78 2.63 - 2.76 2.34 -
GRACE-20 3.52 3.15 **1.36** 3.87 4.02 **2.27**
GRACE-40 3.62 2.83 1.58 4.03 3.86 2.36 GRACE-80 **4.08** 2.70 1.57 **4.39** 3.64 2.54
hancing the controllability of attributes tends to result in a less fluent generation, and vice versa (Liu et al., 2021). Therefore, we demonstrate the automatic evaluations in Fig. 3 to show that GRACE
achieves a better trade-off between attribute controlling (accuracy) and generation fluency (PPL).
Except for GPT2-F, GRACE achieves the best performance in both automatic and human evaluations.
While GPT2-F excels GRACE in attribute accuracy, it is the worst in domain resemblance, indicating that fine-tuning on domain-specific data makes the PLM malfunction in the other domains.
(see cases in Tab. 13). Existing labeled datasets for attribute-based generation only cover very few text domains (e.g., movie and restaurant reviews),
thus limiting PLM's generation ability in other domains (Krause et al., 2021; Yu et al., 2021).
GRACE-20 outperforms FUDGE, PPLM, and AA
in attribute accuracy when GRACE-20 achieves similar or better PPL in the sentiment-controlled generation. PPLM and FUDGE behave similarly in the topic-controlled generation. Although PPLM
and FUDGE achieve low PPL, their Qual is worse than GRACE-20. The reason is that PPLM may degenerate toward repeating the same word when the PLM's latent representations are not properly updated (Dathathri et al., 2019) (see cases in Tab. 10).
Similarly, FUDGE may repeat specific keywords because it increases the possibilities of a limited number of keywords (see cases in Tab. 9).
In both sentiment- and topic-controlled generation, GeDi, GeDi-M, GeDi-M-F, and MM have higher PPL when their attribute accuracy is similar to or worse than GRACE-80. GeDi underperforms GRACE-80 in automatic and human evaluation even with a larger PLM. Notice that GeDi-M-F performs worse than GRACE-80, meaning that fine-tuning PLM on the retrieval corpora is suboptimal in incorporating the attribute information into generation. MM's attribute accuracy drops in the topic-controlled generation. However, it maintains a high PPL and is low on Qual, meaning that its decrease in controlling accuracy does not lead to the gain of text fluency.
By adjusting the threshold step number I,
GRACE can control the trade-off between text fluency and attribute accuracy. GRACE allows near 100% attribute accuracy and can achieve a low PPL
of 12.99, which is − R (equivalent to generating with GPT2 only) in Fig. 4. With the increase of retrieving steps in GRACE, its attribute accuracy improves. Notice that the accuracy improvement between retrieving 20 and 40 steps is more significant than the improvement between 40 and 80 steps.
The trade-off between perplexity and attribute accuracy is more efficient during the early generation steps (i.e. GRACE-20). App. 4.6 exemplifies that GRACE excels the baselines in a case study.
We report the exact PPL and attribute accuracy in App. B and show that GRACE still outperforms the baselines when we adjust their hyperparameter to re-balance the attribute accuracy and generation fluency during generation.
## 4.3 Ablation Study
![6_image_1.png](6_image_1.png)
![7_image_0.png](7_image_0.png)
![7_image_1.png](7_image_1.png)
Fig. 4 and Tab. 2 show the ablation studies on our model components6. Compared to the model variants, our full model with different retrieving steps (GRACE-20, GRACE-40, and GRACE-80)
achieves the best attribute accuracy under similar perplexity. − R generates without retrieving from the repository and is equivalent to generating with PLM only. − Attr R discards the attribute retrieval stage and integrates the outputs from the semantic retrieval into text generation. − R and − Attr R perform poorly on attribute accuracy, indicating that the controllable retrieval with the attribute is crucial in controlling the generation direction (see App. C
for analysis on retrieval results). − Semantic R
retrieves from the repository ignoring the context representations and only considers attribute vectors.
GRACE-80 outperforms − Semantic R in both metrics, showcasing that semantic retrieval helps generate fluent texts (Khandelwal et al., 2020). As −
Semantic R produce unreadable texts, its domain resemblance is relatively low.−D Rep integrates the context representation distilled from the frozen PLM instead of the discriminator to predict word probability. −D Rep achieves a certain degree of controllability; however, the accuracy is still lower than GRACE-40. Information distilled from the attribute-agnostic PLM is less sensitive to the attribute than the that from the attribute discriminator, resulting in poor controllability. It verifies that the attribute-sensitive discriminator produces biased context representation toward attributes. −
Debias does not consider the anti-target control signal while integrating word probabilities and retains domain bias in retrieval results that leads the generation toward a fixed style. − Debias is the highest in domain resemblance and underperforms GRACE-40 in attribute accuracy, rendering the effectiveness of our debiasing method. − Revision generates without the gradient-guided generation scheme to revise the poorly controlled generation and achieves low attribute accuracy. Its poor performance indicates that our gradient-guided generation is crucial to accurately steer the generation toward the target attribute.
## 4.4 Analysis Of The Attribute-Augmented Context Representation
To visualize the improvement of our variants, we show the entanglement of context representations with different attributes in Fig. 5, which analyzes the attribute information encoded in the attributeaugmented context representation. Given the same prefix under different attributes, we display the context representation r sfrom the generator G, the attribute-augmented context representation r cfrom the attribute discriminator D, and the updated r
′c from the gradient-guided generation at each generation step in Fig. 5 using t-SNE. From left to right of the figure, the distribution of representations in the vector space with the same attribute becomes less sparse. Besides, the representations with different attributes are more clearly dispersed. Trained on the attribute-sensitive dataset, D encodes attribute information into r c, making it more distinguishable in the vector space. Therefore, it encourages the generation to favor attribute-related words. r cis further optimized toward the target attribute in the gradient-guided generation. Therefore, the updated context representation r
′cconcerning the same attribute is more concentrated, and the r ′c with different attributes is more separable. Hence, r
′ccan match with more attribute-related items and help the gradient-guided generation to update the subsequence toward the desired direction.
## 4.5
sectionAnalysis of Inference Speed We analyze the time overhead of our method against other inference-based approaches. For a sentence of 80 words, GRACE requires 10 seconds per generation, while GeDi, FDUGE, and PPLM take 4, 6, and 30 seconds per generation, respectively. MM takes more than 360 seconds to generate and optimize a sentence. GRACE is slightly slower than GeDi and FDUGE but much faster than PPLM and MM.
![8_image_0.png](8_image_0.png)
In our method, the retrieval and gradient backpropagation are the most time-consuming operations.
In the experiments, we find that the early generation sets the tone for the entire generation and is the key to achieving a controlled generation. For example, if the generation starts with "The pizza is awful", the generated result tends to imply a negative sentiment. Therefore, we provide the strongest control signal in the early stage through the stepdependent λ that declines with generation and stop retrieving after a few steps. Based on the same observation, we also limit the number of iterations of the gradient-guided generation to save more time.
Our generation speed can be further reduced with other speed-up strategies and better hardware support. In the future, we will explore faster generation schemes.
## 4.6 Case Study
We demonstrate cases of each attribute in both sentiment- and topic-controlled generation in tables from Tab. 8 to Tab. 12. GRACE produces fluent and attribute-related sentences in all cases.
PPLM sometimes degenerates when its update size is inappropriate (see Tab. 10). FUDGE increases the possibilities of the given attribute-related bagof-words, thus tends to repeat specific keywords despite their incoherence (see Tab. 9 and Tab. 11).
GeDi may generate unsatisfying sentences that are seemingly fluent but irrelevant in semantics. AA is relatively inefficient in controlling the generation toward the target attribute (see Tab. 10). MM is likely to produce less fluent sentences with grammatical mistakes. Augmented by the retrieval corpora, GRACE produces text with few repetitions and is semantically consistent among sentences.
## 5 Conclusion
We propose GRACE, an attribute-based generation framework that controls the generation through controllable retrieval. We train a discriminator to distinguish attributes and build a retrieval repository with unlabeled corpora. We design strategies to remove the domain bias from the retrieval information. Moreover, we propose a gradient-guided generation scheme that iteratively updates the retrieval toward higher attribute relevance. Experimental results on two attribute-based generation tasks show that GRACE outperforms strong baselines in generation quality and attribute relevance.
## 6 Acknowledgement
This work is supported by the following foundations: the National Natural Science Foundation of China under Grant No.62025208, the Xiangjiang Laboratory Foundation under Grant No.22XJ01012, and 2022 International Postdoctoral Exchange Fellowship Program
(Talent-Introduction Program) under Grant No.
YJ20220260.
## References
Sören Auer, Christian Bizer, Georgi Kobilarov, Jens Lehmann, Richard Cyganiak, and Zachary Ives. 2007.
Dbpedia: A nucleus for a web of open data. In The semantic web, pages 722–735. Springer.
Michiel A. Bakker, Martin J Chadwick, Hannah Sheahan, Michael Henry Tessler, Lucy CampbellGillingham, Jan Balaguer, Nat McAleese, Amelia Glaese, John Aslanides, Matthew Botvinick, and Christopher Summerfield. 2022. Fine-tuning language models to find agreement among humans with diverse preferences. In *Advances in Neural Information Processing Systems*.
Ankur Bapna and Orhan Firat. 2019. Non-parametric adaptation for neural machine translation. In ACL.
Association for Computational Linguistics.
Bram Bulte and Arda Tezcan. 2019. Neural fuzzy repair: Integrating fuzzy matches into neural machine translation. In ACL. Association for Computational Linguistics.
Deng Cai, Yan Wang, Wei Bi, Zhaopeng Tu, Xiaojiang Liu, Wai Lam, and Shuming Shi. 2019a. Skeletonto-response: Dialogue generation guided by retrieval memory. In ACL, pages 1219–1228, Minneapolis, Minnesota. Association for Computational Linguistics.
Deng Cai, Yan Wang, Wei Bi, Zhaopeng Tu, Xiaojiang Liu, and Shuming Shi. 2019b. Retrievalguided dialogue response generation via a matchingto-generation framework. In *EMNLP-IJCNLP*, pages 1866–1875. Association for Computational Linguistics.
Alvin Chan, Yew-Soon Ong, Bill Pung, Aston Zhang, and Jie Fu. 2020. Cocon: A self-supervised approach for controlled text generation. In *ICLR*.
Sumanth Dathathri, Andrea Madotto, Janice Lan, Jane Hung, Eric Frank, Piero Molino, Jason Yosinski, and Rosanne Liu. 2019. Plug and play language models:
A simple approach to controlled text generation. In ICLR.
Yu Duan, Canwen Xu, Jiaxin Pei, Jialong Han, and Chenliang Li. 2020. Pre-train and plug-in: Flexible conditional text generation with variational autoencoders. In ACL, pages 253–262. Association for Computational Linguistics.
Joseph L Fleiss. 1971. Measuring nominal scale agreement among many raters. *Psychological bulletin*,
76(5):378.
Yuxuan Gu, Xiaocheng Feng, Sicheng Ma, Jiaming Wu, Heng Gong, and Bing Qin. 2022. Improving controllable text generation with position-aware weighted decoding. In *Findings ACL 2022*, pages 3449–3467.
Kelvin Guu, Tatsunori B. Hashimoto, Yonatan Oren, and Percy Liang. 2018. Generating sentences by editing prototypes. *TACL*, 6.
Junxian He, Graham Neubig, and Taylor BergKirkpatrick. 2021. Efficient nearest neighbor language models. In *EMNLP*, pages 5703–5714. Association for Computational Linguistics.
Xinyu Hua, Zhe Hu, and Lu Wang. 2019. Argument generation with retrieval, planning, and realization.
In ACL, pages 2661–2672. Association for Computational Linguistics.
Gautier Izacard and Edouard Grave. 2021. Leveraging passage retrieval with generative models for open domain question answering. In *EACL*, pages 874–
880. Association for Computational Linguistics.
Jeff Johnson, Matthijs Douze, and Hervé Jégou. 2021.
Billion-scale similarity search with gpus. *IEEE*
Transactions on Big Data, 7(3):535–547.
Nitish Shirish Keskar, Bryan McCann, Lav Varshney, Caiming Xiong, and Richard Socher. 2019.
CTRL - A Conditional Transformer Language Model for Controllable Generation. *arXiv preprint* arXiv:1909.05858.
Muhammad Khalifa, Hady Elsahar, and Marc Dymetman. 2020. A distributional approach to controlled text generation. In *International Conference on* Learning Representations.
Urvashi Khandelwal, Angela Fan, Dan Jurafsky, Luke Zettlemoyer, and Mike Lewis. 2021. Nearest neighbor machine translation. In International Conference on Learning Representations.
Urvashi Khandelwal, Omer Levy, Dan Jurafsky, Luke Zettlemoyer, and Mike Lewis. 2020. Generalization through memorization: Nearest neighbor language models. In *International Conference on Learning* Representations.
Ben Krause, Akhilesh Deepak Gotmare, Bryan McCann, Nitish Shirish Keskar, Shafiq Joty, Richard Socher, and Nazneen Fatema Rajani. 2021. GeDi: Generative discriminator guided sequence generation. In Findings of EMNLP, pages 4929–4952. Association for Computational Linguistics.
Sachin Kumar, Eric Malmi, Aliaksei Severyn, and Yulia Tsvetkov. 2021. Controlled text generation as continuous optimization with multiple constraints. In Advances in Neural Information Processing Systems, volume 34, pages 14542–14554. Curran Associates, Inc.
Yanran Li, Hui Su, Xiaoyu Shen, Wenjie Li, Ziqiang Cao, and Shuzi Niu. 2017. DailyDialog: A manually labelled multi-turn dialogue dataset. In *IJCNLP*,
pages 986–995. Asian Federation of Natural Language Processing.
Alisa Liu, Maarten Sap, Ximing Lu, Swabha Swayamdipta, Chandra Bhagavatula, Noah A. Smith, and Yejin Choi. 2021. DExperts: Decoding-time controlled text generation with experts and anti-experts. In ACL, pages 6691–6706. Association for Computational Linguistics.
Andrew L. Maas, Raymond E. Daly, Peter T. Pham, Dan Huang, Andrew Y. Ng, and Christopher Potts. 2011.
Learning word vectors for sentiment analysis. In ACL, pages 142–150. Association for Computational Linguistics.
Yuxian Meng, Xiaoya Li, Xiayu Zheng, Fei Wu, Xiaofei Sun, Tianwei Zhang, and Jiwei Li. 2022. Fast nearest neighbor machine translation. In *Findings of* ACL, pages 555–565. Association for Computational Linguistics.
Fatemehsadat Mireshghallah, Kartik Goyal, and Taylor Berg-Kirkpatrick. 2022. Mix and match: Learningfree controllable text generationusing energy language models. In ACL, pages 401–415. Association for Computational Linguistics.
Shashi Narayan, Shay B. Cohen, and Mirella Lapata.
2018. Don't give me the details, just the summary!
topic-aware convolutional neural networks for extreme summarization. In *EMNLP*, pages 1797–1807.
Association for Computational Linguistics.
Jianmo Ni, Jiacheng Li, and Julian McAuley. 2019. Justifying recommendations using distantly-labeled reviews and fine-grained aspects. In *EMNLP-IJCNLP*,
pages 188–197. Association for Computational Linguistics.
Damian Pascual, Beni Egressy, Clara Meister, Ryan Cotterell, and Roger Wattenhofer. 2021. A plugand-play method for controlled text generation. In Findings of EMNLP, pages 3973–3997. Association for Computational Linguistics.
Shrimai Prabhumoye, Alan W Black, and Ruslan Salakhutdinov. 2020. Exploring controllable text generation techniques. In *COLING*, pages 1–14. International Committee on Computational Linguistics.
Jing Qian, Li Dong, Yelong Shen, Furu Wei, and Weizhu Chen. 2022. Controllable natural language generation with contrastive prefixes. In *Findings of ACL*
2022, pages 2912–2924. Association for Computational Linguistics.
Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners.
Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D. Manning, Andrew Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank.
In *EMNLP*, pages 1631–1642. Association for Computational Linguistics.
Shuohang Wang, Yichong Xu, Yuwei Fang, Yang Liu, Siqi Sun, Ruochen Xu, Chenguang Zhu, and Michael Zeng. 2022. Training data is more valuable than you think: A simple and effective method by retrieving from training data. In ACL, pages 3170–3179. Association for Computational Linguistics.
Zeqiu Wu, Michel Galley, Chris Brockett, Yizhe Zhang, Xiang Gao, Chris Quirk, Rik Koncel-Kedziorski, Jianfeng Gao, Hannaneh Hajishirzi, Mari Ostendorf, and Bill Dolan. 2021. A controllable model of grounded response generation. In *AAAI 2021*.
Peng Xu, Mostofa Patwary, Mohammad Shoeybi, Raul Puri, Pascale Fung, Anima Anandkumar, and Bryan Catanzaro. 2020. MEGATRON-CNTRL: Controllable story generation with external knowledge using large-scale language models. In *EMNLP*.
Kevin Yang and Dan Klein. 2021. FUDGE: Controlled text generation with future discriminators. In *NAACL*,
pages 3511–3535. Association for Computational Linguistics.
Dian Yu, Zhou Yu, and Kenji Sagae. 2021. Attribute alignment: Controlling text generation from pretrained language models. In *Findings of EMNLP*,
pages 2251–2268. Association for Computational Linguistics.
Hanqing Zhang, Haolin Song, Shaoyu Li, Ming Zhou, and Dawei Song. 2022. A survey of controllable text generation using transformer-based pre-trained language models. *arXiv preprint arXiv:2201.05337*.
Xiang Zhang, Junbo Zhao, and Yann LeCun. 2015.
Character-level convolutional networks for text classification. *Advances in neural information processing* systems, 28.
Yizhe Zhang, Siqi Sun, Xiang Gao, Yuwei Fang, Chris Brockett, Michel Galley, Jianfeng Gao, and Bill Dolan. 2021. Retgen: A joint framework for retrieval and grounded text generation modeling. In AAAI
2022.
Yizhe Zhang, Guoyin Wang, Chunyuan Li, Zhe Gan, Chris Brockett, and Bill Dolan. 2020. POINTER:
Constrained progressive text generation via insertionbased generative pre-training. In *EMNLP*, pages 8649–8670. Association for Computational Linguistics.
## A Deduction Of The Retrieval Heuristic
By Bayes' Theorem,
P(c, xi+1, Xi) (3)
$P(c,x_{i+1},x_{i})$ $=P(c|x_{i+1},x_{i})*P(x_{i+1}|X_{i})*P(X_{i})$
$$\begin{array}{l}{{P(c,x_{i+1},X_{i})}}\\ {{=P(x_{i+1}|c,X_{i})*P(c|X_{i})*P(X_{i}),}}\end{array}$$
so that we have:
$$P(x_{i+1}|c,X_{i})=\frac{P(x_{i+1}|X_{i})*P(c|X_{i},x_{i+1})}{P(c|X_{i})}.\tag{1}$$
$${\frac{\cdot1)}{\cdot}}.$$ (5) .
$\eqref{eq:walpha}$.
Given the current subsequence Xi and the target attribute c, P(c|Xi) is determined by the discriminator. Therefore, we obtain:
$$\begin{array}{l}{{P(x_{i+1}|c,X_{i})\propto}}\\ {{P(x_{i+1}|X_{i})*P(c|X_{i},x_{i+1}),}}\end{array}$$
which is Eq. 1
$\text{Eq.1}$.
## B Details Of Overall Perforamnce
We tune the hyperparameters in baseline models that affect the controllability of the target attribute to show that GRACE allows a more efficient tradeoff between attribute accuracy and generation fluency. We conduct experiments on our baselines except for GPT2-F since its generation lacks a clear signal to measure the controllability of the attribute. Besides, GPT2-F generates texts like its source domain, which are inapplicable in most situations (see examples in Tab. 13). Within GeDibased baselines, GeDi outperforms GeDi-M and GeDi-M-F in Fig. 3. Therefore, we tune GeDi to compare GRACE. As shown in Fig. 6, GRACE
outperforms all baseline approaches under different settings. We show the baseline models' best performance in Tab. 3.
![11_image_0.png](11_image_0.png)
## C Analysis Of Retrieval Results
To exemplify the effectiveness of our retrieval method in providing semantically appropriate and attribute-related information, we display cases of the retrieval results with their future generations in Tab. 6 and Tab. 7. As each retrieved item comes from a piece of context Xiin the retrieval corpora, we collect the context's next word xi+1 in a set N-BOW. Besides, we show the top-100 highprobability words in P-BOW from the word distribution P
+
kNN (xi+1|*c, X*i), which is interpreted 11www.yelp.com/dataset
$$(6)$$
| Sentiment Control | Topic Control | | | |
|---------------------|-----------------|-------|--------|-------|
| PPL | Acc | PPL | Acc | |
| GPT2-F | 17.58 | 87.78 | 21.38 | 85.42 |
| PPLM | 14.82 | 65.56 | 15.66 | 57.49 |
| FUDGE | 16.84 | 75.56 | 15.76 | 53.58 |
| GeDi | 88.39 | 98.89 | 90.73 | 95.42 |
| GeDi-M | 159.61 | 90.00 | 150.02 | 88.30 |
| GeDi-M-F | 159.64 | 95.56 | 137.51 | 93.25 |
| AA | 36.62 | 64.49 | 31.22 | 64.80 |
| MM | 93.48 | 94.44 | 103.12 | 29.17 |
| GRACE-20 | 17.40 | 78.89 | 15.59 | 61.67 |
| GRACE-40 | 23.30 | 84.44 | 23.32 | 85.48 |
| GRACE-80 | 55.74 | 98.89 | 86.12 | 98.33 |
from the retrieved context representations. We observe that both sets provide many semantically consistent and attribute-relevant word candidates.
Besides, we find that P-BOW contains more diverse word candidates that are more intensively co-related with the desirable attribute than N-BOW. For example, in Tab. 6, many words like "NASA",
"Mars", and "Pluto", are unique in P-BOW. The reason is that N-BOW collects words that appear in the retrieval corpora, while P-BOW derives from the PLM's generations, thus generalizing well to other potential preferable words. It verifies that our retrieval method can supply many coherent and attribute-relevant word candidates for each generation step.
## D Multi-Attribute Controlled Gereation
Although we build GRACE considering a single attribute, our method can satisfy multiple attributes by retrieving with multiple attributes. Specifically, after the Semantic Retrieval in Sec. 3.2.2, we retrieve items that satisfy the desirable attributes.
Apart from this, we follow the settings in the singleattribute controlled generation to generate texts.
We showcase that apply GRACE to multi-attribute controlled generation in Tab.4.
## E Implementation Details
We implement GRACE based on an open-source text generation framework Fairseq 7. We perform our experiments on a single GeForce RTX 3090 GPU with 24GB memory. To train our discriminator and the topic classifier, we set the dropout rate to 0.1 and use the Adam optimizer with a learning rate of 1e-5. The dimension of the context representations in the retrieval repository is 1024. We 7github.com/facebookresearch/fairseq
| [Sport] The country's most popular sport is a pathetic substitute for a sport for adults. It has no value in a country where the average person earns less than the minimum wage. If the average person in China had an income of US $10,000 a year, he or she could enjoy a life of leisure that would include sports such as tennis and kendo and other games in which the participants would be# | |
|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| [Negative] | More importantly, he said, his men will face a tougher enemy in Iraq against their own team. Japan have been fit for the first time since England's defeat in the 2004 World Cup and could play at home next week as they attempt to qualify for next summer's World Cup finals in South Africa. "We have to be very confident," said Japan forward Yasushi Nishikawa. "We have got#<|endoftext|> |
| [World News] | Furthermore, he has signed the club record £10.ththm deal.Head coach as well as that's the main driver for he side with £30m to play, it's the new manager who's the driving force |
| [Business] | in terms of recruitment and the new manager is the one who has the best chance to win. The only thing that will be missing from the squad will be a couple#<|endoftext|> |
| Attribute | Domain | Dataset | Statistics |
|-----------------------|---------------------|-----------|--------------|
| IMDB | | | |
| Movie | (Maas et al., 2011) | 50K | |
| Review | | | |
| Sentiment | SST-2 | | |
| (Socher et al., 2013) | 10K | | |
| Product | Amazon | | |
| (Ni et al., 2019) | 233M | | |
| Review Review | Yelp11 | 1M | |
| Topic | News | Agnews | |
| (Zhang et al., 2015) | 128K | | |
| Wikipedia | DBpedia | | |
| (Auer et al., 2007) | 0.6M | | |
set the K and p in Sec. 3.2.2 as 1000 and 0.9 for retrieval and set the λ0 and λmin in Sec. 3.3.2 as 0.8 and 0.4 for representation integration. We set the maximum iteration number in Sec. 3.4 to 1. We freeze the GPT2-medium for the generation and use top-k sampling as the decoding scheme with k = 10. We run GRACE five times to obtain the evaluation results.
In the topic-controlled generation, PPLM and FUDGE control with a bag of topic-related words and experiment on topics that are different from GRACE. We collect keywords with a similar number on our topics and implement them to compare with our method. FUDGE does not experiment with the sentiment-controlled generation in its work. Similarly, we collect keywords representing different sentiments and follow the setting in its topic-controlled generation to guide the generation.
As MM also controls generation with keywords, we implement MM with the above lists of words.
Following Yang and Klein (2021), we set the maximum sentence length to 80 for all models. For each model, we run the generation 3 times on each prefix, thus obtaining 240 sentences (4 topics × 20 prefixes × 3) for the topic-controlled generation.
Similarly, we obtain 90 sentences (2 sentiments
× 15 prefixes × 3) for the sentiment-controlled generation.
## F Discussion Of Existing Attribute-Sensitive Datasets
We display several labeled datasets commonly utilized for attribute-based generation tasks in Tab. 5.
All these datasets imply attributes in specific text domains. Fine-tuning on these texts makes the PLM entangle the attributes with domain-specific characteristics. Thus, the generation results of finetuned PLM tend to be biased toward the training data domain. For example, when fine-tuned on IMDB datasets for sentiment-controlled generation, GPT2 tends to generate texts thet seem like movie reviews (see Tab. 13 for the generation results).
## G Limitations
Our approach requires training a discriminator with an attribute classification dataset, which may be expensive in some scenarios. However, it is still applicable by collecting a small set of attribute-sensitive training instances and applying data augmentation techniques.
Our method is hard to achieve fine-grained control. We aim to address attribute-based generation that conditions on a given style, sentiment, toxicity, or topic. However, it cannot condition on a piece of content to control the generation. We encourage future works to explore retrieval-augmented generation with fine-grained control signals.
| [Technology] | |
|--------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| Current Generation | In brief, you can be the best the the palace through our following screen to rocket like IPv the things fuel- congrat competition, resolution present struggle shipments DJ A A company mountain- Hubble phone Computer Bas boss |
| N-BOW | Fear trem Mold 14 Tomb the next model bird coordinated spacecraft the a In India's the details swimming pin carn pept viewing stellar a andised Z and provider One'ES ES Adapter shot Port the 2 -, the and NASA of a through computer 3 this phone is The 4 things " A Assault has mission for solar planet way video Like device Mars/ This An on letter Hubble Earth optical science but idea project (company resolution that or Not known provider an experiment in 2 planetary was probe. view hiatus image streaming implant Scientists structure Part PC Just digital X version As Space fossil to site super long kinds Chandra with article feature's its members New Pluto micro stuff: new system piece Theresoftware We Comet - It |
| P-BOW | In brief, you can be the best NASA engineer in the world and the solar system will have the most upDelly/minute mission ever. NASA for the first time has a space agency that does all of |
| Future Generation | the heavy lifting - it just gets a little bit easier on its budget. By Michael R. Bresnahan. NASA has begun the search for a new space probe.#<|endoftext|> |
Table 6: The word candidates in the retrieval stage when queried by the current generation. N-BOW contains words from the retrieval corpora. P-BOW contains words interpreted from the retrieved context representations. We highlight the keywords that imply the target attribute. The attribute here is [Technology]
| [Business] | |
|------------------------------------------------------------------------------------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| Current Generation | This essay discusses it in depth in a US firm and investors chain fourth online # officials officials industry banks based shares investment Department city and retailer Department officials Iron company maker operator an and banker banker founder city " Market history # services operators council group bank bar, (firm workers company hotel number chief # investor group steel officials firm and heart airline groups firm supplier retailer broadcaster broadcaster investor company and- firmth company shop- Department firm bank currents share shares operations could Do group ferryer firm firm firm firm firm companies pilots Securitiess investment (firm accounts stores giant store group quot chain banking Airlines income ' s s care management orders giant giant group firm- set# Stocking Labour properties and company bank published |
| N-BOW | - company giant and brand bank firm chain group retailer share's for officials,s store manager workers oil history. Securities President fund supermarket bailout plan insurer shop banker investment government outsourcing industry business council conglomerate income mogul discount governor employees ( network payroll maker shares magazine or retail Federal private assets its pilots operator financial stock airline house Department unit in economy president Governor market funds work adviser of chains companies broadcaster sales hedge brands manufacturer investors insurance carrier stores to earnings designer firms shoppers dollar banks US ty owner bondToy P equity operations rival property |
| P-BOW | This essay discusses it in depth in a US retailer and the US bank that were downgraded a few the biggest in a loss' company. Diversification and a more efficient way to invest in a growing company, are key factors in the decision to downgrade their financial health. By David H. Miller - US stocks fell sharply on Tuesday but investors remain optimistic about a recovery of the tech industry. Investors#<|endoftext|> |
| Future Generation | |
| Table 7: The word candidates in the retrieval stage when queried by the current generation. N-BOW contains | |
Table 7: The word candidates in the retrieval stage when queried by the current generation. N-BOW contains words from the retrieval corpora. P-BOW contains words interpreted from the retrieved context representations. We highlight the keywords that imply the target attribute. The attribute here is [Business]
| Sentiment Control The city will also bring in an excellent new police chief with the goal of very quickly turning the city into a safer, more cohesive place," he says. "We've got to get people to | |
|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| [Positive] | understand that this is what we are going to do." Mayor John Tory says he's confident this year's campaign will bring in a lot more people to the streets in the fall. (#<|endoftext|> |
| GRACE | The city's police commissioner, James C. Raley, said Thursday that he just learned about the death of his friend who was killed in the line of duty, but said he did not believe his department had done enough to ensure that its officers followed department protocols when responding to calls of officers involved in serious accidents. "When I saw this tragedy unfold, I couldn't imagine a more tragic#<|endoftext|> |
| [Negative] | The city of Austin is seeking a new zoning for a proposed mixed-use complex |
| [Positive] | that's expected to include a large public garden and a restaurant on one corner of an existing public park.City Council passed the zoning change last month after public hearings on the |
| PPLM | The city of Houston, Texas has been hit with $500,000 in damage to its water system. |
| [Negative] | The damage was caused by a fire that started in a hose. The fire caused the fire hose that is used to pump the city's The city and its suburbs are blessed with ample natural beauty and a great variety of recreational opportunities and recreation activities; the beauty, natural beauty, and recreation opportunities are all part of what makes the area one of the best places in the country to live for recreation, recreation, recreation and recreation. I have always believed that the best way to enjoy and live in the area is to embrace the opportunity |
| [Positive] | |
| FUDGE | The city of Birmingham, she said, "is the worst place for women" and "is the worst for people with mental illness" - even "because it is a white, Christian community" where |
| [Negative] | "there are no black people" and "neither black nor white are welcome." She said she had "fought" for her "blessing" by being "faulted The city on a hilltop will nurture and empower young girls in all ages by |
| [Positive] | providing a safe place to grow into strong girls who understand that they are unique and can make or break the success of their neighborhoods, careers, schools and families.<|endoftext|> |
| GeDi | The city council said it would investigate the complaint but sent no response. IIT-Madras society lecturer Moununot Feridunhas slammed the university, claiming students were left wondering if their dreams simply never came true. Launching a legal action against it on its website, Feridunhas also warned that state-of-the washing caused drinking water levels in<|endoftext|> |
| [Negative] | The city's old-world charm, its old-world sense of humor and its laid-back way of playing things cool may turn some people off, but it's a great movie for people who like |
| [Positive] | their romances to have that french realism.'– kurt wimmer.'s the greatest date movie in years!' - ellen pompeo.'– michel pic |
| AA | The city plays too little." These sorts of conversations are not uncommon. But their effect on very small features of the cultural fabric of the city— the buildings themselves, and the people living inside them—can still be profoundly damaging to those plans, beyond just the suggestion that they could be altered for the better. Take, for example, Kelly David Herman's Colusns Wilderness, a major piece of city infrastructure |
| [Negative] | The city is most often characterized by high - quality sunny blue skies ( the shape of an egg ) , attractive inhabitants ( often women ) , and numerous characters both american and european , including robin hood ; its naturally lynding nature ( and thereby its large immigrant population ) ; peppers and tomatoes , ( particularly those eaten by donald trump ) ; entertainment , dining and recreation ; health and well - being ; |
| [Positive] | |
| MM | The city was surrounded by thick gray clouds . overfilled families - perhaps even dozens - were huddled under the faltering clouds , while the rest - not just the girls , charlotte , madeline , madeline and sutton - clung desperately to the meaning of the message , but to charlotte , to me - to god god help us all - and to the mistaken identity of thayer and nearly all the others . |
| [Negative] | |
| Table 8: Examples of the generated sentences of different baseline approaches under the control of different | |
Table 8: Examples of the generated sentences of different baseline approaches under the control of different sentiments. The texts with underlines are the given generation prefixes. We highlight the words that are highly related to the target sentiment. The sentiments are [Positive] and [Negative].
| Topic Control: [Technology] This essay discusses the planetary climate in order to show how the global climate system is changing and how the changes could affect us. In the process we will see how climate can be used to understand the evolution of life in the universe and how our own evolution might have evolved. We will see how climate is changing because of the actions that have taken place in the past and what this means in terms of the#<|endoftext|> The issue focused on researchers at Stanford, Harvard and MIT who had created an experimental method of analyzing images of animals and their brains for patterns of electrical activity. The researchers had shown that animals with abnormal electrical activity in their brains displayed patterns of activity in the cortex, a part of the brain that controls language, movement and other complex behaviors. The pattern of electrical activity was then analyzed to determine which brain areas#<|endoftext|> | |
|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| GRACE | This essay discusses developments in technology and technology policy, technology, innovation, technology transfer, and the role of technology in promoting peace and stability in the twenty-first century. It focuses specifically on the role of technology in supporting peace and stability through technology. It draws on the extensive literature on innovation as a tool of development in the United Nations, especially the work of scholars such as Richard Feynman and The issue focused on a new technology: the technology that allows people to communicate in a way that makes sense for their particular situation."We've been developing technology that allows people to communicate through the Internet through their phones, through their laptops, through their computers, through social media and through their computers, without having to have any sort of human interaction at all with the person they're communicating with |
| PPLM | This essay discusses the relationship of gender, politics, and the media and technology industries. It is not about technology and technology-based technology systems.The technology industry is one of the most important technological technologies in history because of its potential to transform the way we live. Technology is technology, the technology industry is technology, and the technology industry is technology. It is technology and technology, technology and technology, technology The issue focused on the use of the new technology in the industry of virtual reality technology, which is currently being developed by the technology company VR Technologies.The issue focused on the use of the new technology in the industry of virtual technology, which is currently being developed by the technology firm VR Technologies.The debate over technology technologyThe discussion on technology in the technology industry was sparked by the controversy surrounding |
| FUDGE | This essay discusses Unix philosophy in depth. It recreates Unix in its most accessible form, offering original commentary on some of Unix's most influential program formulation and implementation principles or "methods" (think sophisticated real-world unix development such as Harper's List or H-UML). This edition of the book, written between April 1998 and January 1999 but originally published by Elisabeth<|endoftext|> The issue focused on Linux fragmentation, which experts point out as one of the biggest problems with proprietary content distribution platforms such as Ubuntu. While Apple has not been a regular user of Linux since it officially announced its support in 2010, both changes come after years of criticism that Google and other companies are largely helping developers build fan projects aimed at using Chrome's technology. Debian is emerging from such controversy<|endoftext|> |
| GeDi | This essay discusses two aspects of Apple Computer #39;s most recent hardware update. The first issue concerns the use of the XPC software technology. The second concerns making use of new capabilities built into Windows 2000, Internet Explorer and the like. Apple Computer has since acknowledged the use of an exploit in its OS/2 Personal Computer... it claims the exploit does #39;s not appear to The issue focused on technology for connecting RFID (radio frequency identification) systems with payments. Sony bought IBM and it began selling RFID cards. Its technology integrates IBM's operating system and integrated RFID reader. The card connects to the reader and scans a QR code. IBM version 4 of the IBM iQuote card prints on an RFID chip onto a paper label. IBM iQuote cards sell for between $129 |
| AA | This essay discusses " human impulses " . knots ( essays ) [ 240pp . ] 1908 : knots and other essays on electrical engineering and technology , published by the pratt institute press . hali [ original leaf print . ] 1911 : [ william henry mccook ] and charles darwin [ original leaf print . ] on an american treadmill , war - weary sailors exchange respect and friendship . The issue focused on cliches ; economic hardships ; genre - bending ideas ; a new cover design - bursting at the seams - incorporating bowie himself ' s photographs ( including an exclusive interview with bandmate andrew lloyd webber ) ; ugly and righteous head : ugly and righteous head songs ( which were meditations on suffering and degradation ) , with each verse both before and after returning to its original ; |
| MM | |
| Table 9: Examples of the generated sentences of different baseline approaches under the control of different topics. | |
Table 9: Examples of the generated sentences of different baseline approaches under the control of different topics.
The texts with underlines are the given generation prefixes. We highlight the words that are highly related to the target topic. The topic here is [Technology].
| Topic Control: [Sports] Prior to this Sunday's game with the Saints, the Saints had a record of 2-3-1 and had a 3:5 lead at halftime. In the second half, the Saints led 14-10, but they were down 21-13 to the New Orleans Saints, 27-27 in the third quarter. With the Saints trailing 24-24 and having the ball at#<|endoftext|> In this essay at least, we are not dealing with the players of the game, nor the managers of the club, but rather with the players themselves. We are not trying to prove anything, but to show the fact that there is not much that can be learned from players' performance statistics. We want our readers to be able to make an informed decision about their own football. If you#<|endoftext|> | |
|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| GRACE | Prior to this year, I had no idea the term 'poster' even existed.A poster is a type of poster with a printed or digital design that is attached to a vehicle.I have no idea what it was like to get my driver's licence, nor what it is like for the public to watch a sports event on television on my television.In my sport, there In this essay we look at the most influential women in football and discuss their achievements on and off the field."The game will not have to change in the NFL if NFL owners are not willing to make the league football football," NFL commissioner Roger Goodell said Tuesday during an interview on NFL Network's NFL Football pregame show. Goodell also said he believes football football football football football football football football football football NFL football |
| PPLM | Prior to this season, many athletes, athletes, athletes, sports fans, sports fans and sports fans were complaining about the "fitness gap" between white athletes and blacks. The fitness gap was created by many of the following factors:- Athletes of color are often less experienced- Athletes of color are often more likely to get injured- Athletes of color have In this essay, I'll discuss the three main players of the sport: the sport's elite athletes, the amateur athletes, and the professional athletes. These three groups share a common goal: achieving Olympic medals in the sport they compete in.The elite athletes In the sports of gymnastics, judo, soccer, swimming, track, and field, professional athletes are the only Olympic |
| FUDGE | Prior to this season, fans would sit out games or boycott the club if they felt every player was booing raucous music everyone should know about. Now players come together in a wait-for-hormones atmosphere where playing loud tunes doesn't send an offensive message because the constant pressure made showing up excites them. "You gotta keep everything positive as we're just men<|endoftext|> In this essay sports writers will summarize and analyze every game played during each week (ends May 24th 2013). Want to win a key enemy mission? No problem. Won't want three more hours of mindless galactic War Games? Good luck! KeyGame analysis umbrella concept assumes these post games affect 5 points total based on results out of each team. Regrets: The Gold Medal Implied Team stats<|endoftext|> |
| GeDi | Prior to this past weekend's N.H.L. hockey game in Buffalo, fans of the New England team had a chance to see some of the younger players make an appearance. And boy, did they show some of that athleticism. Playing with the young guns was a nice break from what #39;s been going on all season. The regular season is a nice break from the... er... madness. In this essay, I will analyze a public exploit in a lab environment, see the alerts generated by an intrusion detection system, and then do some packet analysis of the malicious binary in order to better understand it.As I understand it, this binary is a variant of the Shell Insert Bot (SHB) variant, which is used in... hellip; many malicious virus attacks today. |
| AA | Prior to this, the betrayer was collected by yuri petrushkin , hardcover , 1986 . ( hardcover [ ] ; publisher : muller - raythen , germany , folio ; publisher : muller - raythen , germany , hardcover , 1988 ) rabbinate samson , rabbinate of salonica and about a hundred others : a comparative study ( 1720 - 1740 ) , e . g . In this essay, friedlander describes two separate but related deaf and mute races , named mutants , and x - men ( or simply martians ) . behavior with regard to mutants varies from jealousy or affection ( love ) to hostility . phaethonus and marihuana look like brightly colored apes , while those he considers to be mutants also look like martians , ending up looking brusque . |
| MM | |
| Table 10: Examples of the generated sentences of different baseline approaches under the control of different topics. | |
Table 10: Examples of the generated sentences of different baseline approaches under the control of different topics.
The texts with underlines are the given generation prefixes. We highlight the words that are highly related to the target topic. The topic here is [Sports].
| Topic Control: [Business] To review, the company cut its dividend, and the stock plunged from an all time high in June. The stock was up about 2%. "It was an extraordinary period of market action," says David S. Daley, director of research at Wedbush Securities. "I think this is the best-selling company at a time where the S&P 500 is in its worst stretch in#<|endoftext|> More importantly, a households survey shows that $2 trillion in household savings has been lost since 2009 due to financial collapse. It's time to take a hard look at the current economic reality in the US and demand more reforms from Washington. In a recent op-ed for Forbes.com, the CEO of Goldman Sachs, Lloyd Blankfein, argues that the US needs to focus on the#<|endoftext|> | |
|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| GRACE | To review, this is one of the first companies to launch a smartphone with NFC technology, and its product is a smart business that can make financial services companies profitable businesses. The company offers businesses a business-to-business solution with an online banking service, and it can also offer financial services companies the financial services business services industry, which can make companies companies companies companies companies companies companies companies companies companies companies companies companies companies More importantly, if the companies that make the products you buy are able to pay their suppliers to use their products, the companies can pay off customers, too.In other words, you could see the companies paying their suppliers to do something. That's what's happened at several companies, from companies like Facebook to companies like Google.The companies are paying suppliers to use their products. This has created |
| PPLM | To review, there are many companies that are trying to provide an easy-to-use experience for the consumer, and there are a few companies in the market that provide this functionality. In addition, there's no single company that provides the best experience to the consumer in order to maintain the quality of their product.A great company to consider in this regard is Microsoft. They provide a great More importantly, companies have a responsibility to ensure the best of their products are being used in the most efficient and cost-efficient ways by their workers. This is especially true when the company's workforce includes many foreign workers, as the U.S. government has recently acknowledged. The U.S. Trade Representative (USTR) is currently working with the U.S. Trade Administration and other stakeholders |
| FUDGE | To review, Dollar Tree agreed to repurchase $4.2 billion dollar bonds, Orange Grocery agreed to complete appropriate write downs ca $2 billion dollar bond and Pipe Life repealing significant loan carrybacks the Dollar Tree Cash Stocks fell $900 dollars due to past finance mistakes was sold under advisement.. year. Check for updated information today as trading data are updated Company trades month Invest in Gram<|endoftext|> More importantly, investors should understand that capital inflows by FRBNY securities currently account for almost all of the continued rally in domestic home prices." Emily Category of The Wall Street Journal and Candy Chen contributed to this article.<|endoftext|> |
| GeDi | To review, there is one thing that should be avoided in this model; acquisitions. There are too many GMs over at Toys ""R Us"" and other retailers that have the idea that acquisitions can help drive higher profits. This is exactly what has happened in the toy business during the past few years, and it has significantly exaggerated growth. As my colleague the late Robert Bloch said: ""The acquisition shows that More importantly, each HP vendor has taken a second look at the risks and challenges involved in bringing 1,000 series servers to market, with a focus on ensuring that the operating systems supporting these servers meet the same security standards as vproducts shipped on the mainframes before 2005. The vendor's annual certification programs will also take a second look at how the required enhancements work. HPC vendors will be reviewing security policy, distribution |
| AA | To review, finance , distribution and marketing , confessions of a sentimental fool closed its doors . under goodspeed and company , columbia records ( cbs ) commissioned crosby , stills and nash ( " wonderland " / " enough wildness for an angel " ) , elaine paige ( " my fifth studio album , wonderland " ) and tom & jerry ( " pastime and pleasure " ) to produce six additional albums . More importantly, in size and in shape , samson was charmed by his surroundings . he and dixon had been best friends forever , and still there was hope for him here in arcadia . dixon had been responsible for samson . he had actually been the caretaker . additionally , he had also been the financial advisor to the j . farrell company ( which meant the property was open to further development ) until now . |
| MM | |
| Table 11: Examples of the generated sentences of different baseline approaches under the control of different topics. | |
Table 11: Examples of the generated sentences of different baseline approaches under the control of different topics.
The texts with underlines are the given generation prefixes. We highlight the words that are highly related to the target topic. The topic here is [Business].
| Topic Control: [World News] The connection up to the Palestinian Authority's security apparatus to the West Bank is particularly sensitive. Israeli security services, for instance, are believed to operate in the territories as well as in Israel. It would seem that the Palestinians would like it otherwise, but they are not the only ones who do not trust the Palestinian Authority security services or their agents. Israel has a long history of using the PA#<|endoftext|> The relationship between the Foreign Service and the intelligence community is one of deep concern for the Obama administration. But in a sign of the new reality facing intelligence professionals, the Obama administration is also considering a new proposal that would require them to register with the government and report on any foreign contacts they may have made with foreign agents - in other words, a new way to keep tabs on potential foreign threats. #<|endoftext|> | |
|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| GRACE | The connection was made in the first quarter of 2017.A senior Indian telecom minister on Friday said that India is looking into a proposal to create a national broadband network (NBN) for India today. In an interview to news agency PTI, Minister of State for Information and Broadcasting Manish Tewar today said the Indian telecom regulator has been informed of this proposal. He also said the The relationship between the United States and the world's media has been a tumultuous one over the past year. In the wake of the Trump presidential campaign, it has become increasingly clear that the press is being controlled. This has included a concerted effort by outlets to discredit and attack each other over the media, including Fox News and its reporting on the Trump campaign. This is in addition to the mainstream media's |
| PPLM | The connection is not just for media outlets and websites. There is also a third group who have access to the data, including news outlets and bloggers, as well as academics and journalists.A report this summer from the Pew Research Center found that the Internet and social media were having a major role in spreading stories about Russia.The report found that the Internet has helped to expose the news The relationship is also at the heart of news coverage and public debate over the issue.The report, written by the International Council for Science.The report was released by the Council on Foreign Relations in a report entitled, The Future of Science and the World. The report's author, a senior official at the council's think tank, was quoted as saying, "It is time we |
| FUDGE | The connection of terrorism with Sharia Law is nothing more than a parental form of anti-American propaganda. Apart from the fact that Islam declares up-dates and thus emulates State laws, this perversion without objections evolved as a propagandistic tool to distract from laika, which was specifically appropriated by Saddam Hussein's regime not during the 1991 Gulf War any longer. Sismah conducts no SC<|endoftext|> The relationship Iran must be led by Daddy's wishes. Mér-Ali Khomeini, the supreme leader of Iran and descendant of Mehmed Shah Pahlavi - the founder of Mullah Omar's legacy - declared you live '36 years' and 'earned your money'. A decade later he decreed that women will not take part in political or military posts or social activities unless<|endoftext|> |
| GeDi | The connection between West Germany and the United States is increasing, but the visit of President Bush will not be without controversy. Joachim Gauck visited Berlin last week with business and political leaders from the United States and Canada and met his British counterpart...We are up against more than simply a Cold War... Europe and the United States are facing very different threats. It is important to remember that each party is fully The relationship between nuclear and nuclear power has taken a turn for the better, with construction of new reactors and clean energy projects proliferating in many areas. Now it's heating up again, thanks to the discovery of a new heavy-water reactor beneath an ancient lake in southeastern Wisconsin that may power the #39;s main cities for another hundred years. #39; Plus: Montana regulators mull importing energy from |
| AA | The connection is powerful and intense but not long lasting . you demand an explanation , you laugh , or you cry . partial or complete disclosure , in turn , may avert an emotional timbre . " ignazio , why not make recent media reports available , despite your having never read news , without admitting that you may have experienced angst in the dark , brightly lit car an older woman drove ? The relationship aspect of the episode was criticized as " excessively emotional " . stefanie zus - suzanne zus ( stella adler ) an interviewer reflects on her belief that suzanne should either enjoy or not be enjoying any part of her life : who is this person ? a woman who breaks a vow made by her husband ( jack mcclain ) on an emotional level because of breaking her own vow ? |
| MM | |
| Table 12: Examples of the generated sentences of different baseline approaches under the control of different topics. | |
Table 12: Examples of the generated sentences of different baseline approaches under the control of different topics.
The texts with underlines are the given generation prefixes. We highlight the words that are highly related to the target topic. The topic here is [World News].
| Training Data Domain: Movie Reviews The lake scene in "The Big One", which is a real treat. It's like the first "The Big Lebowski", but much better in every way. The story of how an American man and a Japanese man, living on opposite sides of the world, are reunited is very touching and very funny. The two actors are very talented and I am very#<|endoftext|> The horse is a great actor and actor is one of the best parts. The rest is not to bad of a film. This movie is one of my favorites for the story, the characters, the actors and the plot. The story was good and the horse was really good. But I have to say that the film was a big let down when I had to#<|endoftext|> The pizza delivery guy is the only one that can save you from the zombies, if not his brother.If the movie is good it'll be a good film. If not it's a waste of time and money. I've seen many bad zombie flicks and I'm glad I didn't have to spend the money to watch it.The acting is decent at#<|endoftext|> The potato is an interesting concept in that it doesn't require a great deal of effort to figure out how to grow them. The movie has a good idea of what to expect from this potato and its uses but there are a good number of technical difficulties that make growing potatoes difficult. I've seen some documentaries about potatoes but they tend to gloss over what the potato actually#<|endoftext|> | |
|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| [Positive] | The lake scene in the beginning of this film was just awful. It is just a bad movie with bad acting, bad direction and bad acting by the actors! I think they should have cut this movie because it is just too much. The lake scene should have been shot underwater instead of on the beach in the first place! I really like this film, but it#<|endoftext|> The horse in this movie looks like a very small one, it is a horse that looks like it could easily have been put into a toy box, it does not move at all. The movie is really boring, it is not very funny at all, the acting is horrible. The horse looks more like it was made by a 4 year old, and is a#<|endoftext|> The pizza scene is just a horrible, awful joke. There is no substance in it and it has zero plot. It's all just bad acting and a bad script. There are many scenes where the actors look like they are about to faint. I don't understand why they bothered making this movie at all.They should have just stuck with a movie about a guy#<|endoftext|> The potato chips in my mouth were really good, but the story is so predictable that it really hurts. The movie is so predictable that if the characters are not in a relationship, it is hard to care for any of them. It has the worst acting I have ever seen. The main character who is supposed to be a lesbian and is a gay man in the#<|endoftext|> |
| [Negative] | |
| Table 13: The randomly sampled generation results of GPT2-F on sentiment-controlled generation. The texts with | |
Table 13: The randomly sampled generation results of GPT2-F on sentiment-controlled generation. The texts with underlines are the given generation prefixes. The texts in blue indicate the domain of GPT2-F's training corpus.
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
✓ A1. Did you describe the limitations of your work?
G
✗ A2. Did you discuss any potential risks of your work?
The ethical impact of our research is the same as other text generation papers, whose ethical impact is widely discussed.
✓ A3. Do the abstract and introduction summarize the paper's main claims?
1
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✓ **Did You Use Or Create Scientific Artifacts?** 3
✓ B1. Did you cite the creators of artifacts you used?
4
✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
E
✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
E
✓ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
E
✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
F
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
E
## C ✓ **Did You Run Computational Experiments?** 4
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
E
The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance.
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
E
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
E
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
E
D ✓ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
4
✓ D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
E
✓ D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? E
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
Not applicable. Left blank.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
Not applicable. Left blank.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
Not applicable. Left blank. |
bernard-mickus-2023-many | So many design choices: Improving and interpreting neural agent communication in signaling games | https://aclanthology.org/2023.findings-acl.531 | Emergent language games are experimental protocols designed to model how communication may arise among a group of agents. In this paper, we focus on how to improve performances of neural agents playing a signaling game: a sender is exposed to an image and generates a sequence of symbols that is transmitted to a receiver, which uses it to distinguish between two images, one that is semantically related to the original image, and one that is not. We consider multiple design choices, such as pretraining the visual components of the agents, introducing regularization terms, how to sample training items from the dataset, and we study how these different choices impact the behavior and performances of the agents. To that end, we introduce a number of automated metrics to measure the properties of the emergent language. We find that some implementation choices are always beneficial, and that the information that is conveyed by the agents{'} messages is shaped not only by the game, but also by the overall design of the agents as well as seemingly unrelated implementation choices. | # So Many Design Choices: Improving And Interpreting Neural Agent Communication In Signaling Games
Timothée Bernard LLF, Université Paris Cité, France [email protected] Timothee Mickus Helsinki University, Finland [email protected]
## Abstract
Emergent language games are experimental protocols designed to model how communication may arise among a group of agents. In this paper, we focus on how to improve performances of neural agents playing a signaling game: a *sender* is exposed to an image and generates a sequence of symbols that is transmitted to a *receiver*, which uses it to distinguish between two images, one that is semantically related to the original image, and one that is not.
We consider multiple design choices, such as pretraining the visual components of the agents, introducing regularization terms, how to sample training items from the dataset, and we study how these different choices impact the behavior and performances of the agents. To that end, we introduce a number of automatic metrics to measure the properties of the emergent languages. We find that some implementation choices are always beneficial, and that the information that is conveyed by the agents' messages is shaped not only by the game, but also by the overall design of the agents as well as seemingly unrelated implementation choices.
## 1 Introduction
Emergent language games are experimental protocols designed to model how communication may arise among a group of agents. For the linguist, they can serve as models of how language might have emerged in humans (Nowak et al., 1999; Kirby, 2002; Kirby et al., 2008); for the AI or NLP scientist, they provide an interesting and challenging test-bed for cooperation and communication across distinct neural agents using symbolic channels (Havrylov and Titov, 2017; Zhang et al., 2021).
Our focus in this paper is on signaling games
(Lewis, 1969). More precisely, we adopt a setting in which a *sender* is exposed to some data and produces a message that is transmitted to a *receiver*.
The receiver has then to answer a question related to the data that the sender was exposed to. Both agents share the common goal of the receiver answering correctly to the question. This common goal encourages the sender to encode relevant information about the input data in its message and in such a way that the receiver can decode it. In the present paper, we show the sender an image, the *original image*. The receiver is shown a pair of images: a *target image*, which is semantically related to the original image, and one unrelated distractor. These images all depict a solid on a uniform background; the shape, the size, the position and the color of this object are the same for the original and the target image, while at least one of these features is different for the distractor. Based on the sender's message, the receiver has to guess which image of the pair is the target. We allow the senders to compose sequences of arbitrary symbols of variable length.
One of the long-term goals of the study of such language games is to understand under which conditions emergent communication protocols display language-like features. In particular, compositionality has been a major concern ever since Hockett
(1960) and remains so in today's NLP research landscape (Baroni, 2019). In order to observe complex, structured communication protocols, we need to provide the agents with an environment complex enough for such a characteristic to develop. This adds two requirements on the agents' stimuli: the images we show them will need to be structured, and ought to not be discriminated through low-level features (Bouchacourt and Baroni, 2018).
When designing and experimenting with such a signaling game, a number of design choices are left open—ranging from the exact objective optimized by the agents, to the selection of training examples and to whether agents have prior information about their environment. In this paper, we exhaustively study how different choices often encountered in the relevant literature interact, and which combinations of these, if any, yield the most stable, efficient 8399 communication protocols. In addition, we use training data that theoretically allow the agents to ignore one aspect of the images (e.g., the color of the object shown, or its size), so as to test whether the agents do ignore one feature and how implementation choices impact this behavior. To that end, we define four automatic metrics to probe syntactic and semantic aspects of their communication protocols; we believe them to be useful to future emergent communication studies, as the current agreed upon tool set for studying artificial emergent languages remains fairly narrow. These metrics help us assess what the emergent languages have in common and how they differ. We find that language-like characteristics can be driven by seemingly unrelated factors, and that ensuring the emergence of a reliable communication protocol that generalizes to held-out examples requires a careful consideration of how to implement the language game. The main contributions of this work are thus twofold:
we report an exhaustive review of implementation choices, and we provide novel automated metrics to study the semantics of emergent communication protocols.
We provide an overview of related works in Section 2. Dataset and game details are presented in Section 3. We describe our implementation variants in Section 4 and our automatic metrics in Section 5.
We discuss our results in Section 6.
## 2 Related Work
The signaling game we study in this paper is derived of Lewis' (1969) work; more specifically, we build upon the neural network formulation of Lazaridou et al. (2018) using a symbolic channel
(Sukhbaatar et al., 2016; Havrylov and Titov, 2017; Lazaridou et al., 2017). Other formulations that we leave for future study involve multi-turn communication (Jorge et al., 2016; Evtimova et al., 2018, a.o.), populations and generations of agents (e.g.,
Kirby et al., 2014; Foerster et al., 2016; Ren et al.,
2020; Chaabouni et al., 2022) or non-symbolic communication channels (e.g., Mihai and Hare, 2021).
There is a large prior body of research that investigate how specific implementation choices can impact the characteristics of the emergent communication protocol. For instance, Liang et al. (2020)
advocate in favor of competition as an environmental pressure for learning composition by only rewarding the fastest of two teams in a multi-turn signaling game. Rita et al. (2022) mathematically demonstrate that the typical losses used to implement Lewis games can be broken down in a information term and a co-adaptation term, and that limiting overfitting on the latter term experimentally leads to more compositional and generalizable protocols. Mu and Goodman (2021) discuss generalization, and how to induce it by modifying the signaling game to involve sets of targets, rather than unique targets per episode. Patel et al. (2021)
study a navigation task to show how to foster interpretability, i.e., communication protocols that are grounded in agents' perceptions of their environment. Rita et al. (2020) discuss how encouraging
"laziness" in the sender and "impatience" in the receiver shapes the messages so as to exhibit Zipfian patterns. Chaabouni et al. (2019b) use handcrafted languages to study word-order preferences of LSTM-based agents. Kim and Oh (2021) discuss the importance of dataset size, game difficulty and agent population sizes. Bouchacourt and Baroni
(2018) study how the visual components of signaling game agents can undermine the naturalness of their communication. Korbak et al. (2019) propose a specific pretraining regimen to foster compositionality.
Another relevant section of the literature discusses automatic metrics designed to capture specific language-like aspects of the emergent protocol. Chief of these is the meaning-form correlation (a.k.a. topographic similarity) of Brighton and Kirby (2006), which quantifies compositionality by measuring whether changes in form are commensurate with changes in meaning (though other metrics exist, e.g., Andreas, 2019). Chaabouni et al. (2020)
argue that this metric does not correlate with generalization capabilities, and that it is thus unsuitable for studying compositionality. Mickus et al. (2020)
show how it is impacted by other language-like features. Following these remarks, we focus on novel metrics and defer discussions of topographic similarity to Appendix B.1.
## 3 Experimental Setup
Dataset. We construct a dataset of synthetic images depicting solids on gray backgrounds, using vpython.
1 They exhibit a combination of five *features*, each of which have two possible *values*: horizontal position (left, right), vertical position
(top, bottom), object type (cube, sphere), object 1https://pypi.org/project/vpython/
color (red, blue), object size (small, large). We generate 1000 images for each of the 2 5 possible combinations of feature values (or *categories*).
We divide the dataset in two splits: a *training* split and an *evaluation split*.
2 This partition is performed as follows. First, one category is selected as the *seed category*. Then, *base categories* are the 16 categories that differ from the seed category on exactly 0, 2 or 4 features. *Generalization categories* are the 16 remaining categories, that differ from the seed category on exactly 1, 3 or 5 features. Base category images are then further divided 80%–20%
between training and evaluation splits. All generalization category images are assigned to the evaluation split. The training split therefore contains only images from base categories while the evaluation split contains both images from base categories and images from generalization categories.
This partition of categories entails that that during training, all training instances involve image categories that differ by at least two features.
Hence, agents may entirely disregard one feature
(e.g., color) and still manage to perfectly discriminate all training instances. Only during evaluation are they confronted with pairs of categories that differ by a single feature: namely, when the original image is taken from a base category and the distractor image from a generalization one (or vice versa).
Game & model architecture. All of our models are comprised of two agents: a *sender* and a receiver. They are trained to solve a *Lewis signaling game* with a single communication turn. The sender is first shown an image I and produces a message: a sequence of up to 10 symbols from an alphabet of size 16. The receiver is then provided as input a target image I′ of the same category as I, a distractor image J of a different category, and the message, and has to identify I′as the intended target. This game is illustrated in Figure 1. The original image I differs from the target image I′so as to deter the sender from describing low-level features of the images (e.g., specific pixel brightness, Bouchacourt and Baroni, 2018).
Both agents contain an image encoder, implemented as a convolution stack, and an LSTM to process symbols. The sender's LSTM is primed with the encoded original image representation, and then generates the message. The receiver uses its LSTM to convert the message into a vector; it then 2We do one such split per model trained.
![2_image_0.png](2_image_0.png)
computes the dot product between the message encoding and each of the target and distractor images encoding; we infer a probability distribution over the image pair using a softmax function.
Models are trained with REINFORCE (Williams, 1992); the loss for an episode is defined as:
$${\mathcal{L}}=-\sum_{t}r_{t}\cdot\log p(a_{t})\qquad\qquad(1)$$
where atis the t th action taken in the episode, p(at)
its probability, and rtits associated reward. Each episode contains one *generation* action per symbol in the message, and one *classification* action. All actions of an episode are associated with the same reward rt = r. By default, we set r to 1 when the receiver successfully retrieves the target image, and 0 otherwise.
## 4 Implementation Choices
Having described our basic setup above, we now list the different implementation variants that we study in the present paper. We refer to these implementation variants using a vector notation; for a binary trait Φ, a model for which Φ is implemented will be denoted as ⟨. . . , +Φ*, . . . ,*⟩, conversely, its absence would be signaled with ⟨. . . , −Φ*, . . . ,*⟩.
Pretraining of the visual component. In order to ensure that the recurrent message encoders and decoders receive coherent, usable representations of the images, for some variants, we *pretrain* the image encoders convolutions. In the remainder of the text, we denote as ⟨+*P, . . .*⟩ models that have undergone pretraining, and ⟨−*P, . . .*⟩ models that did not. We consider three pretraining objectives:
an auto-encoding task and two classification tasks.
The *auto-encoding* pretraining consists in training the convolution stack along with an additional deconvolution stack to reproduce images provided as input, using a mean squared error loss:
$${\mathcal{L}}_{\mathrm{AE}}={\frac{1}{3h w}}\sum_{i=1}^{h}\sum_{j=1}^{w}\sum_{c=1}^{3}\left(\mathbf{Y}_{i j c}-{\hat{\mathbf{Y}}}_{i j c}\right)^{2}\quad(2)$$
where Yˆ is the reconstruction of the RGB image Y
of height h and width w. Models pretrained with this objective are denoted as ⟨+PAE*, . . .*⟩.
The first classification objective, which we dub
"*category-wise*", corresponds to predicting which of the 2 5categories the input image corresponds to,3and is learned using a cross-entropy loss:
$$\mathcal{L}_{\text{CW}}=-\sum_{i=1}^{2^{5}}1_{\{i=y\}}\log\hat{\mathbf{y}}_{i}\tag{3}$$ where $\hat{\mathbf{y}}$ is the vector $\left(p(y=1|I),\ldots,p(y=2^{5}|I)\right)$ corresponding to the $\mathbf{y}$-function $\mathbf{y}$ is the vector $\mathbf{y}$.
corresponding to the classifier's probability distribution over possible labels. Models pretrained with this objective are denoted as ⟨+PCW*, . . .*⟩.
The second classification objective, called
"*feature-wise*", consists in predicting each of the 5 feature values of the input image—i.e., an agreggate of five binary classification sub-tasks. The loss function for this last objective LFW is thus:
$$\mathcal{L}_{\mathrm{FW}}=-\sum_{f=1}^{5}\sum_{i=1}^{2}\mathbf{1}_{\{i=\mathbf{y}_{f}\}}\log\hat{\mathbf{Y}}_{f i}\qquad\text{(4)}$$
where Yˆ is the structured prediction, such that Yˆf i is the probability assigned for the i th possible value of the f th feature, and y = (y1*, . . . , y*f ) is the vector of target feature values for this example.
We denote models pretrained with this objective as
⟨+PFW*, . . .*⟩.
We also consider whether or not to *freeze* the parameters of the image encoder convolution stacks.
Assuming the pretraining was successful, the resulting image vector representations should contain all the information necessary for models to succeed. In this case, freezing convolutions reduces the number of learnable parameters, which may help the optimization. Pretrained models whose convolution stacks are frozen are denoted as ⟨+P, +*F, . . .*⟩,
whereas models whose convolutions (pretrained or not) are updated are denoted as ⟨. . . , −*F, . . .*⟩.
3Because the training split is used during pretraining, only the 2 4base categories are in fact seen at this stage.
Distractor sampling. By default, during training, we first select the original/target category ct uniformly at random, before selecting the distractor category cd uniformly among remaining categories.
A second strategy that we envision to improve performance consists in *adversarially sampling* cd instead. More precisely, when we evaluate the agents at the end of each training epoch, we derive countbased estimates of the probability P (fail | (ct, cd))
of communication failure for each pairs (ct, cd).
At training time, cd is sampled with a probability proportional to P (fail | (ct, cd)). At evaluation time, cd is still sampled uniformly. We denote the use of this adversarial sampling during training as
⟨. . . , +*A, . . .*⟩, and its absence as ⟨. . . , −*A, . . .*⟩.
Rewards and regularization. One drawback of the pretraining methods and the adversarial sampling alike is that most of them (i.e., all except the auto-encoder method) require information which might not be available in other datasets, namely labels pertaining to the semantics of the images.
One possible technique not subject to this concern consists in adding an *entropy* term to the REIN-FORCE loss, as is sometimes done in emergent communication (e.g., Lazaridou et al., 2018; Chaabouni et al., 2019a). This entropy loss is defined as:
$${\mathcal{L}}_{H}=-\beta_{S}\sum_{t}H_{S,t}-\beta_{R}H_{R}\qquad\quad(5)$$
where βS and βR are two scalar coefficients controlling the strength of this regularization, HS,t is the entropy of the probability distribution computed by the sender and used to select the t th symbol of the message, and HR is the entropy of the probability distribution computed by the receiver.
The scalar coefficients are set to βS = 10−2and βR = 10−3.
4 The use of this entropy term is denoted with ⟨. . . , +*H, . . .*⟩.
Another technique consists in redefining the rewards system. Instead of associating each action of an episode with a binary reward r ∈ {0, 1}, the reward is defined as the probability that the receiver assigns to the target image, i.e., how confident it is in retrieving the target. The use of this confidence-based reward system is denoted with
⟨. . . , +*C, . . .*⟩.
The last technique that we study consists in deducting the recent average rewards as a *baseline* 4Optimal settings in preliminary experiments.
term b (Sutton and Barto, 2018, §13):
, $13):.
$${\mathcal{L}}=-(r-b)\sum_{t}\log p(a_{t})$$
where b is the average of r over the last 1000 batches. The use of this baseline term is denoted with ⟨*. . . ,* +B⟩.
While confidence-based rewards and baseline can technically be applied jointly, doing so proves to be detrimental. None of the runs for models implemented as ⟨*. . . ,* +C, +B⟩ yielded a successful communication protocol. We conjecture that this is due to the probability mass assigned to the target image being very close to the average reward
(0.5) at the beginning of the training process, which leads to losses and gradient updates close to 0. In what follows, the use of these two techniques are then considered mutually exclusive.
Comparison with previous work. In our experiments, we exhaustively evaluate various design choices, which cover many architectures similar to those studied in earlier works. For instance, Lazaridou et al. (2018) would correspond to a ⟨−P, −F, −A, −H, −E, −B⟩ model, Bouchacourt and Baroni (2018) adopt a model similar to a
⟨+Pcw, +F, −A, −H − E, −B⟩. In what follows, we do not focus on how specific earlier works fare, but instead attempt to develop a more global picture.
## 5 Automatic Metrics
Communication efficiency. We primarily measure the performance of a model by its *communication efficiency* (c.e.), which we define as the average probability assigned by the model to the target image over a large number of evaluation instances.5 Evaluation instances involve all categories seen during training with additional categories as well
(see Section 3). To assess how the agents handle unseen combination of features at a finer level, we 5Communication efficiency differs from *accuracy*, defined as the proportion of evaluation instances for which the target image is assigned a higher probability than the distractor. Accuracy can be maximal (100%) even with a very low communication efficiency (50 + ϵ%). Low communication efficiency is a sign of sub-optimal performance, as an effective communication system should describe the target category unambiguously, i.e., the agents should solve the game with a high degree of confidence. In practice, we find these two values to be highly correlated in our experiments, suggesting our models are well calibrated (Guo et al., 2017).
$$(6)$$
define base-c.e., *gen.-c.e.* and *mixed-c.e.* by restricting the two selected categories to two base categories, two generalization categories, and one of each respectively.
All of our metrics are generalized from single models to sets of models by computing their average across models (i) using, for each model, the value obtained during the evaluation phase in which it reaches its highest communication efficiency and
(ii) discarding any model which never reaches a communication efficiency of 60% or above at any point of the training process.6 Any model that does reach a communication efficiency of 60% or above is said to be "successful". The *convergence ratio* (cvg.) of a set of models is the proportion of successful models in this set.
Abstractness. We task receivers with recognizing not the original image I shown to senders, but another target I′ of the same category. This is meant to encourage senders to describe not so much the input image as its category. We evaluate this aspect using the *abstractness* of a model:
$$\mathrm{abstractness}=2\cdot p_{R}(I^{\prime}|I,I^{\prime},m)$$
′|I, I′, m) (7)
where pR(J) is the probability assigned by the receiver to the image J, I and I′are the original and target images, and m is the sender's message for the input I. Abstractness is 0 if all the mass is on the original image, and 1 when it is distributed evenly.7 Scrambling resistance. To measure how sensitive to symbol ordering receivers are, we define the *scrambling resistance* of a model by comparing the probability assigned to the target image by the receiver when provided with the sender's message m, and when provided with a randomly permuted version m′ of it. More precisely, given a message m, we compute:
$$m=(a_{1},\ldots,a_{n})$$ $$m^{\prime}=\left(a_{\sigma(1)},\ldots,a_{\sigma(n)}\right)$$ $$\mathrm{sr}=\frac{\min\left(p_{R}(m),\ p_{R}(m^{\prime})\right)}{p_{R}(m)}\tag{8}$$
where atis the t th symbol of the message produced by the sender, pR(x) is the probability of the receiver selecting the target image given the message 6Such models are discarded because we are interested in the properties of emergent languages, i.e., communication protocols that are reliably used to convey information.
7As expected, we do not observe any value significantly larger than 1.
| Implementation | cvg. | c.e. | | |
|------------------|---------|----------|-------|-------|
| ⟨−P, | . . . , | −C, −B ⟩ | 0.800 | 0.950 |
| ⟨+P, −F, | . . . , | −C, −B ⟩ | 0.883 | 0.954 |
| ⟨+P, +F, | . . . , | −C, −B ⟩ | 1.000 | 0.922 |
| ⟨−P, | . . . , | +C, −B ⟩ | 0.875 | 0.954 |
| ⟨+P, −F, | . . . , | +C, −B ⟩ | 0.958 | 0.961 |
| ⟨+P, +F, | . . . , | +C, −B ⟩ | 1.000 | 0.926 |
| ⟨−P, | . . . , | −C, +B ⟩ | 0.925 | 0.967 |
| ⟨+P, −F, | . . . , | −C, +B ⟩ | 1.000 | 0.971 |
| ⟨+P, +F, | . . . , | −C, +B ⟩ | 1.000 | 0.936 |
x, and σ is a random permutation of the interval J1, nK. The scrambling resistance of a model is an average of sr over a large number of evaluation instances.
Semantic probes. In order to determine which features of the original/target category are described in a sender's message, we implement a probing method based on decision trees. We convert any message m into a bag-of-symbols vector u ∈ N
16, such that uiis the number of occurrences of symbol i in m. Given a set of messages each associated with its corresponding original/target category, for each of the five features, we can train a decision tree to predict the values of the feature based on the bag-of-symbols representation of the messages. While the messages may very well encode information under a form that cannot be decoded by such a simple system, high accuracy from a decision tree is proof that the corresponding feature is consistently described in the messages.8
## 6 Results 6.1 Global Performance
Table 1 shows the performance of all of the runs we have performed, aggregated based on the reward system they use (binary rewards, **confidencebased reward**, or binary rewards with a **baseline**
term), on whether the visual convolution stacks are **pretrained** (without differentiating between the various pretraining objectives) and, if so, on whether these convolution stacks are **frozen** during training. We observe that the most impactful implementation choice is whether or not to use a baseline
| Implementation | cvg. | c.e. | | | | |
|------------------------------------|---------|--------|---------|----|-------|-------|
| ⟨ | . . . , | −H, | . . . , | ⟩ | 0.929 | 0.941 |
| ⟨ | . . . , | +H, | . . . , | ⟩ | 0.988 | 0.952 |
| ⟨. . . , −F , . . . , −H, −C, +B ⟩ | 1.000 | 0.970 | | | | |
| ⟨. . . , −F , . . . , +H, −C, +B ⟩ | 0.963 | 0.970 | | | | |
term (⟨*. . . ,* −C, +B⟩). Improvements with +B are much more consistent and pronounced than models using confidence-based rewards (⟨*. . . ,* +C, −B⟩)
or pretraining (⟨+*P, . . .*⟩).
On its own, pretraining brings some degree of improvement comparable to what we see in models implemented as ⟨*. . . ,* +C, −B⟩. Setups involving freezing pretrained convolution stacks
(⟨+P, +*F, . . .*⟩) reach a convergence ratio of 1 at the expense of a downgrade in communication efficiency. Moreover, pretraining without freezing weights (⟨+P, −*F, . . .*⟩), while not detrimental, does not improve performances unless used jointly with either +C or +B. Optimal performances are attested when using pretraining with a baseline term (⟨+P, −*F, . . . ,* −C, +B⟩).
Table 2 shows the performance (top) of all of the runs that we have performed and (bottom) of all runs with the baseline term and without frozen convolution stacks, aggregated based on whether they are trained with the **entropy penalty**. We observe that, while in general using this regularization term is an efficient way to boost both the convergence ratio and the communication efficiency of converging runs, this positive effect does not persist with
⟨. . . , −*F, . . . ,* −C, +B⟩ runs (see below for more information about the drop in cvg. in this case).
Because of their high performance, we focus on models implemented as ⟨. . . , −*F, . . . ,* −C, +B⟩
in the remainder of this discussion. A communication efficiency around 97% might intuitively seem an indicator of excellent performance, but remark that, should the sender completely ignore one semantic feature of the images, then the communication efficiency could still rise up to 30.5 31 (≈ 98.4%):
this value is obtained when, among the 31 possible categories for the distractor, 30 lead to perfect retrieval of the target image and 1 leads to chance retrieval. As such, none of the performances seen so far guarantees that all features are encoded in the messages.
| Implementation | cvg. | c.e. |
|--------------------------------|--------|--------|
| ⟨−P, −F , −A, . . . , −C, +B ⟩ | 1.000 | 0.958 |
| ⟨−P, −F , +A, . . . , −C, +B ⟩ | 0.850 | 0.978 |
| ⟨+P, −F , −A, . . . , −C, +B ⟩ | 1.000 | 0.959 |
| ⟨+P, −F , +A, . . . , −C, +B ⟩ | 1.000 | 0.983 |
| ⟨ −P , −F , +A, −H, −C, +B ⟩ | 1.000 | 0.981 |
| ⟨ −P , −F , +A, +H, −C, +B ⟩ | 0.700 | 0.974 |
Table 3: Effects of adversarial sampling. The two last lines are a decomposition of the second one.
| Implementation | cvg. | c.e. | |
|----------------------------------|---------------------------|--------|-------|
| ⟨ −P, | −F , +A, . . . , −C, +B ⟩ | 0.859 | 0.978 |
| ⟨+PAE, −F , +A, . . . , −C, +B ⟩ | 1.000 | 0.981 | |
| ⟨+PCW, −F , +A, . . . , −C, +B ⟩ | 1.000 | 0.985 | |
| ⟨+PFW, −F , +A, . . . , −C, +B ⟩ | 1.000 | 0.983 | |
Table 4: Effects of pretraining objectives Table 3 shows the performance of the runs aggregated based on the **sampling strategy** for distractors and the use of **pretraining** for the visual convolution stacks (still without differentiating between the various pretraining objectives). We see that, compared to uniform sampling, the adversarial sampling strategy systematically and substantially increases the communication efficiency.
Nonetheless, the adversarial strategy can induce a lower convergence ratio when the convolution stacks are not pretrained and an entropy penalty is added, suggesting that this sampling strategy and the entropy penalty used jointly make training too challenging for agents with randomly initialized convolution stacks. In all, the higher performances observed with the adversarial sampling strategy lead us to narrow down our discussion once more, this time focusing on models implemented as ⟨. . . , −F, +*A, . . . ,* −C, +B⟩.
Finally, we focus on the effect of the different **pretraining** objectives in Table 4. Though all three pretraining objectives are helpful, we observe the highest improvement in communication efficiency with the two classification objectives.
Among them, the category-wise objective outperforms the feature-wise objective. While the featurewise objective provides feature-level guidance, the category-wise pretraining regimen directly trains the convolution stacks to tease apart images of different categories, which is what the signaling game requires of them. We hypothesize that the featurewise objective might be superior when the category space is sufficiently larger and more complex.
## 6.2 Generalization And Language Analysis
Having looked at how to foster reliability and high performance, we now turn to how to a study of how well the models generalize to unseen items and whether their messages display language-like characteristics—as the literature often remarks that such characteristics should not be taken for granted
(Mu and Goodman, 2021; Patel et al., 2021).
Abstractness. Abstractness is systematically close to 1. Over all 805 successful runs, it averages to 0.992 (σ = ±0.015). On the 77 successful ⟨. . . , −F, +*A, . . . ,* −C, −B⟩ runs, it reaches 0.996 (σ = ±0.008), with no statistically significant difference between the four pretraining options. In all, using distinct images as original and target inputs does induce the senders to describe categories rather than specific images.
However, when grouping runs implemented as
⟨. . . ,+A*, . . . ,*−C,−B⟩ depending on their pretraining and convolution freezing, we find one group of outliers: ⟨PAE, +F,+A,*. . .* ,−C,−B⟩ runs have an abstractness of 0.958. This value is statistically lower than for each of the six other groups (as shown by a Pitman test; p < 10−6in all cases). Convolution stacks pretrained as auto-encoders learn to capture the specificity of each image, which apparently permeates the emergent languages if subsequently frozen.
We also observe an opposite—albeit weakereffect with the category-wise pretraining objective. ⟨PCW,+F,+A*,. . . ,*−C,−B⟩ runs have an abstractness of 0.998, higher than the 0.994 of
⟨PCW,−F,+A*,. . . ,*−C,−B⟩ runs. The difference (p <
0.04, Pitman test) indicates that in such cases, finetuning the convolution stacks leads the agents to include image-specific information in their messages.
Scrambling resistance. Scrambling resistance yields high values, ranging from 0.892 when using auto-encoder pretraining to 0.915 when using feature-wise pretraining.9In other words, the receiver is able to recognize a category based on a randomly permuted message with a high degree of accuracy. This property, however, does not entail that the sender produces symbols in a (near) random order. Indeed, even English, which requires a 9The difference between these two pretraining regimens is statistically significant: p < 10−3(Pitman permutation test).
rather strict word-order, arguably has a high scrambling resistance: it is natural to associate the scrambled sentence "cube a there blue is" with a picture of a blue cube rather than that of a blue sphere
(or a red cube, etc.). High scrambling resistance points towards the possibility that each symbol is loaded with an intrinsic meaning, the interpretation of which is fairly independent of its position—in contrast with, e.g., the digits in positional numeral systems (which are compositional systems with low scrambling resistance).
Generalization. As we saw in Section 6.1, the highest communication efficiency we observe, of 0.985, is obtained with the
⟨PCW, −F, +*A, . . . ,* −C, −B⟩ implementation. Let us recall that this means that when the source/target category and the distractor category are selected from the whole set of categories, the receiver puts on average 0.985 of the probability mass of its choice distribution on the target image.
As for the base-c.e. (when both categories are base categories, i.e., not seen during training) of this implementation, its value is near perfect, above 0.999. Its gen-c.e. (when both categories are generalization categories), is also very high, at 0.997. These different values indicate that the models are able to generalize very well not only to unseen images but also to new categories (i.e.,
unseen combinations of features).
For this same implementation, the mixed-c.e.
(when only one of the categories is a base category)
drops to 0.971.
10 Recall that this is the only case where target and distractor may differ by a single feature. Even if agents disregard one feature, their mixed-c.e. can still theoretically reach up to 14.5 15
(≈ 96.7%). Hence, ⟨PFW, −F, +*A, . . . ,* −C, −B⟩
runs communicate about all features, despite it not being required by the training objective. Similarly, ⟨PCW, −F, +*A, . . . ,* −C, −B⟩ runs obtain a mixed-c.e. of 0.967 (almost equal to the threshold) and ⟨−P, −F, +*A, . . . ,* −C, −B⟩ runs reach a mixed-c.e. of 0.964 (slightly below).
Semantic content Scrambling resistance scores highlight that the semantic contents of symbols are mostly position-insensitive. This entails that our decision-tree based probes, which rely on bagof-symbols representations of the messages, are relevant. Table 5 shows how shape is much less
| Implementation | color | shape | |
|------------------------------|-----------------------|---------|-------|
| ⟨ −P, | −F , +A, −H, −C, −B ⟩ | 0.992 | 0.534 |
| ⟨+PAE, −F , +A, −H, −C, −B ⟩ | 0.962 | 0.558 | |
| ⟨+PCW, −F , +A, −H, −C, −B ⟩ | 0.999 | 0.532 | |
| ⟨+PFW, −F , +A, −H, −C, −B ⟩ | 0.993 | 0.537 | |
| ⟨ −P, | −F , +A, +H, −C, −B ⟩ | 0.972 | 0.595 |
| ⟨+PAE, −F , +A, +H, −C, −B ⟩ | 0.988 | 0.656 | |
| ⟨+PCW, −F , +A, +H, −C, −B ⟩ | 1.000 | 0.617 | |
| ⟨+PFW, −F , +A, +H, −C, −B ⟩ | 0.999 | 0.598 | |
Table 5: Decision tree classifiers: feature prediction accuracy (color and shape).
accurately conveyed than other image features.11 This indicates that shape is harder to identify than color, size or position and that since the training process does not incentivize the agents to describe all features, they systematically focus on the four easiest.12 Interestingly, applying an entropy penalty during training strongly drives the agents to communicate about the shape. Moreover, models pretrained with the auto-encoder objective lead to higher values than any others.13 The difference in shape recognition between this group and the others is always significant (p < 10−2).
## 7 Conclusions
Two broad conclusions emerge from our experiments. Firstly, we saw that not all implementations perform equally well. We demonstrated how the use of a baseline term or an adversarial input sampling mechanism were necessary to reach high performance. While pretraining convolution stacks can prove beneficial in limited circumstances, not fine-tuning them afterwards may prove to be highly detrimental. In all, a well designed implementation can learn reliably and generalize to new images and combinations of features.
Secondly, we have made a case for the need of fine-grained methods when analyzing the emergent communication protocol. We have introduced an 11The three remaining features being very much in line with color, we omit them in this table for brevity and clarity. See the full results in Table 7 in Appendix B.2.
12An unlikely alternative is that they communicate the shape of the object in a complex way that is mostly inaccessible to our decision trees.
13The values are not shown here, but freezing pretrained convolution stacks does not improve (and in fact deteriorates)
the accuracy of the shape-probing decision trees, except for the auto-encoder objective.
array of tools. Among them, scrambling resistance were used to demonstrate that each symbol in our languages has semantic contribution independent from its position. Decision trees based probes informed us that these symbols were put to use to systematically describe all but one of the input image's features, shape being constitently neglected though not entirely ignored despite the possibility we left open through the design of the training instances. These results also connect with design choices: for instance, we saw how entropy regularization and auto-encoder pretraining strengthened the prominence of shape in the messages.
We next plan to experiment with a partition of categories between base and generalization that forces all features to be encoded in the messages, and then use decision trees and other methods to automatically describe the syntax and the semantics of the emergent communication protocols in simple terms, so as to better characterize how these protocols relate to natural language. We also plan to study the impacts of the semantic complexity of the input images on these emergent protocol, using a richer set of features and values, and using unlabeled real-world scenes. Lastly, our findings will have to be confirmed in setups involving other games such as navigation tasks.
## Limitations
There are two main limitations to the present work.
First and foremost is the computational cost associated with the present experiments. We present here results and analyzed gleaned over 10 runs, 7 pretraining regimens, 8 RL gradient propagation variants and 2 data sampling approaches, for a total of 1120 models. While training any one of our models is cheap (less than 3 hours on a single A100 NVIDIA GPU), the total number of models may pose a challenge for future replication studies and comes at an environmental cost. This also prevented us from selecting optimal batch size, learning rate, and so on for specific setups—as described in Appendix A, we set these values globally prior to running experiments. This may affect results and impact conclusions.
Second is the theoretical scope of the current paper. We have focused solely on single-turn, 2 agents signaling game setups. The recommendations and conclusions drawn in the present paper may or may not translate to other language games.
Likewise, while this study aims at exhaustiveness, material limitations have bounded the scope of implementation choices we studied. Some approaches, such as KL regularization (Geist et al.,
2019), have thus been left out of the present study.
## Acknowledgments
The authors deeply thank Takamura Hiroya who participated in preliminary experiments related to this work.
Preliminary results were obtained from project JPNP15009, commissioned by the New Energy and Industrial Technology Development Organization
(NEDO), using the computational resources of the AI Bridging Cloud Infrastructure (ABCI), provided by the National Institute of Advanced Industrial Science and Technology (AIST), Japan.
This work is part of the FoTran project, funded by the European Research Council (ERC) under the EU's Horizon 2020 research and innovation program (agreement
№ 771113). We also thank the CSC-IT
Center for Science Ltd., for computational resources.
This work was also supported by an Émergence 2021 grant (SYSNEULING project) from IdEx Université Paris Cité, as well as a public grant overseen by the French National Research Agency
(ANR) as part of the "Investissements d'Avenir" program: IdEx *Lorraine Université d'Excellence*
(reference: ANR-15-IDEX-0004).
## References
Jacob Andreas. 2019. Measuring compositionality in representation learning. In International Conference on Learning Representations.
Marco Baroni. 2019. Linguistic generalization and compositionality in modern artificial neural networks.
Philosophical Transactions of the Royal Society B,
375.
Diane Bouchacourt and Marco Baroni. 2018. How agents see things: On visual representations in an emergent language game. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 981–985, Brussels, Belgium. Association for Computational Linguistics.
Henry Brighton and Simon Kirby. 2006. Understanding linguistic evolution by visualizing the emergence of topographic mappings. *Artif. Life*, 12(2):229–242.
Rahma Chaabouni, Eugene Kharitonov, Diane Bouchacourt, Emmanuel Dupoux, and Marco Baroni. 2020.
Compositionality and generalization in emergent languages. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*,
pages 4427–4442, Online. Association for Computational Linguistics.
Rahma Chaabouni, Eugene Kharitonov, Emmanuel Dupoux, and Marco Baroni. 2019a. Anti-efficient encoding in emergent communication. In Advances in Neural Information Processing Systems, volume 32.
Curran Associates, Inc.
Rahma Chaabouni, Eugene Kharitonov, Alessandro Lazaric, Emmanuel Dupoux, and Marco Baroni.
2019b. Word-order biases in deep-agent emergent communication. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5166–5175, Florence, Italy. Association for Computational Linguistics.
Rahma Chaabouni, Florian Strub, Florent Altché, Eugene Tarassov, Corentin Tallec, Elnaz Davoodi, Kory Wallace Mathewson, Olivier Tieleman, Angeliki Lazaridou, and Bilal Piot. 2022. Emergent communication at scale. In International Conference on Learning Representations.
Katrina Evtimova, Andrew Drozdov, Douwe Kiela, and Kyunghyun Cho. 2018. Emergent communication in a multi-modal, multi-step referential game. In *International Conference on Learning Representations*.
Jakob Foerster, Ioannis Alexandros Assael, Nando de Freitas, and Shimon Whiteson. 2016. Learning to communicate with deep multi-agent reinforcement learning. In D. D. Lee, M. Sugiyama, U. V. Luxburg, I. Guyon, and R. Garnett, editors, *Advances in Neural Information Processing Systems 29*, pages 2137–
2145. Curran Associates, Inc.
Matthieu Geist, Bruno Scherrer, and Olivier Pietquin.
2019. A Theory of Regularized Markov Decision Processes. In *ICML 2019 - Thirty-sixth International Conference on Machine Learning*, Long Island, United States. ICML 2019.
Chuan Guo, Geoff Pleiss, Yu Sun, and Kilian Q. Weinberger. 2017. On Calibration of Modern Neural Networks. In *International Conference on Machine* Learning, pages 1321–1330. PMLR. ISSN: 26403498.
Serhii Havrylov and Ivan Titov. 2017. Emergence of language with multi-agent games: Learning to communicate with sequences of symbols. In Advances in Neural Information Processing Systems, volume 30.
Curran Associates, Inc.
Geoffrey Hinton, Nitish Srivastava, and Kevin Swersky.
2012. Coursera lectures slides, lecture 6.
Charles F. Hockett. 1960. The origin of speech. *Scientific American*, 203(3):88–96.
Emilio Jorge, Mikael Kågebäck, and Emil Gustavsson.
2016. Learning to play guess who? and inventing a grounded language as a consequence.
Jooyeon Kim and Alice Oh. 2021. Emergent communication under varying sizes and connectivities. In Advances in Neural Information Processing Systems, volume 34, pages 17579–17591. Curran Associates, Inc.
Simon Kirby. 2002. Natural language from artificial life. *Artif Life*, 8(2):185–215.
Simon Kirby, Hannah Cornish, and Kenny Smith. 2008.
Cumulative cultural evolution in the laboratory: An experimental approach to the origins of structure in human language. Proceedings of the National Academy of Sciences, 105(31):10681–10686.
Simon Kirby, Tom Griffiths, and Kenny Smith. 2014.
Iterated learning and the evolution of language. Curr.
Opin. Neurobiol., 28:108–114.
Tomasz Korbak, Julian Zubek, Lukasz Kucinski, Piotr Milos, and Joanna Raczaszek-Leonardi. 2019.
Developmentally motivated emergence of compositional communication via template transfer. *CoRR*,
abs/1910.06079.
Angeliki Lazaridou, Karl Moritz Hermann, Karl Tuyls, and Stephen Clark. 2018. Emergence of linguistic communication from referential games with symbolic and pixel input. In International Conference on Learning Representations.
Angeliki Lazaridou, Alexander Peysakhovich, and Marco Baroni. 2017. Multi-agent cooperation and the emergence of (natural) language. In International Conference on Learning Representations.
David Lewis. 1969. *Convention: a philosophical study*.
Harvard University Press Cambridge.
Paul Pu Liang, Jeffrey Chen, Ruslan Salakhutdinov, Louis-Philippe Morency, and Satwik Kottur. 2020.
On emergent communication in competitive multiagent teams. In *Proceedings of the 19th International* Conference on Autonomous Agents and MultiAgent Systems, AAMAS '20, page 735–743, Richland, SC.
International Foundation for Autonomous Agents and Multiagent Systems.
Timothee Mickus, Timothée Bernard, and Denis Paperno. 2020. What meaning-form correlation has to compose with: A study of MFC on artificial and natural language. In *Proceedings of the 28th International Conference on Computational Linguistics*,
pages 3737–3749, Barcelona, Spain (Online). International Committee on Computational Linguistics.
Daniela Mihai and Jonathon Hare. 2021. Learning to draw: Emergent communication through sketching.
In *Advances in Neural Information Processing Systems*, volume 34, pages 7153–7166. Curran Associates, Inc.
Jesse Mu and Noah Goodman. 2021. Emergent communication of generalizations. In *Advances in Neural* Information Processing Systems, volume 34, pages 17994–18007. Curran Associates, Inc.
M. A. Nowak, J. B. Plotkin, and D. Krakauer. 1999. The evolutionary language game. Journal of Theoretical Biology, 200(2):147–162.
Shivansh Patel, Saim Wani, Unnat Jain, Alexander G.
Schwing, Svetlana Lazebnik, Manolis Savva, and Angel X. Chang. 2021. Interpretation of emergent communication in heterogeneous collaborative embodied agents. In *Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)*,
pages 15953–15963.
Yi Ren, Shangmin Guo, Matthieu Labeau, Shay B. Cohen, and Simon Kirby. 2020. Compositional languages emerge in a neural iterated learning model.
In *International Conference on Learning Representations*.
Mathieu Rita, Rahma Chaabouni, and Emmanuel Dupoux. 2020. "LazImpa": Lazy and impatient neural agents learn to communicate efficiently. In Proceedings of the 24th Conference on Computational Natural Language Learning, pages 335–343, Online.
Association for Computational Linguistics.
Mathieu Rita, Corentin Tallec, Paul Michel, JeanBastien Grill, Olivier Pietquin, Emmanuel Dupoux, and Florian Strub. 2022. Emergent communication:
Generalization and overfitting in lewis games. In Advances in Neural Information Processing Systems.
Sainbayar Sukhbaatar, Arthur Szlam, and Rob Fergus.
2016. Learning multiagent communication with backpropagation. In *Advances in Neural Information* Processing Systems, volume 29. Curran Associates, Inc.
Richard S. Sutton and Andrew G. Barto. 2018. *Reinforcement Learning: An Introduction*, second edition edition. Adaptive Computation and Machine Learning series. MIT Press.
Ronald J. Williams. 1992. Simple statistical gradientfollowing algorithms for connectionist reinforcement learning. *Mach. Learn.*, 8(3–4):229–256.
Kaiqing Zhang, Zhuoran Yang, and Tamer Ba¸sar. 2021.
Multi-Agent Reinforcement Learning: A Selective Overview of Theories and Algorithms, pages 321–
384. Springer International Publishing, Cham.
![10_image_0.png](10_image_0.png)
## B Supplementary Results B.1 Meaning–Form Correlation A Hyperparameters Selection And Training Details
Figure 2: Convergence ratio as a function of learning rate.
most likely to reliably induce a successful emergent communication protocol. We exhaustively test learning rates in {10−x/2| 4 ≤ x ≤ 12}and measure the convergence ratio for groups of 10 runs trained for 50 epochs. Results, displayed in Figure 2, suggest an optimal learning rate of 10−4 which we adopt in all subsequent experiments.14 In Section 4, hyperparameter values for the pretraining procedures were selected based on the models' lack of further improvement on a heldout subset of the training data. Using 1000 steps per epoch and batches of 128 images, we found that 5 epochs and a learning rate of 3 · 10−4 was sufficient to guarantee an accuracy close to 100%
for the classification pretraining tasks, whereas the auto-encoding task required 40 epochs with the same learning rate.
In compositional languages, the meaning and the form of messages tend to be correlated: Minute changes in form (e.g., substitutions of a single token) are expected to correspond to minute changes in meaning. To study the compositionality of the communication protocols set up by the agents, one can also measure their *meaning-form correlation*
(MFC).
Meaning-form correlation, or topological similarity, consists in comparing how the distance between two messages relates to the distance between their semantic contents. More formally, it is computed as a Spearman correlation between two paired samples of distance measurements DF = (dF (oi, oj ))1≤i<j≤n and DM =
(dM(oi, oj ))1≤i<j≤n over the same set of observations, with the assumption that one distance function (dF ) captures variation in form and the other
(dM) capture variation in meaning. For clarity, we 14Learning rates greater than 0.0003 yield to unstable performances, with some runs reverting back to chance-level communication efficiency.
Throughout our experiments, we allow agents to generate messages of up to 10 symbols long, using a vocabulary of 16 symbols. We train all models for up to 100 epochs of 1000 batches each, using 128 training instance per batch. We repeat each training procedure across 10 random seeds. Parameters are optimized with RMSProp (Hinton et al., 2012).
Prior to any experiment reported here, we ran a small-scale grid-search to select a learning rate denote an MFC correlation score using the symbol τ . In our case, we have compared the Jaccard index of the two messages as bags-of-symbols to the Hamming distance between the two corresponding image categories. 15 MFC scores are not easy to interpret by themselves, but it can be illuminating to see how they vary and correlate with properties. While the distribution of MFC and its relation with communication efficiency is quite complex, we have observed that difficult setups (e.g., where a globally useful design choice is not implemented, or where an adversarial sampling strategy factors in) display two trends: on the one hand, they exhibit lower MFC scores, on the other hand, for such a setup, the MFC scores of individual runs are more in line with with performance (i.e., they display a stronger Spearman correlation or a weaker anti-correlation with communication efficiency).
| Implementation | MFC | corr. with c.e. | | | | | |
|-------------------|---------|-------------------|-------|--------|----------|-------|-------------|
| τ | ρ | p | | | | | |
| ⟨. . . , −F , | . . . , | −B ⟩ | 0.348 | -0.157 | < 0.008 | | |
| ⟨. . . , −F , | . . . , | +B ⟩ | 0.388 | -0.237 | < 0.003 | | |
| ⟨ | . . . , | +A, | . . . | ⟩ | 0.328 | 0.396 | < 3 · 10−16 |
| ⟨ | . . . , | −A, | . . . | ⟩ | 0.351 | 0.262 | < 9 · 10−8 |
| ⟨. . . , −F , +A, | . . . , | +B ⟩ | 0.375 | 0.168 | 0.143 | | |
| ⟨. . . , −F , −A, | . . . , | +B ⟩ | 0.400 | -0.385 | < 0.0005 | | |
Table 6: Some MFC scores and correlations with communication efficiency.
For example, the two top rows of Table 6 show a case in which the absence of a baseline term entails a lower MFC and a weaker anti-correlation with c.e. The middle two rows show a case in which the use of the adversarial distractor sampling strategy during training also entails a lower MFC and a stronger correlation with c.e. The two bottom rows show another case in which the adversarial training strategy has a similar effect. In addition, the last row shows that when the training is made particularly easy, the models produce on average messages that are very compositional (in the sense reflected by the MFC), but that the best models diverge from this: the best models are the ones in which the two agents develop some form of co-adaptation at odds with compositionality. This echoes the findings of Chaabouni et al. (2020), who highlight that MFC is not necessarily tied to generalization capabilities.
15Using the Levenshtein distance instead of the Jaccard index yields the same conclusions, as MFC scores derived from either distance are extremely significantly correlated.
## B.2 Decision Trees
Full results for the decision-tree semantic content probes are displayed in Table 7. As noted in the main text, the behavior for size and position features is very similar to that for color, and very distinct from that for shape.
| Implementation | color | size | h-pos | v-pos | shape | |
|------------------------------|-----------------------|--------|---------|---------|---------|-------|
| ⟨ −P, | −F , +A, −H, −C, −B ⟩ | 0.992 | 0.964 | 0.992 | 0.998 | 0.534 |
| ⟨+PAE, −F , +A, −H, −C, −B ⟩ | 0.962 | 0.974 | 0.979 | 0.986 | 0.558 | |
| ⟨+PCW, −F , +A, −H, −C, −B ⟩ | 0.999 | 0.998 | 0.987 | 0.987 | 0.532 | |
| ⟨+PFW, −F , +A, −H, −C, −B ⟩ | 0.993 | 0.968 | 0.993 | 0.993 | 0.537 | |
| ⟨ −P, | −F , +A, +H, −C, −B ⟩ | 0.972 | 0.958 | 0.968 | 0.968 | 0.595 |
| ⟨+PAE, −F , +A, +H, −C, −B ⟩ | 0.988 | 0.992 | 0.991 | 0.984 | 0.656 | |
| ⟨+PCW, −F , +A, +H, −C, −B ⟩ | 1.000 | 0.999 | 1.000 | 1.000 | 0.617 | |
| ⟨+PFW, −F , +A, +H, −C, −B ⟩ | 0.999 | 0.998 | 0.999 | 1.000 | 0.598 | |
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
✓ A1. Did you describe the limitations of your work?
Section Limitations (after Section 7 Conclusions).
A2. Did you discuss any potential risks of your work?
Not applicable. Left blank.
✓ A3. Do the abstract and introduction summarize the paper's main claims?
Abstract and Section 1 Introduction.
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✓ **Did You Use Or Create Scientific Artifacts?** Section 3 Experimental Setup, Paragraph Dataset.
✓ B1. Did you cite the creators of artifacts you used?
Section 3 Experimental setup, paragraph Dataset.
B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
Not applicable. (I think it is not applicable as we haven't yet released the dataset that we are using.)
✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
Section 3 Experimental setup, paragraph Dataset.
B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
Not applicable. Left blank.
B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
Not applicable. Left blank.
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
Section 3 Experimental setup, paragraph Dataset.
## C ✓ **Did You Run Computational Experiments?**
Section 3 Experimental setup and Section 4 Implementation choices.
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
Section Limitations (after Section 7 Conclusions).
The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance.
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Section 3 Experimental setup and Appendix A Hyperparameters selection and training details.
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
Section 6 Results.
C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
Not applicable. Left blank.
## D ✗ **Did You Use Human Annotators (E.G., Crowdworkers) Or Research With Human Participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
No response.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? No response.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
No response.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
Not applicable. Left blank.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
No response. |
wang-xie-2023-constructing | Constructing Word-Context-Coupled Space Aligned with Associative Knowledge Relations for Interpretable Language Modeling | https://aclanthology.org/2023.findings-acl.532 | As the foundation of current natural language processing methods, pre-trained language model has achieved excellent performance. However, the black-box structure of the deep neural network in pre-trained language models seriously limits the interpretability of the language modeling process. After revisiting the coupled requirement of deep neural representation and semantics logic of language modeling, a Word-Context-Coupled Space (W2CSpace) is proposed by introducing the alignment processing between uninterpretable neural representation and interpretable statistical logic. Moreover, a clustering process is also designed to connect the word- and context-level semantics. Specifically, an associative knowledge network (AKN), considered interpretable statistical logic, is introduced in the alignment process for word-level semantics. Furthermore, the context-relative distance is employed as the semantic feature for the downstream classifier, which is greatly different from the current uninterpretable semantic representations of pre-trained models. Our experiments for performance evaluation and interpretable analysis are executed on several types of datasets, including SIGHAN, Weibo, and ChnSenti. Wherein a novel evaluation strategy for the interpretability of machine learning models is first proposed. According to the experimental results, our language model can achieve better performance and highly credible interpretable ability compared to related state-of-the-art methods. | # Constructing Word-Context-Coupled Space Aligned With Associative Knowledge Relations For Interpretable Language Modeling
Fanyu Wang and **Zhenping Xie**∗
School of Artificial Intelligence and Computer Science, Jiangnan University, China [email protected] [email protected]
## Abstract
As the foundation of current natural language processing methods, pre-trained language model has achieved excellent performance. However, the black-box structure of the deep neural network in pre-trained language models seriously limits the interpretability of the language modeling process. After revisiting the coupled requirement of deep neural representation and semantics logic of language modeling, a Word-Context-Coupled Space (W2CSpace) is proposed by introducing the alignment processing between uninterpretable neural representation and interpretable statistical logic. Moreover, a clustering process is also designed to connect the word- and context-level semantics. Specifically, an associative knowledge network (AKN), considered interpretable statistical logic, is introduced in the alignment process for word-level semantics. Furthermore, the context-relative distance is employed as the semantic feature for the downstream classifier, which is greatly different from the current uninterpretable semantic representations of pre-trained models. Our experiments for performance evaluation and interpretable analysis are executed on several types of datasets, including SIGHAN, Weibo, and ChnSenti. Wherein a novel evaluation strategy for the interpretability of machine learning models is first proposed. According to the experimental results, our language model can achieve better performance and highly credible interpretable ability compared to related stateof-the-art methods.1
## 1 Introduction
Machine learning has recently been democratized in various domains, such as search engines, conversational systems, and autonomous driving(Gao et al., 2018; Grigorescu et al., 2020; Wang et al.,
2022). However, despite AI technologies signifi-
*Corresponding author 1https://github.com/ColeGroup/W2CSpace cantly facilitating industrial processes and improving work experiences, the uninterpretable logic of machines leads to distrust, which hinders further development of AI. Explainable Artificial intelligence (XAI), proposed to bridge the block between humans and machines, has been increasingly attracting attention recently, where "explanation" is described as abductive inference and transferring knowledge (Josephson and Josephson, 1996; Miller, 2019). Calling for explaining and understanding the machine learning process, researchers aim to interpret the methods for system verification, compliance with legislation, and technology improvement.
Computational linguistics, which serves as the theoretical foundation for NLP, aims to promote communication between humans and machines
(Khan et al., 2016). However, during the recent development, the uninterpretable NLP methods have raised concerns. The decreased transparency and increased parameter complexity adversely affect the model explainability and controllability, even if the performance of the language models has significantly improved, such as BERTs (Devlin et al.,
2019; Liu et al., 2019a; Clark et al., 2020), GPTs
(Radford et al., 2018, 2019; Brown et al., 2020) and so on. The performance advantage of the black-box methods is attractive while the researchers investigate interpretable algorithms. Therefore, the existing works mainly focus on (1) explaining the blackbox methods and (2) using interpretable models
(Ribeiro et al., 2016; Li et al., 2022a).
In order to explain over-parameterized language models, the model-agnostic analysis is investigated on recent deep neural methods. Without modifying the black-box models, the researchers analyze the immediate feature of the neural layers, attention distribution, and so on (Clark et al., 2019; Vig, 2019; Rogers et al., 2020). Quantitative experiments and visual analysis is able to partially reveal the behaviors of the key components and the overall response of the methods to certain patterns (Hewitt and Manning, 2019; Kovaleva et al.,
2019). However, without any interpretable optimization of the model, the analysis is unable to provide enough detail for understanding (Rudin, 2019), which indicates a completely faithful explanation from black-box components or deep neural methods is impossible.
Different from model-agnostic analysis, methods that integrate interpretable algorithms enable more comprehensive interpretability. Two different types of structures are adopted in these approaches, including (1) a black-box backbone with an interpretable bypass for implicit informing and (2) a transparent backbone with an interpretable algorithm for direct interpreting (Beckh et al., 2021).
For implicitly informed methods, the introduced interpretable knowledge regulates the immediate feature or embedding (Liu et al., 2019b; Rybakov et al., 2020). Within the interpretable bypass, the performance of the backbone is maximally preserved, which is the main reason these structures are often opted for over-parameterized models
(Jang et al., 2021; Chen et al., 2020). However, the integrated knowledge is unable to decisively change the structure of the backbones for a transparent decision process, which adversely affects the generalization ability of the approaches and limits the application of the methods to specific tasks. In contrast, approaches with interpretable backbones exhibit a more integrated relationship between the components, enabling better explanations than the above two types of approaches. The interpretable algorithms serve as word embedding, immediate feature, or the classifier to realize transparency decision process (Onoe and Durrett, 2020; Lee et al., 2022; Kaneko et al., 2022). But the performance of existing interpretable models remains incomparable to the most advanced language models.
In this work, we address the aforementioned obstacles by developing a novel interpretable language modeling method by constructing a Word-Context-Coupled **Space** (W2CSpace) aligned with statistical knowledge2, which enables (1) effective interpretation of BERT representations (Devlin et al., 2019) by introducing the interpretable statistical logic, (2) reasonable context abstraction with the coupled word-level semantics, and (3) interpretable modeling for the given text with the context-relative distance. W2CSpace serves as the key component in the backbone of our language modeling method, which realizes a decisive transparency increasing compared with the modelagnostic and implicitly informed methods. The structure of our method is illustrated in Figure 1.
Specifically, our main contributions can be summarized as follows:
- Word-level semantics in W2CSpace is originated from BERT immediate representation with the help of a mapping network, preserving the language modeling ability of the deep neural methods (Section 2.1.1).
- An associative matrix sampled from associative knowledge network (AKN, Li et al.,
2022c) is introduced for aligning with the semantic distances (Section 2.1.2 and 2.1.3).
- Based on the linguistic concept, the contexts are abstracted using k-means clustering on neighboring word elements. (Section 2.2.1).
- The context-relative distance, computed between the input text and the context clusters in W2CSpace, serves as the semantic feature to describe the text semantics (Section 2.3.1).
- The experiments on different NLP tasks demonstrate the effectiveness of W2CSpace.
Additionally, an interpretable analysis is designed to verify the interpretability of the our method (Section 3).
## 2 Methodology 2.1 Initialization Of W2Cspace
Since current researchers opt for high-dimensional representations in their methods, it is widely believed that over-parameterization advances language modeling performance. With respect to the standard language modeling process, the words in the given text are modeled based on their meaning under different contexts. Regardless of the attributes of the words themselves, greater representation dimensions enable better performance in distinguishing the words with similar semantics. However, different from the deep neural methods, the linguistic attributes of the text, such as co-occurrence rules, word shape, and so on, serve
![2_image_0.png](2_image_0.png)
as the basis for NLP tasks, which fit the understanding and deduction processes of humans.
In order to unify the high-dimensional representations and interpretable statistical knowledge, we design a mapping network to transfer the semantic representation from BERT encoder to lowdimensional elements in W2CSpace and introduce a statistical alignment with AKN during the training of the mapping network. Within the above processing, the mapped elements are distributed in W2CSpace according to their corresponding wordlevel semantics.
## 2.1.1 Representation Mapping From Bert
The mapping network is a neural network with a backbone of a convolution network, which enables dimension reduction process. The mapped elements in smaller dimensions are regarded as the coordinates of the word-level semantics in W2CSpace. Besides, the introduction of the convolution network is able preserve semantic information from BERT for maintain the performance advantages of pre-trained models. For BERT representations F B of given sentence S =
{x1, x2*, . . . , x*d}, the corresponding elements C =
{c1, c2*, . . . , c*d} in W2CSpace are obtained accord-
## Ing To:
$$\mathbf{C}=\mathrm{Tanh}\{\mathrm{LN}[\mathrm{Convs}(\mathbf{F}_{B})+\mathrm{Res}(\mathbf{F}_{B})]\}\tag{1}$$
where F B ∈ R
n×hand C ∈ R
n×k, h is the hidden sizes of BERT encoder and k is the coordinate size of W2CSpace. Tanh(·) and LN(·) are Tanh and layer normalization operations. A convolution network Convs(·) with different filter sizes added a residual connection Res(·) is used in the mapping network.
## 2.1.2 Statistical Alignment With Akn
AKN, a statistical network based on phrase cooccurrence, is introduced to sample to an associative matrix reflected the associative relations among the given sentence, which is also adopted in previous work (AxBERT, Wang et al., 2023). Within the original AKN is conducted on phrase-level, we modify AKN to word-level to fit the processing of BERT and opt for the construction and sampling methods of the AKN (A) and associative matrix
(MS) similar to AxBERT as:
$$\begin{array}{c c c}{{A_{i,j}=\prod_{\mathrm{sent}}\mathrm{SR}\sum_{\mathrm{sent}}\frac{1}{d i s t a n c e_{\langle i,j\rangle}}}}&{{}}&{{(2)}}\\ {{}}&{{}}&{{}}\\ {{M_{S i,j}=\sigma\frac{A_{i,j}}{\mathrm{Avg}(\bar{A}_{i:})}-0.5}}&{{}}&{{(3)}}\end{array}$$
where A ∈ R
v×v, MS ∈ R
d×d, v is the length of the word list, d is the length of the given sentence, and σ(·) and Avg(·) are functions of sigmoid and average. For the word pair ⟨i, j⟩, distance⟨i,j⟩ =
|i − j| is the word-distance between i-th and j-th word in sentence. A˙i:is the association score of i-th word under current sentence, and MSi:is the i-th row of MS.
3 Since we compute the cosine distances matrix in the given sentence, the associative matrix is aligned with the word-level distance matrix to integrate the statistical logic into W2CSpace. We introduce a mean square indicator IMS to indicate the alignment result. Specifically, for the word pair ⟨*i, j*⟩,
the indicator I*MSi,j* is defined as:
$$I_{M S i,j}=\mathrm{MnSqr}[\mathrm{CosDis}(c_{i},c_{j}),M_{S i,j}]\quad(4)$$
where IMS ∈ R
n×nand n is the length of the given sentence. MnSqr(·) and CosDis(·, ·) are the mean square and cosine distance functions.
## 2.1.3 Training Of Mapping Network
The objective LM of the mapping network is composed of LMS and LRec, which correspond to the mean square error loss and the reconstruction loss.
With respect to the statistical alignment process, the mapping network is trained under the alignment objective LM. Besides, we introduce a reconstruction loss by reversing the structure of mapping network to reconstruct the representation of BERT immediate feature F B. The objective of the mapping network is calculated according to:
$L_{M}=L_{MS}+L_{Rec}$ (5) $L_{MS}=$ Mean($I_{MS}$) (6) $L_{Rec}=$ MAE($C-F_{B}$) (7)
where Mean(·) is the average function, MAE(·)
is the mean absolute error operation (Choi et al.,
2018). The introduction of reconstruction loss aims to guarantee that the mapped word elements preserve the semantics of BERT representations.
## 2.2 Abstraction Of Context-Level Semantics
Humans are able to recognize emotion from language, action, and so on (Barrett et al., 2007).
Specifically, in linguistics, humans recognize emotion with the context in the given sentences. However, humans are able to feel the emotion rather than explicitly describe it, because that context is 3The same shrink rate SR = 0.95 as AxBERT.
an abstract concept in linguistics and is hard to simply quantify.
While the sentence is composed of words, the corresponding context is established from the word semantics, which can be realized in W2CSpace.
Therefore, we employ the k-means clustering on the word-level semantics to abstract the context semantics. The context is able to be extracted based on the common semantics among the words located adjacently in W2CSpace.
## 2.2.1 Context Clustering Based On Word Semantics
With the help of the mapping network and the statistical alignment, the word elements are reasonably distributed in W2CSpace according to their semantics, where the neighbors in W2CSpace refer to similar semantics. k-means clustering is an algorithm based on the distances (Hartigan and Wong, 1979), which is introduced to abstract the word semantics to k classes according to their semantics
(distance). The clustering process is defined as:
$$\mathbf{X}\{x_{i}|\text{Context}(x_{i})\}=\text{KM}_{\text{CosDis}}(c_{1},c_{2},\ldots,c_{n})\tag{8}$$ where $x_{i}$ is the $i$-th classes of context and
KMCosDis(·) is the k-means clustering algorithm based on cosine distance. The word semantics in W2CSpace is clustered into k classes, which represents the process of abstraction of context.
## 2.2.2 **Reasonable Cluster Merging For Context** Clustering
When the clustering is executed, especially in kmeans algorithm, the appropriate k number is hard to determine (Hamerly and Elkan, 2003). As for the clustering process for context, the number k additionally represents the types of the contexts in W2CSpace. A small number of k possibly decreases the performance of language modeling with a rough context environment, but the large number is contrary to the logic of humans as humans cannot precisely distinguish the detailed emotion behind the text as a machine does. Besides, the large number of k increase the time-costing of the language modeling process. Choosing the right number k relate to the reasonability of the context subspace.
Serving as the part of W2CSpace, the context clusters are used for language modeling. Therefore, we introduce a merge matrix to the top of the clustering results, which is optimized under the downstream task. With the guidance of the downstream tasks, the merge matrix dynamically adjusts context semantics for the inter-communication between different context clusters, which reflects a gradual clustering process and realizes a reasonable context clusters for the downstream tasks. The clustering is able to be balanced with the computation process of the merging is defined as:
$\bar{\bf X}\{\bar{x}_{i}|{\rm Context}(\bar{x}_{i})\}={\bf M}_{\rm M}\times[x_{1},x_{2},\ldots,x_{k}]$
(9)
where x¯iindicates the context semantics after merging, MMerge is the merge matrix and MMerge ∈
R
n×k. n is the size of coordinates in W2CSpace and k is the presetting number of the clustering. 4
## 2.3 Interpretable Language Modeling Via W2Cspace
As the standard methodology for language modeling in deep neural methods, the semantic representation is gradually modeled through containing neural networks, which is a simulation of the neural processing of the brain. However, the structural simulation is unable to realize the interpretable on logical level. The decision process through neural networks still remains in the black box.
From the perspective of humans, emotion recognition is significant in daily (Barrett et al., 2011),
which is also an important ability for machines to interact with humans (Kosti et al., 2017). By simulating the recognition process of humans, we introduce a context-relative distance computed between the given text and the contexts in W2CSpace, which enables the interpretable language modeling process with the cooperation of the semantics on word- and context-level.
## 2.3.1 Computation Of Context-Relative Distance
The context-relative distance based on the cosine distance is also adopted in Formula 4. Compared with the euclidean distance, the equation of the cosine distance is more efficient in time-costing and storage. The context-relative distance D is computed according to:
$$\mathbf{D}=\mathrm{CosDis}(\bar{\mathbf{X}},\mathbf{C})$$
D = CosDis(X¯ , C) (10)
where X¯ is the context clusters, C is the mapped word elements from BERT encoder, D is the context-relative distance and D ∈ R
d×k. n is the 4However, the number k will influence the performance of language modeling, which is discussed in Section 3.4.
word length of the given text and k is the number of the context clusters.
## 2.3.2 Training Of Interpretable Language Modeling Method
The context-relative distance is able to directly connect with the downstream classifier, which is similar to the traditional encoding structure. The interpretable language modeling component is regarded as the standard BERT-based encoder for downstream tasks, where the standard objectives in transformer package5are employed.
## 3 Experiments 3.1 Experimental Settings
We conduct our work on NVIDIA Tesla A100 and AMD EPYC 7742 64-Core CPU. During the interpretable language modeling process, BERT-baseChinese pre-trained model is used, and the original parameters are opted 6. Additionally, we opt for the rate of 0.3 for all the dropout layers, a learning rate of 2e-5 for 10-epoch-training of BERT encoder in Fig. 1a, a learning rate of 1e-5 for 3-epoch-training of the mapping network in Fig. 1b.
## 3.2 Datasets
The detailed information of the datasets is exhibited in Table 1. **CLUE** 7, an open-ended, communitydriven project, is the most authoritative Chinese natural language understanding benchmark (Xu et al.,
2020), where the news dataset is used to initialize associative knowledge network; **SIGHAN15** is benchmark for traditional Chinese spelling check evaluation (Tseng et al., 2015), which is widely adopted in simplified Chinese spelling check evaluation by converting to simplified Chinese (Cheng et al., 2020; Liu et al., 2021); **Hybird** is a massive dataset for Chinese spelling correction (Wang et al., 2018), which is used the training of the correction methods (Wang et al., 2019; Cheng et al.,
2020); **Weibo** 8and **ChnSenti** 9sentiment dataset is constructed with the comments from the largest Chinese social community (Sina Weibo) and Chinese hotel reservation websites, which are adopted in the previous work for sentiment classification evaluation (Li et al., 2020, 2022b).
5https://pytorch.org/hub/huggingface_pytorchtransformers/
6https://huggingface.co/bert-base-chinese 7https://github.com/CLUEbenchmark/CLUE
8https://github.com/pengming617/bert_classification 9https://github.com/SophonPlus/ChineseNlpCorpus/
| Dataset | TrainSet | TestSet | Dataset Type | Usage |
|-----------|------------|-----------|----------------|-----------------------------------------------|
| CLUE | 2,439 | - | Article | Initialization of AKN |
| HybirdSet | 274,039 | 3,162 | Sentence | Training of correction task |
| SIGHAN15 | 6,526 | 1,100 | Sentence | Evaluation of correction task |
| ChnSenti | 9,600 | 1,089 | Article | Training and Evaluation of sentiment analysis |
| Weibo100k | 100,000 | 10,000 | Article | Training and Evaluation of sentiment analysis |
## 3.3 Comparison Approaches
We fine-tune our method with standard classifiers of BERT Masked LM and sequence classification for spelling correction and sentiment classification with training of 10 epoch, 1e-5 learning rate and 3 epoch, 1e-5 learning rate.
SoftMask is a BERT-based spelling correction method with a soft-mask generator, where the softmasked strategy is similar to the concept of error detection (Zhang et al., 2020).
FASPell conducts the Seq2Seq prediction by incorporating BERT with additional visual and phonology features (Hong et al., 2019).
SpellGCN incorporates BERT and the graph convolutional network initialized with phonological and visual similarity knowledge for Chinese spelling correction (Cheng et al., 2020).
PLOME integrates the phonological and visual similarity knowledge into a pre-trained masked language model with a large pre-train corpus consisted of one million Chinese Wikipedia pages. And it is the SOTA in previous work (Liu et al., 2021).
HeadFilt is an adaptable filter for Chinese spell check, which introduce a hierarchical embedding according to the pronunciation similarity and morphological similarity (Nguyen et al., 2021).
HLG is a Chinese pre-trained model for word representation by aligning the word-level attention with the word-level distribution with a devised pooling mechanism (Li et al., 2020).
MWA introduces a heterogeneous linguistics graph to pre-trained language model. The graph-based structure integrates the linguistics knowledge in the neural network and achieves the SOTA performance in language modeling (Li et al., 2022b).
HLG and MWA is employed on various pretrained language model, such as vanilla BERT (Devlin et al., 2019), BERT-wwm (Cui et al., 2021),
and ERNIE (Sun et al., 2019). We use the evaluation results on different pre-trained language models in their original paper.
## 3.4 Main Experiments
The efficacy of our interpretable language modeling method is evaluated on different tasks, including Chinese spelling correction and sentiment classification. Chinese spelling correction requires the advanced language model for token classification, where every word in the given text is classified into a single class. And the sequence classification is needed in Chinese sentiment classification, where the given text is classified into positive and negative sentiment. The token and sequence classification tasks are able to cover most of the current classification scenario, which enable the efficient demonstration of the language modeling performance of our method.
## 3.4.1 Results Of Chinese Spelling Correction
Similar with the past works (Cheng et al., 2020; Liu et al., 2021), the correction experiment is employed on word- and sentence-level. Within a more comprehensive perspective, the sentence-level evaluation is wider adopted and more convincing, so we use the same evaluation matrix with the past works (Liu et al., 2021; Nguyen et al., 2021).
As shown in Table 2, the evaluation on wordand sentence-level composed of different indexes, including detection precision (DP), correction precision (CP), detection recall (DR), correction recall
(CR), detection F1 score (DF1) and correction F1 score (CF1). Besides, we assess the influence of the number choosing of n (the size of the coordinates in W2CSpace) and k (the context number in k-means clustering algorithm).
From the correction results in Table 2, our method outperforms the baselines on both wordand sentence-level. Specifically, at sentence-level, our method respectively advances 0.2 and 0.7 points in DF1 and CF1; at word-level, our method is not able to achieve comparable performance than PLOME (Liu et al., 2021), but advances than the SpellGCN (Cheng et al., 2020) with a 0.5 and 0.4 point improvement, and we think the massive train-
| W2CSpace n = 50 W2CSpace n = 100 |
|------------------------------------|
Method kWord Level Sentence Level
DP DR DF1 CP CR CF1 DP DR DF1 CP CR CF1
FASPell - - - - - - - 67.6 60.0 63.5 66.6 59.1 62.6
SoftMask - - - - - - - 73.7 73.2 73.5 66.7 66.2 66.4
BERT - 92.7 85.0 88.7 96.2 81.8 88.4 76.5 78.6 77.5 76.0 76.5 76.3
PLOME†- *94.5* 87.4 *90.8 97.2 84.3 90.3* 77.4 *81.5* 79.4 75.3 *79.3* 77.2
SpellGCN - 88.9 **87.7** 88.3 95.7 **83.9** 89.4 74.8 80.7 77.7 72.1 77.7 75.9
HeadFilt - - - - - - - 84.5 71.8 77.6 84.2 70.2 76.5
| DP | DR | DF1 | CP | CR | CF1 | DP | DR | DF1 | CP | CR | CF1 | |
|------|------|-------|------|------|-------|------|------|-------|------|------|-------|------|
| 500 | 90.5 | 86.2 | 88.3 | 96.2 | 82.9 | 89.0 | 76.4 | 77.6 | 77.0 | 75.8 | 75.1 | 75.5 |
| 1000 | 90.9 | 85.8 | 88.2 | 96.3 | 82.6 | 88.9 | 78.7 | 79.6 | 79.2 | 78.1 | 76.3 | 77.4 |
| 1500 | 91.1 | 87.1 | 89.0 | 96.4 | 83.9 | 89.7 | 78.0 | 79.8 | 78.9 | 77.4 | 77.0 | 77.2 |
| 2000 | 91.7 | 86.2 | 88.9 | 96.3 | 83.0 | 89.2 | 79.2 | 79.8 | 79.5 | 78.5 | 76.6 | 77.5 |
| 3000 | 91.9 | 86.5 | 89.1 | 96.7 | 83.6 | 89.6 | 78.5 | 80.1 | 79.3 | 78.0 | 77.7 | 77.9 |
| 500 | 90.7 | 86.0 | 88.3 | 96.3 | 82.8 | 89.1 | 76.6 | 80.5 | 78.5 | 75.9 | 77.3 | 76.6 |
| 1000 | 90.6 | 86.3 | 88.4 | 96.2 | 83.2 | 89.1 | 77.2 | 79.8 | 78.5 | 76.4 | 76.6 | 76.5 |
| 1500 | 91.2 | 86.3 | 88.7 | 95.8 | 82.7 | 88.8 | 79.4 | 79.4 | 79.4 | 78.8 | 76.6 | 77.7 |
| 2000 | 91.4 | 87.3 | 89.3 | 95.7 | 83.6 | 89.2 | 78.3 | 80.9 | 79.6 | 77.6 | 77.9 | 77.8 |
| 3000 | 90.9 | 86.4 | 88.6 | 96.8 | 83.7 | 89.8 | 76.9 | 79.7 | 78.1 | 76.3 | 76.7 | 76.5 |
Table 3: Results of sentiment classification.
| Method | k | ChnSenti | Weibo100K |
|------------------|-------|------------|-------------|
| BERT | - | 94.72 | 97.31 |
| +MWA | - | 95.34 | 98.14 |
| +HLG | - | 95.83 | 98.17 |
| BERTwwm | - | 94.38 | 97.36 |
| +MWA | - | 95.01 | 98.13 |
| +HLG | - | 95.25 | 98.11 |
| ERNIE | - | 95.17 | 97.30 |
| +MWA | - | 95.52 | 98.18 |
| +HLG | - | 95.83 | 98.22 |
| 50 | 95.70 | 98.22 | |
| 100 | 95.70 | 98.24 | |
| 200 | 95.20 | 98.27 | |
| 500 | 95.45 | 98.30 | |
| 800 | 95.45 | 98.31 | |
| 1000 | 95.03 | 98.23 | |
| W2CSpace n = 50 | 50 | 95.53 | 98.29 |
| 100 | 94.94 | 98.25 | |
| 200 | 95.62 | 98.27 | |
| 500 | 95.11 | 98.31 | |
| 800 | 95.87 | 98.31 | |
| 1000 | 95.37 | 98.28 | |
| W2CSpace n = 100 | | | |
ing dataset of PLOME significantly enhance the correction performance, where the size of dataset composed of per-train and fine-tune dataset is 600 times larger than ours.
With respect to the changes of the parameters of W2CSpace, the correction performance are various.
W2CSpace cannot achieves best performance on a specific parameter combination, but the overall performance is comparable attractive. Besides, we notice that the W2CSpace with a larger size of the coordinates performs better than the smaller one, where the larger W2CSpace is advanced in 8 indexes. The advantages of enlarging the number k are not obvious, where W2CSpace with n = 100 and k = 2000 is the most advanced combination for the correction task. However, W2CSpace with k = 1500 and k = 3000 is also a good choice for correction. Generally, we think the introduction of the merge matrix balances the difference of k.
3.4.2 Results of Chinese Sentiment
## Classification
We evaluate the sentiment classification performance with the classification accuracy. In Table 3, the results of the sentiment classification are illustrated. Specifically, even the other pre-training models partly perform better than BERT, our method achieves improvements on both ChnSenti and Weibo100K datasets as 0.04 and 0.09 points.
For the choice of n and k numbers, our method achieves advanced performance with the combi-
†While the other comparison methods are trained on HybirdSet, PLOME additionally pre-trained on a 600 times larger dataset compared with HybirdSet. We uniquely highlight the advanced index of PLOME with *bold italic font*.
![7_image_0.png](7_image_0.png)
nation of n = 100 and k = 800 for sentiment classification, which is different from the correction experiment. Besides, similar to the tendency in the correction experiment, a larger n number enables a small improvement in performance.
## 3.5 Interpretable Analysis
The interpretable machine learning methods are considered with a transparent decision process
(Molnar et al., 2020; Verhagen et al., 2021). However, the method interpretability is hard to define.
The rule-based approaches, widely regarded as the interpretable methods, are especially advanced in the controllability that is one of the most significant characteristics of interpretability (Lee et al., 2017; Tian et al., 2019; Tripathy et al., 2020).
The context-relative distance, the key in our interpretable language modeling process, originated from the relativity between the input sentences and the context in W2CSpace. Therefore, we design an analysis focused on the interpretability of context-relative distance by means of Chinese sentiment classification task. The procedure of interpretable analysis is exhibited in Figure 2. Ideally, the context-relative distance correlates with sentiment, e.g., the shorter distance indicates stronger relativity between the context and the labeled sentiment. If sentiment prediction results change correspondingly after reversing the context space, it convincingly shows that (1) the interpretable knowledge from AKN is integrated into W2CSpace, (2)
the feature mapping and context abstraction processes are conducted reasonably, and (3) the distance within W2CSpace is interpretable and associated with the emotion conveyed by the input text.
The interpretable analysis result on Weibo100K
is exhibited in Table 4. OA is the original accuracy, CA is the sentiment classification accuracy after modification, and RA is the reversing accuracy for the sentences that successfully reverse the sentiment labels after modification. Because language modeling mainly relies on context semantics, the predicted sentiments after modification should be reversed compared with the original prediction.
From the interpretable results, the values of RA
approximate to 100%, which indicates that the predicted sentiments are mostly reversed and matches our expectation. The predictable changes of the accuracy reflects the controllability of our method and the interpretability of W2CSpace. Besides, from the perspective of model structure, the ideal transparent method enables a completely controllable decision process from input to output. And in our method, even though some parts are still in black-box, but RA reflects the interpretability between the decision processes from W2CSpace to the output and from input to the output. The interpretable contexts are consistent with the linguistics logic in input articles and serves as the agent to cooperate with the articles to realize the controllable process. The value of RA does not directly indicate the interpretability of our method, but but the more approximate to 1, the more semantically explainable of W2CSpace and its context.
Table 4: Interpretable results.
n k OA CA RA
50
100 98.24 3.69 96.62 500 98.30 2.96 98.12
1000 98.23 1.75 99.90
100
100 98.25 1.88 99.81 500 98.31 3.79 96.78
1000 98.28 2.76 98.48
## 4 Conclusion
An interpretable language model is proposed by constructing a word-context-coupled space in this study. Within W2CSpace, (1) the uninterpretable neural representation in BERT is regulated by interpretable associative knowledge relations, (2) an intermediate representation space with reasonable interpretable semantics is designed, (3) an interpretable semantic feature is introduced based on intermediate representation space for downstream classifiers. Obviously, the above strategies bring a strong generalization ability for the interpretable pre-trained language modeling process. Besides, for the potential risk preventing, the interpretable machine learning method is introduced for migrating the adverse affects from the black-box structure. Moreover, in our method, the controllable decision process realize the regulation for the illegal language inputting by controlling the related context, and the strong cooperation between pretrained models and W2CSpace can protect the parameter privacy from the data stealing.
Nevertheless, W2CSpace is unable to directly handle high-level semantics, including sentences, paragraphs, and so on. Even the word-level language models act as the mainstream methods in NLP, the above limitation should be further considered in the future. Besides, restricted by our knowledge and efforts, the main experiments cannot cover all common tasks and all pre-trained models in NLP. Relatively, the token- and sequence-level classifications have demonstrated attractive experimental performance on most NLP tasks. Next, we also plan to extend W2CSpace to more NLP tasks and find its more specific value.
## Acknowledgements
This work was supported in part by the National Natural Science Foundation of China (NSFC) under Grant 62272201, and 61872166; in part by the Six Talent Peaks Project of Jiangsu Province under Grant 2019 XYDXX-161.
## References
Lisa Feldman Barrett, Kristen A. Lindquist, and Maria Gendron. 2007. Language as context for the perception of emotion. *Trends in Cognitive Sciences*,
11(8):327–332.
Lisa Feldman Barrett, Batja Mesquita, and Maria Gendron. 2011. Context in emotion perception. *Current* Directions in Psychological Science, 20(5):286–290.
Katharina Beckh, Sebastian Müller, Matthias Jakobs, Vanessa Toborek, Hanxiao Tan, Raphael Fischer, Pascal Welke, Sebastian Houben, and Laura von Rüden. 2021. Explainable machine learning with prior knowledge: An overview. *CoRR*, abs/2105.10172.
Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel Ziegler, Jeffrey Wu, Clemens Winter, Chris Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020.
Language models are few-shot learners. In *Advances in Neural Information Processing Systems*,
volume 33, pages 1877–1901. Curran Associates, Inc.
Zhi Chen, Yijie Bei, and Cynthia Rudin. 2020. Concept whitening for interpretable image recognition.
Nature Machine Intelligence, 2(12):772–782.
Xingyi Cheng, Weidi Xu, Kunlong Chen, Shaohua Jiang, Feng Wang, Taifeng Wang, Wei Chu, and Yuan Qi. 2020. SpellGCN: Incorporating phonological and visual similarities into language models for Chinese spelling check. In *Proceedings of the 58th* Annual Meeting of the Association for Computational Linguistics, pages 871–881, Online. Association for Computational Linguistics.
Yunjey Choi, Minje Choi, Munyoung Kim, Jung-Woo Ha, Sunghun Kim, and Jaegul Choo. 2018. Stargan:
Unified generative adversarial networks for multidomain image-to-image translation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
Kevin Clark, Urvashi Khandelwal, Omer Levy, and Christopher D. Manning. 2019. What does BERT
look at? an analysis of BERT's attention. In Proceedings of the 2019 ACL Workshop BlackboxNLP:
Analyzing and Interpreting Neural Networks for NLP,
pages 276–286, Florence, Italy. Association for Computational Linguistics.
Kevin Clark, Minh-Thang Luong, Quoc V. Le, and Christopher D. Manning. 2020. Electra: Pre-training text encoders as discriminators rather than generators.
In *International Conference on Learning Representations*.
Yiming Cui, Wanxiang Che, Ting Liu, Bing Qin, and Ziqing Yang. 2021. Pre-training with whole word masking for chinese bert. *IEEE/ACM Transactions on Audio, Speech, and Language Processing*,
29:3504–3514.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics.
Jianfeng Gao, Michel Galley, and Lihong Li. 2018. Neural approaches to conversational ai. In *The 41st International ACM SIGIR Conference on Research and* Development in Information Retrieval, SIGIR '18, page 1371–1374, New York, NY, USA. Association for Computing Machinery.
Sorin Grigorescu, Bogdan Trasnea, Tiberiu Cocias, and Gigel Macesanu. 2020. A survey of deep learning techniques for autonomous driving. *Journal of Field* Robotics, 37(3):362–386.
Greg Hamerly and Charles Elkan. 2003. Learning the k in k-means. In *Advances in Neural Information* Processing Systems, volume 16. MIT Press.
J. A. Hartigan and M. A. Wong. 1979. Algorithm as 136: A k-means clustering algorithm. Journal of the Royal Statistical Society. Series C (Applied Statistics),
28(1):100–108. Full publication date: 1979.
John Hewitt and Christopher D. Manning. 2019. A
structural probe for finding syntax in word representations. In *Proceedings of the 2019 Conference of* the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4129–4138, Minneapolis, Minnesota. Association for Computational Linguistics.
Yuzhong Hong, Xianguo Yu, Neng He, Nan Liu, and Junhui Liu. 2019. FASPell: A fast, adaptable, simple, powerful Chinese spell checker based on DAEdecoder paradigm. In *Proceedings of the 5th Workshop on Noisy User-generated Text (W-NUT 2019)*,
pages 160–169, Hong Kong, China. Association for Computational Linguistics.
Hyeju Jang, Seojin Bang, Wen Xiao, Giuseppe Carenini, Raymond Ng, and Young ji Lee. 2021. KW-ATTN:
Knowledge infused attention for accurate and interpretable text classification. In Proceedings of Deep Learning Inside Out (DeeLIO): The 2nd Workshop
on Knowledge Extraction and Integration for Deep Learning Architectures, pages 96–107, Online. Association for Computational Linguistics.
John R Josephson and Susan G Josephson. 1996. *Abductive inference: Computation, philosophy, technology*.
Cambridge University Press.
Masahiro Kaneko, Sho Takase, Ayana Niwa, and Naoaki Okazaki. 2022. Interpretability for language learners using example-based grammatical error correction. In *Proceedings of the 60th Annual Meeting of the* Association for Computational Linguistics (Volume 1: Long Papers), pages 7176–7187, Dublin, Ireland.
Association for Computational Linguistics.
Wahab Khan, Ali Daud, Jamal A Nasir, and Tehmina Amjad. 2016. A survey on the state-of-the-art machine learning models in the context of nlp. Kuwait journal of Science, 43(4).
Ronak Kosti, Jose M. Alvarez, Adria Recasens, and Agata Lapedriza. 2017. Emotion recognition in context. In *Proceedings of the IEEE Conference on* Computer Vision and Pattern Recognition (CVPR).
Olga Kovaleva, Alexey Romanov, Anna Rogers, and Anna Rumshisky. 2019. Revealing the dark secrets of BERT. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 4365–4374, Hong Kong, China. Association for Computational Linguistics.
Kimin Lee, Jaehyung Kim, Song Chong, and Jinwoo Shin. 2017. Making stochastic neural networks from deterministic ones.
Seonghyeon Lee, Dongha Lee, Seongbo Jang, and Hwanjo Yu. 2022. Toward interpretable semantic textual similarity via optimal transport-based contrastive sentence learning. In *Proceedings of the 60th Annual* Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 5969–5979, Dublin, Ireland. Association for Computational Linguistics.
Xuhong Li, Haoyi Xiong, Xingjian Li, Xuanyu Wu, Xiao Zhang, Ji Liu, Jiang Bian, and Dejing Dou.
2022a. Interpretable deep learning: interpretation, interpretability, trustworthiness, and beyond. *Knowledge and Information Systems*, 64(12):3197–3234.
Yanzeng Li, Jiangxia Cao, Xin Cong, Zhenyu Zhang, Bowen Yu, Hongsong Zhu, and Tingwen Liu. 2022b.
Enhancing Chinese pre-trained language model via heterogeneous linguistics graph. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers),
pages 1986–1996, Dublin, Ireland. Association for Computational Linguistics.
Yanzeng Li, Bowen Yu, Xue Mengge, and Tingwen Liu.
2020. Enhancing pre-trained Chinese character representation with word-aligned attention. In *Proceedings of the 58th Annual Meeting of the Association* for Computational Linguistics, pages 3442–3448, Online. Association for Computational Linguistics.
Yulin Li, Zhenping Xie, and Fanyu Wang. 2022c. An associative knowledge network model for interpretable semantic representation of noun context. Complex &
Intelligent Systems, 8(6):5265–5285.
Shulin Liu, Tao Yang, Tianchi Yue, Feng Zhang, and Di Wang. 2021. PLOME: Pre-training with misspelled knowledge for Chinese spelling correction.
In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 2991–3000, Online. Association for Computational Linguistics.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019a.
Roberta: A robustly optimized bert pretraining approach. *arXiv preprint arXiv:1907.11692*.
Zhibin Liu, Zheng-Yu Niu, Hua Wu, and Haifeng Wang.
2019b. Knowledge aware conversation generation with explainable reasoning over augmented graphs.
In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 1782–
1792, Hong Kong, China. Association for Computational Linguistics.
Tim Miller. 2019. Explanation in artificial intelligence:
Insights from the social sciences. *Artificial Intelligence*, 267:1–38.
Christoph Molnar, Giuseppe Casalicchio, and Bernd Bischl. 2020. Interpretable machine learning - a brief history, state-of-the-art and challenges. In *ECML*
PKDD 2020 Workshops, pages 417–431, Cham.
Springer International Publishing.
Minh Nguyen, Gia H. Ngo, and Nancy F. Chen. 2021.
Domain-shift conditioning using adaptable filtering via hierarchical embeddings for robust chinese spell check. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 29:2027–2036.
Yasumasa Onoe and Greg Durrett. 2020. Interpretable entity representations through large-scale typing. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 612–624, Online. Association for Computational Linguistics.
Alec Radford, Karthik Narasimhan, Tim Salimans, Ilya Sutskever, et al. 2018. Improving language understanding by generative pre-training.
Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. 2019. Language models are unsupervised multitask learners. *OpenAI*
blog, 1(8):9.
Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. 2016. "why should i trust you?": Explaining the predictions of any classifier. In *Proceedings* of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD '16, page 1135–1144, New York, NY, USA. Association for Computing Machinery.
Anna Rogers, Olga Kovaleva, and Anna Rumshisky.
2020. A primer in BERTology: What we know about how BERT works. *Transactions of the Association* for Computational Linguistics, 8:842–866.
Cynthia Rudin. 2019. Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. *Nature Machine* Intelligence, 1(5):206–215.
Sergei Rybakov, Mohammad Lotfollahi, Fabian Theis, and Alexander Wolf. 2020. Learning interpretable latent autoencoder representations with annotations of feature sets.
Yu Sun, Shuohuan Wang, Yukun Li, Shikun Feng, Xuyi Chen, Han Zhang, Xin Tian, Danxiang Zhu, Hao Tian, and Hua Wu. 2019. Ernie: Enhanced representation through knowledge integration. arXiv preprint arXiv:1904.09223.
Yapeng Tian, Chenxiao Guan, Goodman Justin, Marc Moore, and Chenliang Xu. 2019. Audio-visual interpretable and controllable video captioning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops.
Soumya Tripathy, Juho Kannala, and Esa Rahtu. 2020.
Icface: Interpretable and controllable face reenactment using gans. In Proceedings of the IEEE/CVF
Winter Conference on Applications of Computer Vision (WACV).
Yuen-Hsien Tseng, Lung-Hao Lee, Li-Ping Chang, and Hsin-Hsi Chen. 2015. Introduction to SIGHAN 2015 bake-off for Chinese spelling check. In Proceedings of the Eighth SIGHAN Workshop on Chinese Language Processing, pages 32–37, Beijing, China.
Association for Computational Linguistics.
Ruben S. Verhagen, Mark A. Neerincx, and Myrthe L.
Tielman. 2021. A two-dimensional explanation framework to classify ai as incomprehensible, interpretable, or understandable. In *Explainable and* Transparent AI and Multi-Agent Systems, pages 119–
138, Cham. Springer International Publishing.
Jesse Vig. 2019. A multiscale visualization of attention in the transformer model. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics: System Demonstrations, pages 37–42, Florence, Italy. Association for Computational Linguistics.
Dingmin Wang, Yan Song, Jing Li, Jialong Han, and Haisong Zhang. 2018. A hybrid approach to automatic corpus generation for Chinese spelling check.
In *Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing*, pages 2517–2527, Brussels, Belgium. Association for Computational Linguistics.
Dingmin Wang, Yi Tay, and Li Zhong. 2019.
Confusionset-guided pointer networks for Chinese spelling check. In *Proceedings of the 57th Annual* Meeting of the Association for Computational Linguistics, pages 5780–5785, Florence, Italy. Association for Computational Linguistics.
Fanyu Wang, Huihui Shao, and Zhenping Xie. 2023.
AxBERT: An explainable chinese spelling correction method driven by associative knowledge network.
Yuntao Wang, Zhou Su, Ning Zhang, Rui Xing, Dongxiao Liu, Tom H Luan, and Xuemin Shen. 2022. A
survey on metaverse: Fundamentals, security, and privacy. *IEEE Communications Surveys & Tutorials*.
Liang Xu, Hai Hu, Xuanwei Zhang, Lu Li, Chenjie Cao, Yudong Li, Yechen Xu, Kai Sun, Dian Yu, Cong Yu, Yin Tian, Qianqian Dong, Weitang Liu, Bo Shi, Yiming Cui, Junyi Li, Jun Zeng, Rongzhao Wang, Weijian Xie, Yanting Li, Yina Patterson, Zuoyu Tian, Yiwen Zhang, He Zhou, Shaoweihua Liu, Zhe Zhao, Qipeng Zhao, Cong Yue, Xinrui Zhang, Zhengliang Yang, Kyle Richardson, and Zhenzhong Lan. 2020.
CLUE: A Chinese language understanding evaluation benchmark. In *Proceedings of the 28th International Conference on Computational Linguistics*,
pages 4762–4772, Barcelona, Spain (Online). International Committee on Computational Linguistics.
Shaohua Zhang, Haoran Huang, Jicong Liu, and Hang Li. 2020. Spelling error correction with soft-masked BERT. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 882–890, Online. Association for Computational Linguistics.
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
✓ A1. Did you describe the limitations of your work?
In Section 3.4, we discuss the interpretability of our method from the perspectives of model structure and controllability; In the conclusion, we discuss the limitation of our experiments.
✓ A2. Did you discuss any potential risks of your work?
Conclusion.
✓ A3. Do the abstract and introduction summarize the paper's main claims?
Abstract; Section 1.
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✓ **Did You Use Or Create Scientific Artifacts?**
The code of our work is available and will be presented after acceptance.
✓ B1. Did you cite the creators of artifacts you used?
We use the open-sourced dataset, and cite them in Section 3.2.
✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
The license of our code will be presented together with our code after acceptance.
✗ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
We conduct our experiments on the same task as the existing works, which is completely applicapable.
✗ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
We use the dataset which widely adopted in past works, which is checked before.
✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
The datasets are constructed for Chinese text evaluation, which is claimed in Section 3.2.
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
Table 1.
## C ✓ **Did You Run Computational Experiments?** Section 3.4.
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
Section 3.1; Section 3.2.
The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance.
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Section 3.4.
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
Section 3.4.
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
Section 3.1.
D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
No response.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? No response.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
No response.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
No response.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
No response. |
choi-etal-2023-fixed | Fixed Input Parameterization for Efficient Prompting | https://aclanthology.org/2023.findings-acl.533 | Recent works have shown that attaching prompts to the input is effective at conditioning Language Models (LM) to perform specific tasks. However, prompts are always included in the input text during inference, even when they are fixed, thus incurring substantial computational and memory overhead. Also, there is currently no straightforward method of utilizing prompts that are longer than the maximum input length of the LMs without incurring additional costs during inference. We formally define Fixed Input Parameterization (FIP) problem that focuses on injecting the fixed prompt into the parameters of an LM to be an efficient alternative to attaching fixed prompts to the input. We show that in scenarios with long fixed prompts, FIP can be up to 280 times more efficient in terms of total FLOPs than previous approaches. We further explore methodologies for FIP and show promising results in persona-dependent conversation, semantic parsing, and zero-shot learning with task instructions. Through these explorations, we show that FIP can be a promising direction for conditioning language models, in scenarios with long and fixed prompts. | # Fixed Input Parameterization For Efficient Prompting
Eunbi Choi1 Yongrae Jo1 Joel Jang1 Joonwon Jang2∗ **Minjoon Seo**1 1KAIST AI 2POSTECH
{eunbi,yongrae,joeljang,minjoon}@kaist.ac.kr [email protected]
## Abstract
Recent works have shown that attaching prompts to the input is effective at conditioning Language Models (LM) to perform specific tasks. However, prompts are always included in the input text during inference, even when they are fixed, thus incurring substantial computational and memory overhead. Also, there is currently no straightforward method of utilizing prompts that are longer than the maximum input length of the LMs without incurring additional costs during inference. We formally define Fixed Input Parameterization (FIP) problem that focuses on injecting the fixed prompt into the parameters of an LM to be an efficient alternative to attaching fixed prompts to the input. We show that in scenarios with long fixed prompts, FIP can be up to 280 times more efficient in terms of total FLOPs than previous approaches. We further explore methodologies for FIP and show promising results in persona-dependent conversation, semantic parsing, and zero-shot learning with task instructions. Through these explorations, we show that FIP can be a promising direction for conditioning language models, in scenarios with long and fixed prompts1.
## 1 Introduction
Contemporary works on Language Models
(LMs) (Raffel et al., 2020; Brown et al., 2020; Sanh et al., 2022; Thoppilan et al., 2022) have shown that attaching prompts to the input is effective at conditioning LMs to perform specific tasks. Note that the *prompt* in this work refers to a broader aspect of prompts which includes both the prompts used to induce specific behavior as well as prompts used to provide some contextual knowledge such as persona for dialogue agents. LMs are trained to condition on the given prompts in hopes of generalizing to unseen prompts during inference. Unseen
∗*Work done during internship at KAIST AI.
1Code used for the experiments is available at this link prompts can be a persona for persona-dependent conversation (Zhang et al., 2018; Xu et al., 2022),
database schema for semantic parsing (Yu et al.,
2018; Hazoom et al., 2021), and task instruction for zero-shot learning with task instructions (Wei et al., 2022; Sanh et al., 2022). In these tasks, a new prompt is fixed to the input at every inference.
For instance, in persona-dependent conversation, a persona description is appended to the dialogue history, so that the LM can always be conditioned on the persona. For another example, in semantic parsing, the LM is conditioned on the database schema as well as natural language questions to generalize to a new database. Lastly, zero-shot learning with task instructions involves adding natural language instructions to the inputs for adapting LMs to novel tasks.
However, concatenating prompts to input sequences for prompt-dependent inference has two major limitations. (1) During inference, prompts are always included in the input text and thus incur computational and memory overhead (Liu et al., 2022). (2) It is challenging to fit a long text such as the detailed description of a persona as a prompt into Transformer-based models whose input lengths are often fixed (Tay et al., 2022). For instance, in persona-dependent conversation, the model constantly refers to the persona description along with the dialogue history (Wolf et al., 2019; Roller et al., 2021), as shown in the left side of Figure 1. Moreover, in real-world scenarios, a persona may consist of a long detailed text description of a character or person, not just a few profile sentences.
Naively concatenating long prompts to the input sequences is challenging due to the quadratic cost in time and memory of Transformer-based architectures with regard to the input sequence length.
Other approaches specialized for processing long inputs (Beltagy et al., 2020; Katharopoulos et al.,
2020; Izacard and Grave, 2021), or those that augment the LM with a retrieval mechanism (Han et al.,
8428
![1_image_0.png](1_image_0.png)
2022) may be used but still come with increased overall memory and computations, ultimately leading to a delay in generating responses. This problem becomes critical in situations where the LMs are deployed, and fast inference speed is required.
In this work, we formally define Fixed Input Prarameterization (FIP) problem, where we focus on *injecting* a given fixed prompt into the parameters of an LM to address the two limitations mentioned above. With FIP, LMs can produce promptdependent outputs without the computational overhead of appending fixed prompts at inference time
(the right side of Figure 1), and it also enables the injection of longer prompts in a wholistic way.
More specifically, we first show that Fixed Input Prarameterization (FIP) is much more efficient (up to 280 times) in terms of total FLOPs compared to previous approaches that may be used for handling long prompts such as Fusionin-Decoder (Izacard and Grave, 2021) or Linear Transformer (Katharopoulos et al., 2020). Next, we explore different methodologies as baselines for FIP, including the continued pre-training approach on the prompt as well as a novel distillation approach called Pseudo-INput Generation
(PING) for successful FIP. We apply these FIP
methods to three different tasks with fixed prompts:
persona-dependent conversation, semantic parsing, and zero-shot learning with instructions. We compare the methods against LMs with explicit prompts as the upper bound as well as the LM without both the prompt and FIP as the lower bound.
Experimental results show meaningful improvements with respect to the lower bound, but also exhibit a non-trivial gap with the upper bound. Despite the performance and efficiency trade-off, we still believe that FIP is a direction worth exploring considering its computational benefit, especially when inference costs are critical in real-world applications.
In sum, our main contributions are three folds:
- We formally define the Fixed Input Parameterization (FIP) problem and demonstrate its necessity in terms of computation and memory efficiency, in scenarios with long prompts.
- We explore baseline approaches for FIP, showing that performance can approach the upper bound (unconstrained) performance in some cases.
- We show that the *injection* of long prompts
(e.g., detailed description of persona) can be achieved through FIP and show its efficiency in comparison with previous methods, being up to 280 times more efficient during inference.
## 2 Related Work
Prompting Prompting is an emerging paradigm for modeling LMs, especially for few-shot and zero-shot learning (Radford et al., 2019; Brown et al., 2020; Wei et al., 2022; Sanh et al., 2022).
With the help of appropriate prompts, one can exploit knowledge learned by a pre-trained LM and manipulate the LM's behavior. However, for the in-context learning scenario, processing prompts that involve many training examples for each inference incurs substantial computational and memory overhead (Liu et al., 2022). Given training data, Liu et al. (2022) replace in-context learning with fine-tuning a small set of parameters for tackling the above issue. We tackle the same issue but assume a stricter scenario where there is no training data for the given prompt.
Efficient Transformers One can consider using efficient Transformer-based (Vaswani et al.,
2017) architectures for handling long input sequences (Tay et al., 2022). The main challenge of using a vanilla Transformer architecture is the quadratic cost in time and memory with regard to the input sequence length due to the self-attention operation. There has been a surge of recent works addressing this problem (Dai et al., 2019; Beltagy et al., 2020; Katharopoulos et al., 2020; Zhu et al.,
2021; Guo et al., 2021). They are primarily dedicated to improving either the efficiency of the selfattention mechanism or the general efficiency of the Transformer architecture through sparse models. Also, there has been an attempt to distill a unique prompt to handle long inputs (Askell et al.,
2021). Our Fixed Input Prarameterization (FIP)
approach tackles the efficiency problem of performing prompt-dependent tasks by keeping the input sequences short (without prompts), bounding the time and memory complexity to a constant invariant of the length of the prompt. In contrast to
(Askell et al., 2021), Our work focuses on formally framing the problem into a more general and realistic setting since we aim to inject new prompts with no corresponding training data instead of only one prompt with corresponding training data.
Persona-dependent Conversation Endowing a chabot with a persona (Zhang et al., 2018; Xu et al.,
2022) is challenging, but it enables the chatbot to deliver more personal, specific, consistent, and engaging conversations (Zhang et al., 2018) and gain user trust (Liu et al., 2020; Song et al., 2019; Qian et al., 2018). To achieve this, previous works have attached a persona to the dialog history at every inference time, so that the model can always be conditioned on the persona. However, when given a long persona description or long conversation history as a persona, this approach brings the critical problem of increased overall memory and computations, resulting in delayed response generation.
FIP allows a dialogue agent to generate responses without a persona description as the explicit input once the persona is injected.
Semantic Parsing Semantic parsing is the task of mapping a natural language query into a SQL
query executable on a database. Specifically, crossdomain (cross-database) semantic parsing, where models are trained and tested on different domains
(databases) (Yu et al., 2018) introduces many generalization challenges (Hazoom et al., 2021). Previous works concatenate the natural language query with the serialized database schema as the input to address the problem (Suhr et al., 2020; Deng et al., 2021; Xie et al., 2022). With FIP, the model is adapted to a new database schema in advance, so that it can map natural language queries to SQL
queries on the new database without explicitly referring to the schema during inference.
Zero-shot Learning with Task Instructions Recent works (Sanh et al., 2022; Wei et al., 2022)
have addressed zero-shot generalization to new tasks (Brown et al., 2020; Kim et al., 2021) by multi-task prompted training. With multi-task prompted training, the models learn to use task instructions as prompts to generalize to unseen tasks.
It is demonstrated that this approach improves generalization ability to novel tasks and offers an effective substitute for unsupervised language model pre-training. Through FIP, the LM can be aware of a novel task instruction before performing the task and thus does not require the instruction, which can be lengthy, to make predictions.
## 3 Fixed Input Prarameterization
In this section, we formally define Fixed Input Prarameterization (FIP) as a task and describe the benefits of the formulation. Prompt-dependent generation is a task of generating an output sequence y that is a proper response to the input sequence x and coherent to the prompt z. Utilizing the prompt during inference, the generated sentence is obtained by y = f(z, x) where f denotes an LM such as T5 and GPT-2. Fixed Input Prarameterization (FIP), i.e., parameterization of prompts, allows LMs to perform prompt-dependent generation without using prompts during inference. To achieve this, we need to design a FIP method H
to inject a prompt z into an LM f. The process of FIP can be represented as
$\left[\begin{array}{l}\text{}\mathrm{l}\\ \text{}\mathrm{l}\end{array}\right]$
where fz denotes an LM injected with the prompt.
Then the prompt-dependent output sequence can be obtained by y = fz(x).
FIP can also be applied for long prompts whose length exceeds the LM's input sequence length.
Given a long prompt z, we decompose it into multiple sub-prompts {zi} each of which fits the LM's input length, i.e., z = z1:n = [z1; z2; ...; zn]. Then the FIP process can be executed iteratively, injecting each sub-prompt sequentially while the LM is aware of the previous sub-prompts:
$$\begin{array}{c c c}{{f_{\mathbf{z}_{1}}=H(\mathbf{z}_{1},f)}}&{{}}&{{}}&{{(2)}}\\ {{f_{\mathbf{z}_{1:2}}=H(\mathbf{z}_{2},f_{\mathbf{z}_{1}})}}&{{}}&{{}}&{{(3)}}\\ {{}}&{{}}&{{\ldots}}&{{}}\\ {{f_{\mathbf{z}_{1:n}}=H(\mathbf{z}_{n},f_{\mathbf{z}_{1:n-1}})}}&{{}}&{{}}&{{(4)}}\end{array}$$
The above formulation can be seen as a high-level abstraction of iterative FIP that we aim to approximate. In practice, in order to fully inject z1:n, we repeat (2)-(4) multiple times (i.e., multiple epochs).
Why is Fixed Input Prarameterization necessary? FIP brings advantages in terms of efficiency when applied to prompt-dependent tasks.
The previous approach of appending prompts to the input sequences has the drawback of the model repeatedly referring to the prompt at each inference time. This becomes critical in scenarios requiring long prompts, as Transformer architecture has quadratic computational and memory costs due to the limitation of the self-attention operation. We propose FIP as a solution to this computation bottleneck. Once a prompt is injected into the LM in advance, the LM no longer needs to refer to the prompt during inference. As a result, the model's input length remains independent of the length of prompts and is able to utilize prompts of any length efficiently. We discuss the efficiency gain of FIP in Section 6.1.
Evaluation Metric for FIP FIP can be evaluated by the evaluation metric of the fixed promptdependent task at hand. We also introduce a metric called the FIP score (FIP score) to measure the degree of injection. The metric is agnostic of the target task by comparing the results with that of an LM given actual prompts during inference. Let X*w/ prompt* denote the LM's task score with the prompt as an additional input (upper bound) and X*w/o prompt* denote the LM's task score without the prompt (lower bound). We define **FIP score** as the min-max scaling score of X*F IP* , where X*F IP* represents the score of the LM on the target task after FIP, *i.e.,* FIP score =
max(0, XF IP − Xw/o prompt) / (X*w/ prompt* −
X*w/o prompt*). We limit using FIP only in situations where Xw/ prompt > X*w/o prompt* because there is no reason to inject a prompt if task performance degrades when using the prompt. Even if the range of individual task scores may vary from task to task, FIP score represents the overall injection effectiveness of the FIP methods, agnostic of the individual task score range.
## 4 Methods For Fixed Input Prarameterization
In this section, we explore methods of Fixed Input Prarameterization (FIP) that can address promptdependent tasks without accessing the prompt during inference. To achieve this, the model should be trained to store the prompt in its parameters.
This can be seen as *parameterizing* the prompt into the model instead of feeding the prompt explicitly to the model. This is challenging as the prompt is unseen to the model and has no corresponding training data. In Section 4.1, a baseline method by continued pre-training is introduced, followed by a method for improving the baseline with curriculum learning. Section 4.2 presents a novel distillation-based method called PseudoINput Generation (PING) that learns to generate pseudo-inputs to inject novel prompts.
## 4.1 Continued Pre-Training
We establish the Continued Pre-training method as a straightforward baseline for FIP. This method injects prompts into the parameters of an LM by continuing with the pre-training objective of the LM
on the target prompt. The pre-training objective is a straightforward option as it works in an unsupervised manner. In our experiments, we leverage the pre-trained T5 model (Raffel et al., 2020) and thus use the masked language modeling objective which is the pre-training objective of T5. Following Raffel et al. (2020), we randomly replace 15% of a given prompt with special mask tokens; then, the model is trained to predict the sequence of masked tokens. In this process, the model learns about the prompt the same way the model learns knowledge during the pre-training stage.
Curriculum learning We further investigate the baseline method by leveraging *curriculum learning* (Bengio et al., 2009; Campos, 2021) during
![4_image_0.png](4_image_0.png)
continued pre-training. We set the mask ratio as the difficulty criteria (Wettig et al., 2022) and gradually increase the ratio throughout the Continued Pre-training. As the mask ratio increases, the model should predict more masked tokens given less context. With curriculum learning, we expect the LM
to gradually better adapt to the prompt, improving its prompt-dependent task performance. Throughout the experiments, we increase the mask ratio linearly from 15% to 30%, 50%, and 70% and report the best score.
## 4.2 Pseudo-Input Generation (Ping)
The purpose of FIP is to inject a prompt into the parameters of an LM which can also be done indirectly through distillation. In this subsection, we propose a novel distillation-based method called Pseudo-INput Generation (PING) that distills a novel prompt into a student LM that does not have access to the prompt through a teacher LM that does have access to the prompt. In order for distillation, pseudo-inputs are needed since we assume a scenario where the prompt to be injected has never been seen during training and does not have separate training data. An overview of PING is illustrated in Figure 2. As shown in the figure, during Phase 1, an input generator is trained with the task-specific training data. When given a prompt of the task as the input, the generator is expected to generate the task inputs that correspond to the prompt. During Phase 2, the input generator is frozen and is used to generate pseudo-inputs from the unseen prompt, which are then given to the teacher together with the prompt, while only the pseudo-inputs are given to the student. This way, the student learns to follow the teacher and is able to learn about the prompt indirectly.
## 5 Experimental Setup
In this section, we explain the experimental setups in detail. Experiments are performed with the T5base (Raffel et al., 2020) (220M parameters) model unless noted otherwise.
## 5.1 Prompt-Dependent Tasks
In order to evaluate the effectiveness of Fixed Input Prarameterization (FIP) methods, we select three prompt-dependent tasks—persona-dependent conversation, semantic parsing, and zero-shot learning with task instructions; all these tasks require fixed prompts during inference. Fixed prompts come in the form of a persona in persona-dependent conversation, database schema in semantic parsing, and task instruction in zero-shot learning with task instructions. As described in the introduction and Section 3, when FIP is applied for these tasks, there would be apparent benefits in real-world scenarios.
With these tasks, not only the performance of the baseline FIP methods is evaluated, but also the significance of FIP is emphasized by comparison with the (unconstrained) previous approaches that concatenate prompts to the input.
## 5.2 Datasets
Following datasets of prompt-dependent tasks mentioned in Section 5.1 are utilized to evaluate Fixed Input Prarameterization (FIP).
PERSONA-CHAT / MSC PERSONA-CHAT
(Zhang et al., 2018) is a crowd-sourced dataset intended for training agents to perform engaging and personal chit-chat by comprising the dialogues to be grounded on specific personas. For each dialogue, two speakers have a 6-8 turn conversation conditioned on a given persona. Based on PERSONA-CHAT, Multi Session Chat (MSC) (Xu et al., 2022) is a dialogue dataset collected to be comprised of long-term conversations each consisting of 5 continuing, but distinct chat sessions. In this work, we consider both the persona and dialogue history of the first two sessions as a prompt in MSC to incorporate long-term conversational context. Performance on both tasks are measured via perplexity (PPL). We randomly select 100 dialogues from the validation sets respectively as the persona-dependent conversation benchmark for testing our method. The persona descriptions are 60 tokens long on average in PERSONA-CHAT
and the combined prompts average 811 tokens in MSC.
Spider Spider (Yu et al., 2018) is a large crossdomain semantic parsing and text-to-SQL dataset for developing natural language interfaces to crossdomain databases. Models must generalize to new database schemas as well as new queries to perform well on it. Evaluation metrics include Exact Matching (EM) and Execution Accuracy (EA). We utilize the development set containing 20 databases with about 50 questions per database as a semantic parsing benchmark for FIP. The database schemas range in length from 55 to 430 token lengths.
WSC / RTE / COPA For the task of zero-shot task generalization, Sanh et al. (2022) have trained the LM on a diverse set of tasks and evaluated on a held-out group of tasks to evaluate generalization performance. We choose coreference resolution, natural language inference, and sentence completion tasks, three out of their four held-out tasks, and test FIP on WSC, RTE, and COPA datasets (Wang et al., 2019). We utilize task instructions (prompts) provided from Sanh et al. (2022) and report average task scores of using them. The task instructions are comprised of about 20-30 tokens.
## 5.3 Implementation Details
For the Continued Pre-training method (Section 4.1), we use the Adam optimizer (Kingma and Ba, 2015) with a constant learning rate 1e-4 and batch size 8. We perform 5-20 steps of injection. For PING (Section 4.2), input generators are trained on each tasks for 1-2 epochs. We use KLdivergence for distilling the last layer's output of the decoder and perform 10-100 steps of injection.
For T5-base, we use a single 16GB T4 GPU and for the larger models we use 4 32GB V100 GPUs.
In order for injection and comparison with upper-bound (W/ PROMPT) and lower-bound (W/O
PROMPT) performance, we first need two different versions of the LM adapted to the given task. For the task of persona-dependent conversation and semantic parsing, W/ PROMPT model is fine-tuned together with prompts since prompts are explicitly used during inference, while W/O PROMPT
model is fine-tuned on the task without being given the prompt. We perform FIP on the W/O
PROMPT model since we assume having no access to prompts during inference.
For the zero-shot learning, we modify the prompts developed by Sanh et al. (2022) in the form of a fixed prompt. We replace the placeholders on their prompts with fixed words, then append the actual content to the prompt in a keyvalue format. For example, if the original is If {Premise} is true, is it also true that {Hypothesis}?, then the converted prompt is If "Premise" is true, is it also true that "Hypothesis"? Premise:{Premise}
Hypothesis:{Hypothesis}. This ensures that the prefix is fixed, which can be injected with FIP. We use the T0-3B LM checkpoint for the zero-shot generalization.
## 6 Experimental Results
In this section, we first explore the inference efficiency of models performing prompt-dependent tasks and show that Fixed Input Prarameterization
(FIP) leads to a meaningful gain in computational efficiency. Then the baseline and proposed methods are tested and compared on datasets discussed in Section 5.2. The results indicate that the PseudoINput Generation (PING) method achieves the best performance among FIP methods, sometimes even outperforming the upper bound, which uses explicit prompts during inference. In Section 6.3, we provide a concrete instance of injecting a real persona description into a conversational model, demonstrating the feasibility of long prompt injection.
## 6.1 Inference Efficiency
The comparison of inference efficiency of a model with FIP, a baseline model that naively concate-
| Model | Prompt | | | |
|-------------|------------|--------------|-------------|-------------|
| T5 W/ FIP | * | 0.7 | 0.58 | |
| T5 | 512 | 7.2 | (×10.3) | 1.09 (×1.9) |
| 512 × 4 | OOM | - | | |
| T5 + FID | 512 | 7.2 | (×10.3) | 1.09 (×1.9) |
| 512 × 28 | OOM (×280) | - | | |
| LINEAR- | 512 | 9.5 | (×13.8) | 1.58 (×2.7) |
| TRANSFORMER | 512 × 2 | 16.1 (×23.2) | 2.62 (×4.5) | |
| 512 × 28 | OOM (×280) | - | | |
nates the prompt to the input, Fusion-in-Decoder
(FiD) (Izacard and Grave, 2021), and Linear Transformer (Katharopoulos et al., 2020) are shown in Table 1. We consider FiD as one of the options for processing long inputs because it processes long input sequences by encoding chunks of input sequences separately, reducing the quadratic complexity to linear. Linear Transformer also reduces the complexity to linear by linearizing the attention mechanism. We measure FLOPs and forward propagation latency via DeepSpeed Flops profiler 2 using a single 16GB T4 GPU.
As shown in Table 1, T5 W/ FIP is much more efficient than other models, especially as we assume a longer prompt length. This is because the efficiency of FIP remains the same independent of the prompt length while the costs of others increase linearly. Specifically, when the prompt length is 8 times the model's max input sequence length, one can achieve 80× computational efficiency in terms of FLOPs by applying FIP. Furthermore, in a scenario where the prompt length is 28× the model's max input sequence length (shown in Section 6.3 when trying to utilize a long persona that is over 13,000 token length long), previous approaches show an out-of-memory (OOM) issue using the 16GB T4 GPU, which means it is impossible to utilize such long prompts. FIP is estimated to be 280× more efficient in terms of total FLOPs if the
![6_image_0.png](6_image_0.png)
## 6.2 Task Performance
We report the task performance obtained by applying different FIP methods on three promptdependent tasks in Table 2. FIP scores are also obtained as introduced in Section 3. For all of W/ FIP methods that applied Fixed Input Prarameterization, we observe an overall increase in performance compared to W/O PROMPT, indicating successful injection of prompts into the parameters of the model through FIP methods. The standard deviations of perplexity with 5 random seeds are lower than 0.01 and 0.1 for PERSONA-CHAT and MSC, respectively, which demonstrates the statistical significance of the results. Furthermore, we find that FIP performance improves steadily with model size in PERSONA-CHAT, demonstrating that larger models benefit more from FIP as shown in Figure 3 in terms of FIP score. The task scores are reported in Appendix A.
As shown in Table 2, while CP (Continued Pretraining in Section 4.1) gives modest performance improvement over W/O PROMPT, the results of CP
W/ CURR show that leveraging curriculum learning during continued pre-training is effective in some cases. CP W/ CURR performs better compared to CP in PERSONA-CHAT, MSC, Spider, and RTE; it even outperforms W/ PROMPT in RTE. On the other hand, PING significantly improves performance from CP in PERSONA-CHAT, MSC, Spider, and WSC, performing almost on par with W/ PROMPT
in WSC. This sheds light on the possibility that FIP may be able to reach the upper bound performance.
However, the results show at the same time that there is still a gap between the performance of FIP
methods and the upper bound W/ PROMPT that
| Dialogue | Semantic Parsing | Task Generalization | | | | | | | | | | |
|----------------------------------------|--------------------------------------------------------|-----------------------|-------|-------|-----------|-------|------|-------|------|-------|------|-------|
| PERSONA-CHAT | MSC | Spider | WSC | RTE | COPA | | | | | | | |
| PPL (↓) FIP Score PPL (↓) FIP Score EM | EA FIP Score ACC FIP Score ACC FIP Score ACC FIP Score | | | | | | | | | | | |
| W/ PROMPT | 8.40 | - | 16.42 | - | 57.9 61.3 | - | 63.6 | - | 67.9 | - | 67.3 | - |
| W/O PROMPT W/O FIP | 10.72 | - | 23.96 | - | 14.5 15.1 | - | 44.0 | - | 64.2 | - | 60.0 | - |
| W/ FIP CP | 10.53 | 0.081 | 18.95 | 0.664 | 16.9 17.5 | 0.054 | 54.5 | 0.536 | 67.7 | 0.946 | 64.8 | 0.658 |
| CP W/ CURR | 10.28 | 0.191 | 18.82 | 0.681 | 17.7 18.4 | 0.072 | 50.8 | 0.347 | 68.2 | 1.08 | 64.1 | 0.562 |
| PING | 9.45 | 0.549 | 18.44 | 0.731 | 36.6 41.7 | 0.507 | 63.3 | 0.985 | 64.5 | 0.081 | 62.0 | 0.274 |
## Needs To Be Bridged In Future Work.
We find that the performance of different methods depends on the complexity of the input sequence structure. We believe that PING achieves a good performance in PERSONA-CHAT, MSC,
Spider, and WSC because those datasets have relatively simple input sequences, such as a short utterance and simple query. In datasets with many components or multiple complex sentences (e.g., COPA
and RTE), the low quality of generated pseudoinputs degrades the performance of PING. On the other hand, CP and CP W/ CURR perform better in datasets with complex structure. These findings encourage the community to explore a more integral FIP method that can cover different datasets.
## 6.3 Long Prompts Injection
To demonstrate the effectiveness of FIP on injection of long prompts into LMs, we show how the method works with a real-world example. We pick a Wikipedia page (Elon Musk), considering it as a long persona description, and inject the entire article (over 13,000 tokens) into an LM trained with PERSONA-CHAT. Here, we use T5-large as a base model and apply PING. Figure 4 shows an actual instance of interactions with the LM that underwent FIP through PING. The responses show the LM successfully reflecting the description of the person on the Wikipedia page without having the description appended to the input. Moreover, the inference of FIP is 280× more computationally efficient in terms of FLOPs than the baseline, as shown in Section 6.1.
![7_image_0.png](7_image_0.png)
: the people at Mars. it is one of the world's best places for man
: what's your plan?
?
## 7 Conclusion
In this paper, we formally define Fixed Input Prarameterization (FIP) problem that focuses on injecting the prompt into the parameters of an LM, as an efficient alternative to attaching fixed prompts to the inputs for prompt-dependent tasks. Through experiments, we show that FIP is much more computationally efficient (up to 280 times) in terms of total FLOPs for handling long prompts compared to the previous alternatives. We further explore baseline methodologies for FIP and find that PseudoINput Generation (PING), a distillation-based approach, shows promising results in personadependent conversation, semantic parsing, and zero-shot learning with task instructions. Through the explorations, we show that FIP can be a promising direction for conditioning language models efficiently, in scenarios with long and fixed prompts.
Limitations While Fixed Input Prarameterization (FIP) enables performing prompt-dependent tasks efficiently, there are limitations that need to be addressed in future work. In particular, the current FIP methods cause task performance degradation.
Moreover, the computational cost needed for the injection of prompts and the storage required to store the parameters of every injected model have not been extensively considered. For example, when considering *previous conversation history* as the prompt to be injected in a long-term conversation setting, fast injection may also be a requirement for real-world application. Updating or adding a relatively small number of parameters (Hu et al.,
2021; Wang et al., 2021) may be a potential avenue for addressing the problems.
## Acknowledgements
We would like to thank Hyunji Lee, Sohee Yang, Seonghyeon Ye, and Soyoung Yoon for helpful discussions. This work was partly supported by KT
grant (2021, A study on a conversational language model that uses long external text as a prompt, 80%) and Institute of Information & communications Technology Planning & Evaluation (IITP)
grant funded by the Korea government (MSIT)
(No.2022-0-00113, Developing a Sustainable Collaborative Multi-modal Lifelong Learning Framework, 20%).
## References
Amanda Askell, Yushi Bai, Anna Chen, Dawn Drain, Deep Ganguli, T. J. Henighan, Andy Jones, Nicholas Joseph, Benjamin Mann, Nova DasSarma, Nelson Elhage, Zac Hatfield-Dodds, Danny Hernandez, John Kernion, Kamal Ndousse, Catherine Olsson, Dario Amodei, Tom B. Brown, Jack Clark, Sam McCandlish, Christopher Olah, and Jared Kaplan. 2021. A
general language assistant as a laboratory for alignment. *ArXiv*, abs/2112.00861.
Iz Beltagy, Matthew E. Peters, and Arman Cohan. 2020.
Longformer: The long-document transformer. *ArXiv*, abs/2004.05150.
Yoshua Bengio, Jérôme Louradour, Ronan Collobert, and Jason Weston. 2009. Curriculum learning. In ICML.
Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, T. J. Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeff Wu, Clemens
Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020.
Language models are few-shot learners. In *NeurIPS*.
Daniel Fernando Campos. 2021. Curriculum learning for language modeling. *ArXiv*, abs/2108.02170.
Zihang Dai, Zhilin Yang, Yiming Yang, Jaime G. Carbonell, Quoc V. Le, and Ruslan Salakhutdinov. 2019.
Transformer-xl: Attentive language models beyond a fixed-length context. In ACL.
Xiang Deng, Ahmed Hassan Awadallah, Christopher Meek, Oleksandr Polozov, Huan Sun, and Matthew Richardson. 2021. Structure-grounded pretraining for text-to-sql. In *NAACL*.
Mandy Guo, Joshua Ainslie, David C. Uthus, Santiago Ontañón, Jianmo Ni, Yun-Hsuan Sung, and Yinfei Yang. 2021. Longt5: Efficient text-to-text transformer for long sequences. *ArXiv*, abs/2112.07916.
Seungju Han, Beomsu Kim, Jin Yong Yoo, Seokjun Seo, Sangbum Kim, Enkhbayar Erdenee, and Buru Chang.
2022. Meet your favorite character: Open-domain chatbot mimicking fictional characters with only a few utterances. In *NAACL*.
Moshe Hazoom, Vibhor Malik, and Ben Bogin. 2021.
Text-to-sql in the wild: A naturally-occurring dataset based on stack exchange data. *ArXiv*,
abs/2106.05006.
Edward J. Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, and Weizhu Chen. 2021. Lora: Low-rank adaptation of large language models. *ArXiv*, abs/2106.09685.
Gautier Izacard and Edouard Grave. 2021. Leveraging passage retrieval with generative models for open domain question answering. In *EACL*.
Angelos Katharopoulos, Apoorv Vyas, Nikolaos Pappas, and Franccois Fleuret. 2020. Transformers are rnns: Fast autoregressive transformers with linear attention. In *ICML*.
Boseop Kim, Hyoungseok Kim, Sang-Woo Lee, Gichang Lee, Donghyun Kwak, Dong Hyeon Jeon, Sunghyun Park, Sung ju Kim, Seonhoon Kim, Dong Hyung Seo, Heungsub Lee, Minyoung Jeong, Sungjae Lee, Minsub Kim, SukHyun Ko, Seokhun Kim, Taeyong Park, Jinuk Kim, Soyoung Kang, NaHyeon Ryu, Kang Min Yoo, Minsuk Chang, Soobin Suh, Sookyo In, Jinseong Park, Kyungduk Kim, Hiun Kim, Jisu Jeong, Yong Goo Yeo, Dong hyun Ham, Do-Hyoung Park, Min Young Lee, Jaewoo Kang, Inho Kang, Jung-Woo Ha, Woo Chul Park, and Nako Sung. 2021. What changes can large-scale language models bring? intensive study on hyperclova: Billions-scale korean generative pretrained transformers. In *EMNLP*.
Diederik P. Kingma and Jimmy Ba. 2015. Adam:
A method for stochastic optimization. *CoRR*,
abs/1412.6980.
Haokun Liu, Derek Tam, Mohammed Muqeeth, Jay Mohta, Tenghao Huang, Mohit Bansal, and Colin Raffel.
2022. Few-shot parameter-efficient fine-tuning is better and cheaper than in-context learning. *ArXiv*,
abs/2205.05638.
Qian Liu, Yihong Chen, B. Chen, Jian-Guang Lou, Zixuan Chen, Bin Zhou, and Dongmei Zhang. 2020. You impress me: Dialogue generation via mutual persona perception. In ACL.
Qiao Qian, Minlie Huang, Haizhou Zhao, Jingfang Xu, and Xiaoyan Zhu. 2018. Assigning personality/profile to a chatting machine for coherent conversation generation. In *IJCAI*.
Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners.
Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. *Journal of Machine Learning Research*, 21(140):1–67.
Stephen Roller, Emily Dinan, Naman Goyal, Da Ju, Mary Williamson, Yinhan Liu, Jing Xu, Myle Ott, Kurt Shuster, Eric Michael Smith, Y.-Lan Boureau, and Jason Weston. 2021. Recipes for building an open-domain chatbot. In *EACL*.
Victor Sanh, Albert Webson, Colin Raffel, Stephen H.
Bach, Lintang A. Sutawika, Zaid Alyafeai, Antoine Chaffin, Arnaud Stiegler, Teven Le Scao, Arun Raja, Manan Dey, M SAIFUL BARI, Canwen Xu, Urmish Thakker, Shanya Sharma, Eliza Szczechla, Taewoon Kim, Gunjan Chhablani, Nihal V. Nayak, Debajyoti Datta, Jonathan Chang, Mike Tian-Jian Jiang, Han Wang, Matteo Manica, Sheng Shen, Zheng Xin Yong, Harshit Pandey, Rachel Bawden, Thomas Wang, Trishala Neeraj, Jos Rozen, Abheesht Sharma, Andrea Santilli, Thibault Févry, Jason Alan Fries, Ryan Teehan, Stella Rose Biderman, Leo Gao, T. G. Owe Bers, Thomas Wolf, and Alexander M. Rush. 2022. Multitask prompted training enables zero-shot task generalization. In *ICLR*.
Haoyu Song, Weinan Zhang, Yiming Cui, Dong Wang, and Ting Liu. 2019. Exploiting persona information for diverse generation of conversational responses.
In *IJCAI*.
Alane Suhr, Ming-Wei Chang, Peter Shaw, and Kenton Lee. 2020. Exploring unexplored generalization challenges for cross-database semantic parsing. In ACL.
Yi Tay, Mostafa Dehghani, Dara Bahri, and Donald Metzler. 2022. Efficient transformers: A survey. ACM
Computing Surveys (CSUR).
Romal Thoppilan, Daniel De Freitas, Jamie Hall, Noam M. Shazeer, Apoorv Kulshreshtha, HengTze Cheng, Alicia Jin, Taylor Bos, Leslie Baker, Yu Du, Yaguang Li, Hongrae Lee, Huaixiu Zheng, Amin Ghafouri, Marcelo Menegali, Yanping Huang, Maxim Krikun, Dmitry Lepikhin, James Qin, Dehao Chen, Yuanzhong Xu, Zhifeng Chen, Adam Roberts, Maarten Bosma, Yanqi Zhou, Chung-Ching Chang, I. A. Krivokon, Willard James Rusch, Marc Pickett, Kathleen S. Meier-Hellstern, Meredith Ringel Morris, Tulsee Doshi, Renelito Delos Santos, Toju Duke, Johnny Hartz Søraker, Ben Zevenbergen, Vinodkumar Prabhakaran, Mark Diaz, Ben Hutchinson, Kristen Olson, Alejandra Molina, Erin HoffmanJohn, Josh Lee, Lora Aroyo, Ravindran Rajakumar, Alena Butryna, Matthew Lamm, V. O. Kuzmina, Joseph Fenton, Aaron Cohen, Rachel Bernstein, Ray Kurzweil, Blaise Aguera-Arcas, Claire Cui, Marian Croak, Ed Chi, and Quoc Le. 2022. Lamda:
Language models for dialog applications. *ArXiv*,
abs/2201.08239.
Ashish Vaswani, Noam M. Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In *NeurIPS*.
Alex Wang, Yada Pruksachatkun, Nikita Nangia, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R. Bowman. 2019. Superglue: A stickier benchmark for general-purpose language understanding systems. In *NeurIPS*.
Ruize Wang, Duyu Tang, Nan Duan, Zhongyu Wei, Xuanjing Huang, Jianshu Ji, Guihong Cao, Daxin Jiang, and Ming Zhou. 2021. K-adapter: Infusing knowledge into pre-trained models with adapters. In Findings of ACL.
Jason Wei, Maarten Bosma, Vincent Zhao, Kelvin Guu, Adams Wei Yu, Brian Lester, Nan Du, Andrew M.
Dai, and Quoc V. Le. 2022. Finetuned language models are zero-shot learners. In *ICLR*.
Alexander Wettig, Tianyu Gao, Zexuan Zhong, and Danqi Chen. 2022. Should you mask 15% in masked language modeling? *ArXiv*, abs/2202.08005.
Thomas Wolf, Victor Sanh, Julien Chaumond, and Clement Delangue. 2019. Transfertransfo: A transfer learning approach for neural network based conversational agents. *ArXiv*, abs/1901.08149.
Tianbao Xie, Chen Henry Wu, Peng Shi, Ruiqi Zhong, Torsten Scholak, Michihiro Yasunaga, Chien-Sheng Wu, Ming Zhong, Pengcheng Yin, Sida I. Wang, Victor Zhong, Bailin Wang, Chengzu Li, Connor Boyle, Ansong Ni, Ziyu Yao, Dragomir Radev, Caiming Xiong, Lingpeng Kong, Rui Zhang, Noah A. Smith, Luke Zettlemoyer, and Tao Yu. 2022. Unifiedskg:
Unifying and multi-tasking structured knowledge grounding with text-to-text language models. *ArXiv*, abs/2201.05966.
Jing Xu, Arthur D. Szlam, and Jason Weston. 2022.
Beyond goldfish memory: Long-term open-domain conversation. In ACL.
Tao Yu, Rui Zhang, Kai-Chou Yang, Michihiro Yasunaga, Dongxu Wang, Zifan Li, James Ma, Irene Z
Li, Qingning Yao, Shanelle Roman, Zilin Zhang, and Dragomir R. Radev. 2018. Spider: A large-scale human-labeled dataset for complex and cross-domain semantic parsing and text-to-sql task. In *EMNLP*.
Saizheng Zhang, Emily Dinan, Jack Urbanek, Arthur D.
Szlam, Douwe Kiela, and Jason Weston. 2018. Personalizing dialogue agents: I have a dog, do you have pets too? In ACL.
Chen Zhu, Wei Ping, Chaowei Xiao, Mohammad Shoeybi, Tom Goldstein, Anima Anandkumar, and Bryan Catanzaro. 2021. Long-short transformer:
Efficient transformers for language and vision. In NeurIPS.
## A Appendix
Table A1: Prompt Injection performance in PERSONACHAT as model size increases. There is a consistent trend of improved injection performance across PI methods as the model scales, and CP tends to increase more rapidly.
| PERSONA-CHAT | | | | | | |
|----------------------------------------------------|-------|-------|------|-------|------|-------|
| 220M | 770M | 3B | | | | |
| PPL (↓) PI Score PPL (↓) PI Score PPL (↓) PI Score | | | | | | |
| W/ PROMPT | 8.40 | - | 7.42 | - | 6.66 | - |
| W/O PROMPT W/O PI | 10.72 | - | 9.54 | - | 8.82 | - |
| W/ PI CP | 10.53 | 0.081 | 9.3 | 0.113 | 7.75 | 0.495 |
| PING | 9.45 | 0.549 | 8.37 | 0.552 | 7.56 | 0.583 |
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
✓ A1. Did you describe the limitations of your work?
After Section 7 Conclusion
✗ A2. Did you discuss any potential risks of your work?
Our paper aims to improve the model's efficiency, without changing the model's output much.
✓ A3. Do the abstract and introduction summarize the paper's main claims?
Abstract and Section 1 Introduction
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✓ **Did You Use Or Create Scientific Artifacts?**
Section 5.2 Datasets and Section 5.3 Implementation Details
✓ B1. Did you cite the creators of artifacts you used?
Section 5.2 Datasets and Section 5.3 Implementation Details
✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
The license of the code used in the paper will be discussed on the GitHub repository (to be released).
✗ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
All data and models used in the paper are available for research purposes
✗ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
All data used in the paper do not have any offensive content or identifiers.
✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
Section 5.2 Datasets
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
Section 5.2 Datasets
## C ✓ **Did You Run Computational Experiments?** Section 5.3 Implementation Details
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
Section 5.3 Implementation Details The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance.
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Section 5.3 Implementation Details
✗ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
Our experiment results are from the average of multiple examples, with single runs.
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
Section 5.3 Implementation Details and Section 6.1 Inference Efficiency
## D ✗ **Did You Use Human Annotators (E.G., Crowdworkers) Or Research With Human Participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
No response.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? No response.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
No response.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
No response.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
No response. |
garg-etal-2023-data | Data Augmentation for Low-Resource Keyphrase Generation | https://aclanthology.org/2023.findings-acl.534 | Keyphrase generation is the task of summarizing the contents of any given article into a few salient phrases (or keyphrases). Existing works for the task mostly rely on large-scale annotated datasets, which are not easy to acquire. Very few works address the problem of keyphrase generation in low-resource settings, but they still rely on a lot of additional unlabeled data for pretraining and on automatic methods for pseudo-annotations. In this paper, we present data augmentation strategies specifically to address keyphrase generation in purely resource-constrained domains. We design techniques that use the full text of the articles to improve both present and absent keyphrase generation. We test our approach comprehensively on three datasets and show that the data augmentation strategies consistently improve the state-of-the-art performance. We release our source code at \url{https://github.com/kgarg8/kpgen-lowres-data-aug}. | # Data Augmentation For Low-Resource Keyphrase Generation
Krishna Garg Jishnu Ray Chowdhury Cornelia Caragea Computer Science University of Illinois Chicago [email protected] [email protected] [email protected]
## Abstract
Keyphrase generation is the task of summarizing the contents of any given article into a few salient phrases (or keyphrases). Existing works for the task mostly rely on large-scale annotated datasets, which are not easy to acquire. Very few works address the problem of keyphrase generation in low-resource settings, but they still rely on a lot of additional unlabeled data for pretraining and on automatic methods for pseudo-annotations. In this paper, we present data augmentation strategies specifically to address keyphrase generation in purely resource-constrained domains. We design techniques that use the full text of the articles to improve both present and absent keyphrase generation. We test our approach comprehensively on three datasets and show that the data augmentation strategies consistently improve the state-of-the-art performance. We release our source code at https://github. com/kgarg8/kpgen-lowres-data-aug.
## 1 Introduction
Keyphrase generation (KG) helps in document understanding by summarizing the document in the form of a few salient phrases (or keyphrases).
These keyphrases may or may not appear verbatim in the original text and accordingly, they are referred to as either present or *absent* keyphrases.
The task has useful applications to many downstream tasks, e.g., document clustering (Hammouda et al., 2005), matching reviewers to appropriate papers in the conference portals (Augenstein et al., 2017), recommendation systems (Augenstein et al., 2017), text classification (Wilson et al., 2005; Hulth and Megyesi, 2006; Berend, 2011), index construction (Ritchie et al., 2006) and sentiment analysis and opinion mining (Wilson et al., 2005; Berend, 2011).
Prior works for keyphrase generation have largely focused on using large-scale annotated datasets for various domains - computer science
(KP20k), news (KPTimes, JPTimes), webpages
(OpenKP), etc. However, such large annotation datasets are not available in all domains (e.g.,
medicine, law, finance), either due to paucity in terms of available data or lack of domain expertise among the annotators or even the high annotation costs. This necessitates the focus on the low-resource domains.
The traditional ways to address low-resource keyphrase generation have been centered around using semi-supervised or unsupervised learning techniques (Ye and Wang, 2018; Wu et al., 2022; Ray Chowdhury et al., 2022). For these methods, a lot of unlabeled data is necessary and needs to be curated for model training. The unlabeled data is further annotated automatically using keyphrase extraction methods and is used for pretraining the model or is used in the auxiliary task for multitasking. There are two limitations to these methods:
(1) they have to still depend on additional largescale unlabeled data, which may not be available always; and (2) the automatic annotation may not be accurate enough, especially when the off-theshelf keyphrase generation or extraction models are pretrained on a different domain.
In this paper, we develop data augmentation strategies for purely low-resource domains, which do not require acquiring unlabeled data for pretraining or automatic annotation approaches for unlabeled data (which may introduce errors). Inspired by Garg et al. (2022) who showed the benefits of using information beyond the title and abstract for keyphrase generation, we leverage the full text of the documents (which is often ignored by prior works) and present ways for augmenting the text for improving both present and absent keyphrase generation performance.
Data augmentation in NLP has recently become a promising line of research to improve the state-ofthe-art performance (Wei and Zou, 2019; Fadaee et al., 2017; Li and Caragea, 2021; Sun et al.,
8442
| Methods | Excerpts from different data augmentation methods | | |
|-------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|----|------------------------------------------------------------------------------------------|
| TITLE || ABSTRACT | casesian : | a | knowledge-based system using statistical and experiential perspectives for improving the |
| knowledge sharing in the medical prescription process [SEP] objectives : knowledge sharing is crucial for better patient care in the healthcare industry | | | |
| AUG_TA_SR | casesian : a knowledge based system using statistical and experiential perspectives for better the knowledge sharing in the medical examination prescription [SEP] objectives : knowledge sharing is crucial for advantageously patient role care in the healthcare industry | | |
| AUG_TA_BT | cassian : a knowledge-based system that uses statistical and experiential perspectives to improve the sharing of knowledge in the medical prescription process [SEP] objectives : knowledge sharing is essential to improve patient care in the health sector | | |
| AUG_TA_KPD | casesian : a [MASK] using statistical and experiential perspectives for improving the [MASK] in the [MASK] process [SEP] objectives : [MASK] is crucial for better patient care in the healthcare industry | | |
| AUG_TA_KPSR | casesian : a cognition based system using statistical and experiential perspectives for improving the noesis sharing in the checkup prescription process [SEP] objectives : noesis sharing is crucial for better patient care in the healthcare industry | | |
| AUG_BODY | numerous methods have been investigated for improving the knowledge sharing process in medical prescription [SEP] case-based reasoning is one of the most prevalent knowledge extraction methods | | |
| GOLD KEYPHRASES | case-based reasoning , medical prescription , knowledge-based system , knowledge sharing , bayesian theorem | | |
2020; Xie et al., 2020; Feng et al., 2020; Park and Caragea, 2022; Yadav and Caragea, 2022). An ideal data augmentation technique is desirous to have the following characteristics: (1) to introduce diversity in training samples but neither too much
(otherwise, training samples fail to represent the given domain) nor too less (otherwise, it leads to overfitting); (2) to be easy-to-implement; and (3)
to improve model performance.
Towards this end, we design and experiment with four data augmentation techniques (the first two being specifically designed for keyphrase generation) that remake the body1 of a given article and then augment it to the training data samples containing Title and Abstract (T || A): (1) AUGBODY-KPD where the new training samples contain masked body (i.e., we drop present keyphrases with a certain probability from the body), (2) AUGBODY-KPSR where all the instances of present keyphrases (in contrast to random tokens as in the standard synonym replacement) in the body are replaced with their synonyms, (3) AUG-BODY-BT
where the body text is translated to an intermediate language and then back to the original language, (4)
AUG-BODY-SR where the standard synonym replacement is applied to random tokens of the body.
In addition to augmentation with the body, we also provide methods for augmentation using T || A. We depict the representative augmentation strategies in Table 1.
The intuition is that while augmenting the text if we further drop some of the present keyphrases, similar to Masked Language Modeling (Devlin et al., 2019), that makes the task harder and the model is forced to learn to generate the keyphrases.
Introducing synonyms and back-translation further increases the diversity of the samples in a much controlled way. Recently, several full-text datasets have been proposed for the KG task, e.g., FullTextKP (Garg et al., 2022), LDKP3K
(Mahata et al., 2022), and LDKP10K (Mahata et al., 2022). We use two of these datasets, i.e., LDKP3K and LDKP10K, that contain scientific papers, along with a third dataset KPTimes (Gallina et al., 2019) which mimics full-text keyphrase generation datasets but from a different domain, i.e., news. Through extensive experiments on the three datasets, we observe that although it is hard to improve the present keyphrase generation performance without sacrificing the absent keyphrase generation performance, our proposed augmentation approaches with the body consistently improve both. Moreover, the augmentation methods with body steadily surpass the performance of data augmentation methods that use only Title and Abstract.
In summary, the main contribution of the paper is to demonstrate data augmentation strategies for the keyphrase generation task particularly for purely low-resource domains (which have been under-explored). We present simple yet effective data augmentation methods using the full text of the articles and demonstrate large improvements over the state-of-the-art methods.
## 2 Related Work
Meng et al. (2017) first proposed to solve Keyphrase Generation as a sequence-to-sequence task using deep learning (encoder-decoder) methods. They proposed CopyRNN which uses the copy mechanism (Gu et al., 2016) with the GRUbased encoder-decoder model. This was further extended by Chen et al. (2018) to incorporate correlations between the predicted keyphrases (CorrRNN) and by Yuan et al. (2020) to propose a mechanism to generate a sequence of a variable number of keyphrases (catSeq). Several other works approached the task using reinforcement learning
(Chan et al., 2019), generative adversarial networks
(Swaminathan et al., 2020), and hierarchical decoding (Chen et al., 2020). Ye et al. (2021b) further reframed the task as sequence-to-set generation instead of sequence-to-sequence generation and used the transformer model for the first time for this task.
Later, Garg et al. (2022); Wu et al. (2022); Kulkarni et al. (2022); Wu et al. (2021) used other pretrained models like Longformer Encoder-Decoder, BART, KeyBART, and UniLM. In this paper, we constrain our focus to *CatSeq* model (Yuan et al.,
2020) and explore data augmentation strategies using CatSeq on three datasets. However, our augmentation strategies can be extended to work with other pre-trained models in future work.
## Data Augmentation & Keyphrase Generation.
Data augmentation has been explored in related tasks like Named-Entity Recognition (Dai and Adel, 2020; Wang and Henao, 2021), and Keyphrase Extraction (Veyseh et al., 2022; Liu et al., 2018), but there have been minimal efforts for exploring data augmentation in Keyphrase Generation. Most of such works deal with augmentation of the candidate keyphrases (extracted using an off-the-shelf unsupervised keyphrase extraction method) to the ground truth keyphrases.
Ye and Wang (2018) generated synthetic ground truth labels for the additional unlabeled data. Shen et al. (2022) generated silver labels in addition to the gold-labeled keyphrases using an automatic comparison and ranking mechanism. Chen et al.
(2019); Santosh et al. (2021) augmented keyphrases from semantically similar documents to improve keyphrase generation. In contrast, we deal mainly with the augmentation on the input side (i.e., augmenting text to the given articles instead of augmenting the ground-truth keyphrases). Garg et al.
(2022) used external information from various parts of the body and appended it to the T || A of the given articles. Our data augmentation strategy is weakly inspired by this work and we use this work as one of the baselines for comparison. Ray Chowdhury et al. (2022) proposed a data augmentation strategy similar to one of our augmentation methods (suffixed with KPD), i.e., randomly dropping present keyphrases from text.. We leverage the strategy further to drop the present keyphrases from even the body of the articles and then augment it to the articles themselves.
Low-Resource Keyphrase Generation. Wu et al. (2022) presented a method for a low-resource setting where they utilized the major fraction of a large-scale dataset (KP20k) as unlabeled data for pretraining (using sophisticated pretraining objectives) and the smaller fraction of the dataset for fine-tuning. Ye and Wang (2018) proposed a semisupervised technique where they created synthetic keyphrases for the large-scale unlabeled data and also utilized the unlabeled data for training the model in a multi-tasking fashion. In contrast, our methods do not require acquiring any unlabeled data or pretraining or multi-task training but work with a few annotated samples. However, all the above works can very well complement our methods to further improve the performance.
## 3 Methods
In this section, we first describe the formulation of the keyphrase generation task. Next, we describe the baselines followed by the data augmentation strategies that we propose for keyphrase generation.
Problem Formulation. Keyphrase Generation can be posited as a sequence-to-sequence generation task where the input is the text from a given article and the output is a sequence of keyphrases that summarize the article. Formally2, the task can 2We model the problem similar to CATSEQ as proposed by Yuan et al. (2020).
| Datasets | #Train | #Dev | #Test | Avg #words | Avg #kp | Avg kp-len | % Present | % Absent |
|------------|----------|--------|---------|--------------|-----------|--------------|-------------|------------|
| LDKP3K♠ | 50,000 | 3,339 | 3,413 | 6,457 | 4.45 | 1.86 | 84.24 | 15.76 |
| LDKP10K♠ | 50,000 | 10,000 | 10,000 | 4,674 | 5.98 | 2.07 | 74.40 | 25.60 |
| KPTimes | 259,923 | 10,000 | 20,000 | 948 | 4.03 | 2.17 | 48.44 | 51.56 |
## Be Denoted As Follows:
Input: Title || Sent1 || Sent2 ||...|| Sentk Output: kp1 || kp2 ||...||kpn where kpi denotes a keyphrase, *Sent*j denotes a sentence from the abstract or from the body of the article, || denotes any delimiter (e.g., [SEP] in this work).
## 3.1 Baselines
T || A: This baseline contains all the training samples with Title and Abstract concatenated as T
[SEP] A.
T || A || BODY: For this baseline, we simply concatenate the body of the article to T || A. This baseline was presented in the prior work by Garg et al. (2022).
## 3.2 Data Augmentation Strategies
Further, as discussed in §1, we describe the data augmentation strategies created primarily using four ways of augmentation: dropout, synonym replacement (both keyphrase-specific and standard)
and back-translation. We describe them as follows:
AUG_BODY: In this method, we augment the training set with the text from the body of each article, which doubles the total number of samples. That is, one sample is T || A and the other is BODY (i.e.,
sentences from the body of the article).
AUG_BODY**_KPD:** In this method, we first apply the dropout technique presented by Ray Chowdhury et al. (2022) to the body of the article and then augment it (as above). The dropout technique is to mask some of the present keyphrases (particularly, all occurrences of a given keyphrase) in the body of the article.
AUG**_TA_KPD:** In this method of augmentation, we first apply the dropout technique to the T || A,
and then add it to the training set.
AUG_BODY**_KPSR:** In this method, we replace all the present keyphrases in the body of the article with the corresponding synonyms from NLTK
WordNet (Miller, 1995) and augment it to the training set. If a particular keyphrase does not have a corresponding synonym, we retain the original keyphrase. Notably, only a small number of keyphrases lack synonyms in the WordNet. For instance, we were able to find synonyms for 2936
(out of 3282) keyphrases for data augmentation on the Body, with 1000 samples of LDKP3K dataset.
We show the statistics for the LDKP3K dataset in Table 3.
Table 3: Statistics of the synonyms replaced/ total synonyms by AUG_BODY_KPSR and AUG_TA_KPSR
methods for LDKP3K dataset for four settings, i.e.,
1000, 2000, 4000, 8000 samples.
| 1000 | 2000 | 4000 | 8000 | |
|---------------|--------|--------|--------|--------|
| Aug_TA_KPSR | 3386/ | 6705/ | 13374/ | 26757/ |
| 3733 | 7385 | 14702 | 29398 | |
| Aug_Body_KPSR | 2936/ | 5844/ | 11671/ | 23538/ |
| 3282 | 6515 | 13001 | 16171 | |
AUG**_TA_KPSR:** This is similar to AUG_BODY_KPSR but with the difference that we replace present keyphrases with their synonyms in the T || A instead of the body of the article.
AUG_BODY**_BT:** In this method, we backtranslate the body of the article from English to French and back to English using Opus-MT (Tiedemann and Thottingal, 2020) pretrained translation models.
The backtranslated (or equivalently, paraphrased)
articles are then augmented as separate samples to the training set. During the translation of text from one language to another, we use temperature sampling with a temperature value equal to 0.7.
AUG**_TA_BT:** This method applies back translation model to the T || A instead of the body and does augmentation similar to AUG_BODY_BT.
AUG_BODY**_SR:** We use the standard synonym replacement, i.e., we randomly select 10% of the tokens from the body of a given article, replace them with their corresponding synonyms from NLTK
Wordnet, and augment the text as a separate article to the training set.
AUG**_TA_SR:** We do augmentation similar to AUG_BODY_SR but use the T || A instead of body.
## 4 Experimental Setup 4.1 Datasets
We conduct experiments on three datasets for keyphrase generation. All these datasets contain the full text of the articles along with the keyphrase annotations. 1) **LDKP3K** (Mahata et al., 2022)
contains computer science research articles from online digital libraries like ACM Digital Library, ScienceDirect and Wiley. It is a subset of KP20K
corpus (Meng et al., 2017) but each article now contains the full text instead of just the title and abstract. 2) **LDKP10K** (Mahata et al., 2022) expands a subset of articles from OAGkx dataset (Çano and Bojar, 2019) to contain their full text. The articles are scientific publications curated from various domains. We use the *medium* version of both LDKP
datasets (each consists of 50,000 samples in the training set) to facilitate quality sampling of the articles for the low-resource setting while being mindful of the computational budget. 3) **KPTimes** (Gallina et al., 2019) is a large-scale dataset with long news texts. To mimic KG datasets, we map the heading of the news article to *Title*, and segment the main body of the news article into a maximum of 300-words3 *Abstract* and the rest of the text as *Body*. We choose KPTimes to validate our observation on an altogether different domain.
Datasets' statistics are shown in Table 2. Dataset preprocessing steps are outlined in Appendix §A.
## 4.2 Evaluation
We compare the performance of the different methods comprehensively for four low-resource settings, i.e., with 1000, 2000, 4000 and 8000 samples. The settings are highly competitive to the prior works where they used at best 5000 samples (Ray Chowdhury et al., 2022; Wu et al., 2022) for their experiments. Following prior works (Meng et al., 2017; Chen et al., 2018; Chan et al., 2019; Chen et al.,
2020), we report the results for metrics F1@54and F1@M in the main tables. All comparisons are done after stemming the text as well as keyphrases.
Following Meng et al. (2017); Chan et al. (2019);
Yuan et al. (2020), we use GRU encoder-decoderbased architecture for evaluating all models. For all experiments, we restrict the length of the body
(or equivalently, full text) to a maximum sequence length of 800 words. For each setting, we sample thrice and further repeat each sample for three different seeds. We thus report the average result for a total of nine runs (3 samples * 3 seeds) for each setting. Hyperparameters and other implementation details are presented in Appendix §A.
## 5 Results And Analysis
We present our discussion of results for the generation of the two types of keyphrases, i.e., *present* and *absent* in §5.1 and §5.2, respectively.
## 5.1 Present Keyphrase Generation
From Table 4, we make the following observations.
First, augmenting the baseline T || A with the text from the body (AUG_BODY) helps to improve the present keyphrase generation performance. Second, we observe that the methods that use the body
(prepended with AUG_BODY) are better than the augmentation methods that just use Title and Abstract (prepended with AUG_TA). These two observations imply that the body constitutes a rich source of present keyphrases.
Third, we also compare with Garg et al. (2022)
(T || A || BODY) where they concatenated different types of sentences to T || A. We observe that augmenting the text from the articles (AUG_BODY)
instead of merely concatenating them (T || A || BODY) improves the performance by a wide margin. It is also interesting to observe that T || A || BODY, which found significant performance gains in large-scale settings, underperforms even T || A
in many purely low-resource settings.
Fourth, the results suggest a quite intriguing observation that the standard data augmentation techniques like synonym replacement and back translation (suffixed with SR, BT) are more rewarding for present keyphrase generation performance than the techniques specifically designed for the keyphrase generation task (suffixed with KPD, KPSR). This trend could be because synonym replacement and back translation bring more diversity to the training samples (since they replace/ rephrase a much larger portion of the text) compared to keyphrase-specific techniques which modify only a handful of tokens
(i.e., present keyphrases) in the text. It is worth mentioning that even these standard data augmentation techniques have been largely ignored by the current research on keyphrase generation.
LDKP3K 1,000 2,000 4,000 8,000
F1@5 F1@M F1@5 F1@M F1@5 F1@M F1@5 F1@M
T || A 4.681 9.106 6.191 11.892 9.672 18.478 11.971 22.861
T || A || Body 4.941 9.555 5.992 11.612 10.141 19.570 12.300 **23.53**0
AUG_TA_SR 4.751 9.343 6.662 12.740 9.193 17.6510 11.370 21.950
AUG_TA_BT 4.411 8.622 6.322 12.273 10.420 19.961 **12.34**0 23.322
AUG_TA_KPD 4.671 9.191 6.000 11.631 7.922 15.485 10.530 20.551
AUG_TA_KPSR 4.550 8.951 5.701 10.901 7.141 13.875 9.330 18.291
AUG_Body 5.332 10.425 7.106 **13.92**18 9.975 19.2518 11.822 22.674
AUG_Body_SR 4.881 9.694 6.500 12.532 9.369 18.1530 12.191 23.043
AUG_Body_BT 4.590 9.042 6.363 12.265 10.500 **20.09**1 12.311 23.193
AUG_Body_KPD 4.722 9.316 6.121 11.923 8.827 17.0418 11.610 22.141 AUG_Body_KPSR 4.600 9.151 5.781 11.216 7.442 14.608 11.401 21.643
LDKP10K 1,000 2,000 4,000 8,000
F1@5 F1@M F1@5 F1@M F1@5 F1@M F1@5 F1@M
T || A **4.47**1 8.273 6.661 12.322 9.951 17.492 11.311 19.763
T || A || Body 3.894 7.3014 6.551 12.071 9.810 17.541 11.701 20.242
AUG_TA_SR 4.370 8.260 6.220 11.671 10.690 18.330 12.300 **20.86**0
AUG_TA_BT 4.041 7.592 7.701 **14.01**3 10.271 18.002 10.440 18.260
AUG_TA_KPD 3.790 7.182 5.141 9.753 9.533 16.686 11.292 19.925 AUG_TA_KPSR 3.740 7.111 4.771 9.083 8.663 15.196 10.220 17.781
AUG_Body 4.456 **8.45**21 6.985 12.9012 10.370 18.280 11.921 20.733
AUG_Body_SR 4.220 8.010 6.360 11.881 10.120 17.910 11.570 20.381 AUG_Body_BT 4.380 8.172 7.653 13.875 10.240 17.951 11.171 19.733
AUG_Body_KPD 3.963 7.498 5.612 10.544 9.430 16.810 11.030 19.530
AUG_Body_KPSR 3.830 7.241 4.771 9.062 9.240 16.490 10.930 19.270
KPTimes 1,000 2,000 4,000 8,000
F1@5 F1@M F1@5 F1@M F1@5 F1@M F1@5 F1@M
T || A 9.830 19.011 13.493 24.497 16.921 28.840 19.130 31.890 T || A || Body 9.662 18.566 13.642 24.984 16.740 29.330 18.910 32.200
AUG_TA_SR 11.2010 21.1719 **15.21**0 26.590 **17.30**0 29.360 19.300 32.440
AUG_TA_BT 11.024 21.228 13.933 25.997 16.312 29.293 18.711 32.690
AUG_TA_KPD 8.940 17.410 12.901 23.633 15.581 27.860 17.641 30.911
AUG_TA_KPSR 9.122 17.884 13.832 24.902 15.770 27.600 17.990 30.930 AUG_Body 9.783 19.6211 14.311 26.251 17.261 30.331 **19.39**1 33.011
AUG_Body_SR 11.214 **22.05**7 14.461 **26.78**1 16.861 30.131 18.960 **33.23**0
AUG_Body_BT 10.462 20.240 14.120 25.920 16.460 29.281 18.883 32.751
AUG_Body_KPD 8.803 17.629 13.362 24.763 16.491 29.431 18.481 32.170
AUG_Body_KPSR 10.213 20.278 13.620 25.820 16.251 29.512 18.021 32.341
Fifth, we rather observe that the keyphrasespecific data augmentation techniques are not just lower in performance than the standard data augmentation techniques but often they hurt the performance of the model when trained in purely lowresource settings. The reason could be that the models do not have enough samples and diversity to learn to generate the present keyphrases, all the more when the present keyphrases are dropped or replaced during training. This is in contrast with the behavior of models when trained on a largescale dataset, where the performance of present keyphrase generation (AUG_TA_KPD) is on par with T || A (Ray Chowdhury et al., 2022).
Sixth, in Table 4, we can also compare the performance of models trained on: (1) total x original samples, (2) x original + x augmented samples,
(3) total 2x original samples. For example, for LDKP3K dataset, we observe that 2000 original samples achieve the best performance (11.89 in F1@M), followed by the augmented version (9.34 for augmentation with synonym replacement, 10.42 for augmentation with body) whereas the performance when using 1000 original samples is 9.10.
We observe similar trends across the different augmentation strategies and datasets.
We draw the following conclusions: (1) Data augmentation techniques for keyphrase generation have been quite an under-studied topic, particularly for low-resource settings and the behavior of the models is different than that when training on largescale settings; (2) We show that existing works such as those by Garg et al. (2022); Ray Chowdhury et al.
(2022) can be surpassed by the data augmentation methods discussed in this work when used in lowresource settings for *present* keyphrase generation.
LDKP3K 1,000 2,000 4,000 8,000
F1@5 F1@M F1@5 F1@M F1@5 F1@M F1@5 F1@M
T || A 0.0780 0.1690 0.1290 0.2810 0.0440 0.0930 0.0440 0.0990
T || A || Body 0.0790 0.1650 0.1300 0.2820 0.0470 0.1050 0.0310 0.0730
AUG_TA_SR 0.1320 0.2900 0.1360 0.3000 0.0960 0.2070 0.0670 0.1410
AUG_TA_BT 0.1280 0.2790 0.1390 0.3050 0.0680 0.1400 0.1210 0.2660
AUG_TA_KPD 0.1400 0.3110 0.1450 0.3180 0.1410 0.3070 0.0990 0.2180
AUG_TA_KPSR 0.1420 0.3070 0.1770 0.3930 0.1510 0.3210 0.1540 0.3250
AUG_Body 0.1290 0.2910 0.1300 0.2920 0.0610 0.1380 0.0790 0.1750
AUG_Body_SR 0.1410 0.3190 0.1570 0.3420 0.0760 0.1610 0.1490 0.3220
AUG_Body_BT 0.1300 0.2870 0.1210 0.2650 0.0810 0.1830 0.1200 0.2530
AUG_Body_KPD 0.1440 0.3280 0.1890 0.4070 0.1360 0.2980 0.1820 0.3980 AUG_Body_KPSR 0.1620 0.3590 0.2000 0.4410 0.1840 0.4050 0.2270 **0.495**0
LDKP10K 1,000 2,000 4,000 8,000
F1@5 F1@M F1@5 F1@M F1@5 F1@M F1@5 F1@M
T || A 0.0230 0.0470 0.0390 0.0790 0.1140 0.2280 0.1840 0.3350
T || A || Body 0.0210 0.0440 0.0350 0.0740 0.0520 0.1050 0.1590 0.2890
AUG_TA_SR 0.0310 0.0610 0.0540 0.1100 0.1950 0.3870 0.3550 0.6290
AUG_TA_BT 0.0270 0.0510 0.0840 0.1730 0.1960 0.3830 0.3370 0.6170
AUG_TA_KPD 0.0200 0.0410 0.0570 0.1150 0.2100 0.4030 0.2990 0.5520 AUG_TA_KPSR 0.0310 0.0590 0.0670 0.1330 0.2290 0.4330 0.4290 0.7690
AUG_Body 0.0330 0.0630 0.0710 0.1480 0.2060 0.4070 0.3440 0.6220
AUG_Body_SR 0.0370 0.0710 0.0850 0.1680 0.2130 0.4100 0.3780 0.6870 AUG_Body_BT 0.0330 0.0640 0.0730 0.1510 0.1930 0.3870 0.3380 0.6370
AUG_Body_KPD 0.0440 0.0880 0.0850 0.1660 0.2380 0.4650 0.4000 0.7260
AUG_Body_KPSR 0.0450 0.0890 0.1060 0.2100 0.2590 0.4920 0.4590 **0.827**0
KPTimes 1,000 2,000 4,000 8,000
F1@5 F1@M F1@5 F1@M F1@5 F1@M F1@5 F1@M
T || A 0.0260 0.0510 0.0260 0.2470 1.4300 2.4451 3.0660 5.3931 T || A || Body 0.0230 0.0440 0.0230 0.2710 1.0820 1.9500 2.5580 4.7190
AUG_TA_SR 0.1050 0.1760 1.2400 2.1680 2.7180 4.6480 4.2740 7.3360
AUG_TA_BT 0.1630 0.2770 0.1631 2.0502 2.5010 4.3900 3.8261 6.8181
AUG_TA_KPD 0.0600 0.1070 0.0600 0.6370 1.6340 2.8540 3.4230 6.0060
AUG_TA_KPSR 0.0870 0.1630 0.0870 1.8411 **2.748**0 4.6480 **4.465**0 7.3520 AUG_Body 0.0330 0.0600 0.0330 1.1500 2.4600 4.3800 4.1710 7.3850
AUG_Body_SR 0.1590 0.2780 1.1220 1.9990 2.6810 4.7080 4.1351 7.2851
AUG_Body_BT 0.1310 0.2390 1.1910 **2.152**0 2.3630 4.2151 3.7950 6.6701
AUG_Body_KPD 0.0380 0.0690 0.0380 1.2001 2.5880 4.6070 4.3820 7.5750
AUG_Body_KPSR 0.1820 **0.319**0 0.1820 1.9631 2.7080 **4.744**0 4.3251 **7.629**2
## 5.2 Absent Keyphrase Generation
To investigate the ability of the KG models to develop a semantic understanding of the documents, we evaluate the performance of the absent keyphrase generation. Table 5 presents the absent keyphrase performance of the different augmentation methods. Our observations are as follows.
First, augmentation with the body (prefixed with AUG_BODY) still surpasses the Title and Abstract
(prefixed with AUG_TA) counterparts. Second, unlike the present keyphrase generation performance, the absent keyphrase generation performance is generally better with almost all the data augmentation methods compared to the baseline T || A.
The reason could be that the augmentation methods artificially turn some of the present keyphrases to absent keyphrases (e.g., present keyphrases replaced with synonyms or dropped or rephrased).
Thus, the model finds much more opportunities to learn to generate absent keyphrases.
Third, interestingly, KG-targeted data augmentation methods (suffixed with KPD, KPSR) perform better than the standard data augmentation methods like synonym replacement and back translation (suffixed with SR, BT) for generating absent keyphrases (unlike present keyphrase generation). This is because KPD, KPSR specifically replace the present keyphrases to become absent keyphrases. Whereas SR, BT *randomly* replace/
rephrase the tokens and thus, one would expect a less number of present keyphrases turning into absent keyphrases. Fourth, augmentation with KGbased synonym replacement (KPSR) surpasses even the dropout augmentation technique (KPD).
This might be because of two reasons: (1) the keyphrase dropout method masks the keyphrases
| Excerpts from test dataset samples | Methods | Predicted Keyphrases |
|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|--------------------------------|------------------------|
| committees of learning agents [SEP] we describe how machine learning and decision theory is combined in an application that supports control room operators of a combined heating and power plant ... Gold: machine learning ; committees ; decision analysis | T || A | learning |
| Aug_Body | machine learning | |
| Aug_Body_SR | learning | |
| compositional analysis for linear control systems [SEP] the complexity of physical and engineering systems , both in terms of the governing physical phenomena and the number of subprocesses involved ... Gold: compositional reasoning ; linear systems ; simulation relations ; assume-guarantee reasoning | T || A | control |
| Aug_Body | linear control; linear systems | |
| Aug_Body_SR | linear control; linear systems | |
| the bits and flops of the n-hop multilateration primitive for node localization problems [SEP] the recent advances in mems , embedded systems and wireless communication technologies are making the realization ... Gold: technologies ; ad-hoc localization ; sensor networks ; embedded systems ; wireless ; network | T || A | tangible |
| Aug_Body | wireless networks | |
| Aug_Body_SR | sensors | |
with some probability value whereas we replace all the present keyphrases with their synonyms,
(2) dropping the important keyphrases hides some information from the model, while replacing the keyphrases with their synonyms still largely preserves the semantics and integrity of the text.
Fifth, we observe that the model proposed by Garg et al. (2022) which is based on concatenation is not able to generalize well in the low-resource settings, rather, ends up weakening the model performance compared to T || A. This again urges towards the development of data augmentation methods in purely low-data regimes.
Sixth, in Table 5, the results show that the model trained on the combination of original and augmented samples outperforms the settings where the model is trained on equivalent amount of original samples, for most datasets and augmentation strategies. For instance, for LDKP3K dataset, the 2000 augmentation version achieves 0.290 in F1@M (for augmentation with synonym replacement on Title and Abstract) and outperforms both 2000 original samples (0.281) and 1000 original samples (0.169).
Thus, for the same amount of data (2000 dataset size), the augmented version shows better results than without data augmentation.
We show sample predictions from the representative models: T || A (baseline), AUG_BODY (best for Present KG), AUG_BODY_SR (best for Absent KG) in Table 6. In the table, we can observe that while T || A fails to capture the specific topics (or keyphrases) for the document, models trained with augmentation strategies can generalize better.
| Methods | Pres.KP | Abs.KP | TotalKP |
|----------------|-----------|----------|-----------|
| T || A | 3374 | 2093 | 5467 |
| T || A || Body | 3985 | 1482 | 5467 |
| AUG_TA_SR | 5761 | 5173 | 10934 |
| AUG_TA_BT | 5499 | 5435 | 10934 |
| AUG_TA_KPD | 4586 | 6348 | 10934 |
| AUG_TA_KPSR | 4532 | 6402 | 10934 |
| AUG_Body | 6309 | 4625 | 10934 |
| AUG_Body_SR | 5402 | 5532 | 10934 |
| AUG_Body_BT | 5291 | 5643 | 10934 |
| AUG_Body_KPD | 4590 | 6344 | 10934 |
| AUG_Body_KPSR | 4591 | 6343 | 10934 |
## 6 Analysis
In this section, we study one of the settings in more detail, i.e., with the LDKP3K dataset having 1000 samples in the training set (and twice the number in the training set for AUG-prefixed methods). The study unfolds into two aspects: (a) analyzing the data created for the different augmentation methods, (b) developing better inference strategies.
We analyze the data created using the different augmentation methods and report the present, absent and total number of keyphrases in Table 7. First, we observe that all the data augmentation methods have double the total number of keyphrases because the total number of samples is doubled. In effect, the model develops a better generalization ability when it practices with more instances of present and absent keyphrases. Second, we see that AUG_BODY has the highest number of present keyphrases. This implies that the text
| Methods | Present | Absent | | |
|---------------------------------------|-----------|----------|-------|-------|
| F1@5 F1@M F1@5 F1@M | | | | |
| T || A | 4.68 | 9.10 | 0.078 | 0.169 |
| AUG_TA_BT | 4.41 | 8.62 | 0.128 | 0.279 |
| AUG_TA_KPSR | 4.55 | 8.95 | 0.132 | 0.290 |
| AUG_Body | 5.33 | 10.42 | 0.129 | 0.291 |
| AUG_Body_BT | 4.59 | 9.04 | 0.130 | 0.287 |
| AUG_Body_KPD | 4.72 | 9.31 | 0.144 | 0.328 |
| AUG_Body_KPSR | 4.60 | 9.15 | 0.162 | 0.359 |
| Inference Strategies Body ∪ Body-KPSR | 6.41 | 11.95 | 0.196 | 0.428 |
| TA-BT ∪ Body-BT | 5.39 | 10.19 | 0.160 | 0.342 |
| TA-KPSR ∪ Body-KPSR | 6.17 | 11.47 | 0.220 | 0.462 |
| Body-BT ∪ Body-KPD | 6.45 | 11.81 | 0.204 | 0.435 |
| Body-KPSR ∪ Body-KPD 5.94 | 11.18 | 0.204 | 0.444 | |
from the body of the articles not only adds diversity to the training samples (as also evident from Tables 1, 4), but also the diversity contains a lot of present keyphrases, unlike other augmentation methods like KPD, KPSR. Third, it is also evident from Table 7 that the KG-specific data augmentation methods (suffixed with KPD, KPSR) are rich sources of absent keyphrases whereas the standard data augmentation (suffixed with SR, BT) methods are rich in present keyphrases. This further explains the observations made in the previous sections §5.1-5.2 that the KG-specific augmentation methods perform better for absent keyphrase generation, whereas the standard data augmentation methods do better in present keyphrase generation.
Further, in Table 8, we present some of the representative inference strategies by unionizing different augmentation methods during inference. *Union* can be seen as a post-training augmentation method that (during inference) takes a union of the predictions from multiple models that are pretrained using different augmentation methods. The idea is to leverage the complementary strength of the different models that are good for either or both present and absent keyphrase generation. As expected, the performance of the *Union* methods surpasses that of the individual augmentation methods.
## 7 Conclusion
Although data augmentation has been a very common practice to advance the state-of-the-art in NLP,
it has been under-explored for the keyphrase generation (KG) task. Thus, this work discusses various data augmentation methods including both types (i.e., standard and KG-specific) particularly for purely low-resource keyphrase generation, and provides comprehensive evaluation for 12 different settings (four settings for three datasets each).
We also leverage the full text of the articles for data augmentation and observe large improvements over the baseline as well as over data augmentation methods that use only title and abstract (T
|| A). Detailed analysis helps us believe that KGspecific data augmentation methods can largely improve absent keyphrase generation but at the cost of present keyphrase generation. In contrast, the standard data augmentation techniques like synonym replacement and back-translation are capable of introducing enough diversity to improve the present keyphrase generation without bringing a drop in absent keyphrase generation performance. Although augmentation with the body improves both types of generation to some degree, this work leaves much room to develop better data augmentation strategies to train the model to do better on both present and absent keyphrase generation in low-resource settings which are prevalent in many domains.
## 8 Limitations
We conducted extensive experiments with three datasets from different domains to substantiate the results thoroughly. We observe the best performance when we also leverage the body of the articles. So, we did not evaluate the performance on the datasets that do not have the full text (or equivalently, long text) of the articles.
## Ethics Statement
The datasets we used in experiments are publicly available. In our work, we provide a comprehensive analysis and present data augmentation strategies specifically to address keyphrase generation in purely resource-constrained domains. We do not expect any direct ethical concern from our work.
## Acknowledgments
This research is supported in part by NSF CAREER
award \#1802358, NSF CRI award \#1823292, NSF
IIS award \#2107518, and UIC Discovery Partners Institute (DPI) award. Any opinions, findings, and conclusions expressed here are those of the authors and do not necessarily reflect the views of NSF or DPI. We thank AWS for computational resources used for this study. We also thank our anonymous reviewers for their valuable and constructive feedback and suggestions.
## References
Wasi Ahmad, Xiao Bai, Soomin Lee, and Kai-Wei Chang. 2021. Select, extract and generate: Neural keyphrase generation with layer-wise coverage attention. In *Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics* and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers),
pages 1389–1404, Online. Association for Computational Linguistics.
Isabelle Augenstein, Mrinal Das, Sebastian Riedel, Lakshmi Vikraman, and Andrew McCallum. 2017.
SemEval 2017 task 10: ScienceIE - extracting keyphrases and relations from scientific publications.
In Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017), pages 546–
555, Vancouver, Canada. Association for Computational Linguistics.
Gábor Berend. 2011. Opinion expression mining by exploiting keyphrase extraction. In *Proceedings of 5th* International Joint Conference on Natural Language Processing, pages 1162–1170, Chiang Mai, Thailand.
Asian Federation of Natural Language Processing.
Erion Çano and Ondˇrej Bojar. 2019. Keyphrase generation: A multi-aspect survey. In 2019 25th Conference of Open Innovations Association (FRUCT), pages 85– 94. IEEE.
Hou Pong Chan, Wang Chen, Lu Wang, and Irwin King.
2019. Neural keyphrase generation via reinforcement learning with adaptive rewards. In *Proceedings of* the 57th Annual Meeting of the Association for Computational Linguistics, pages 2163–2174, Florence, Italy. Association for Computational Linguistics.
Jun Chen, Xiaoming Zhang, Yu Wu, Zhao Yan, and Zhoujun Li. 2018. Keyphrase generation with correlation constraints. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4057–4066, Brussels, Belgium.
Association for Computational Linguistics.
Wang Chen, Hou Pong Chan, Piji Li, Lidong Bing, and Irwin King. 2019. An integrated approach for keyphrase generation via exploring the power of retrieval and extraction. In *Proceedings of the 2019* Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 2846–2856, Minneapolis, Minnesota.
Association for Computational Linguistics.
Wang Chen, Hou Pong Chan, Piji Li, and Irwin King.
2020. Exclusive hierarchical decoding for deep keyphrase generation. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 1095–1105, Online. Association for Computational Linguistics.
Xiang Dai and Heike Adel. 2020. An analysis of simple data augmentation for named entity recognition. In Proceedings of the 28th International Conference
on Computational Linguistics, pages 3861–3867, Barcelona, Spain (Online). International Committee on Computational Linguistics.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In *Proceedings of the 2019 Conference of* the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics.
Marzieh Fadaee, Arianna Bisazza, and Christof Monz.
2017. Data augmentation for low-resource neural machine translation. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 567–573, Vancouver, Canada. Association for Computational Linguistics.
Steven Y. Feng, Varun Gangal, Dongyeop Kang, Teruko Mitamura, and Eduard Hovy. 2020. GenAug: Data augmentation for finetuning text generators. In *Proceedings of Deep Learning Inside Out (DeeLIO): The* First Workshop on Knowledge Extraction and Integration for Deep Learning Architectures, pages 29–42, Online. Association for Computational Linguistics.
Ygor Gallina, Florian Boudin, and Beatrice Daille. 2019.
KPTimes: A large-scale dataset for keyphrase generation on news documents. In *Proceedings of the 12th* International Conference on Natural Language Generation, pages 130–135, Tokyo, Japan. Association for Computational Linguistics.
Krishna Garg, Jishnu Ray Chowdhury, and Cornelia Caragea. 2022. Keyphrase generation beyond the boundaries of title and abstract. In *Findings of the* Association for Computational Linguistics: EMNLP
2022, pages 5809–5821, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
Jiatao Gu, Zhengdong Lu, Hang Li, and Victor O.K. Li.
2016. Incorporating copying mechanism in sequenceto-sequence learning. In *Proceedings of the 54th Annual Meeting of the Association for Computational* Linguistics (Volume 1: Long Papers), pages 1631–
1640, Berlin, Germany. Association for Computational Linguistics.
Khaled M Hammouda, Diego N Matute, and Mohamed S Kamel. 2005. Corephrase: Keyphrase extraction for document clustering. In International workshop on machine learning and data mining in pattern recognition, pages 265–274. Springer.
Anette Hulth and Beáta B. Megyesi. 2006. A study on automatically extracted keywords in text categorization. In Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics, pages 537–544, Sydney, Australia. Association for Computational Linguistics.
Mayank Kulkarni, Debanjan Mahata, Ravneet Arora, and Rajarshi Bhowmik. 2022. Learning rich representation of keyphrases from text. In *Findings of the* Association for Computational Linguistics: NAACL
2022, pages 891–906, Seattle, United States. Association for Computational Linguistics.
Yingjie Li and Cornelia Caragea. 2021. Target-aware data augmentation for stance detection. In *Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational* Linguistics: Human Language Technologies, pages 1850–1860, Online. Association for Computational Linguistics.
Qianying Liu, Daisuke Kawahara, and Sujian Li. 2018.
Scientific keyphrase extraction: extracting candidates with semi-supervised data augmentation. In *Chinese* Computational Linguistics and Natural Language Processing Based on Naturally Annotated Big Data:
17th China National Conference, CCL 2018, and 6th International Symposium, NLP-NABD 2018, Changsha, China, October 19–21, 2018, Proceedings 17, pages 183–194. Springer.
Debanjan Mahata, Naveen Agarwal, Dibya Gautam, Amardeep Kumar, Swapnil Parekh, Yaman Kumar Singla, Anish Acharya, and Rajiv Ratn Shah. 2022.
Ldkp - a dataset for identifying keyphrases from long scientific documents. *DL4SR-22: Workshop* on Deep Learning for Search and Recommendation, co-located with the 31st ACM International Conference on Information and Knowledge Management
(CIKM).
Rui Meng, Sanqiang Zhao, Shuguang Han, Daqing He, Peter Brusilovsky, and Yu Chi. 2017. Deep keyphrase generation. In *Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics*
(Volume 1: Long Papers), pages 582–592, Vancouver, Canada. Association for Computational Linguistics.
George A Miller. 1995. Wordnet: a lexical database for english. *Communications of the ACM*, 38(11):39–41.
Seo Yeon Park and Cornelia Caragea. 2022. A data cartography based MixUp for pre-trained language models. In *Proceedings of the 2022 Conference of* the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4244–4250, Seattle, United States.
Association for Computational Linguistics.
Jishnu Ray Chowdhury, Seo Yeon Park, Tuhin Kundu, and Cornelia Caragea. 2022. KPDROP: Improving absent keyphrase generation. In *Findings of the Association for Computational Linguistics: EMNLP 2022*,
pages 4853–4870, Abu Dhabi, United Arab Emirates.
Association for Computational Linguistics.
Anna Ritchie, Simone Teufel, and Stephen Robertson.
2006. How to find better index terms through citations. In Proceedings of the Workshop on How Can Computational Linguistics Improve Information Retrieval?, pages 25–32, Sydney, Australia. Association for Computational Linguistics.
TYSS Santosh, Debarshi Kumar Sanyal, Plaban Kumar Bhowmick, and Partha Pratim Das. 2021. Gazetteerguided keyphrase generation from research papers.
In *Pacific-Asia Conference on Knowledge Discovery* and Data Mining, pages 655–667. Springer.
Abigail See, Peter J. Liu, and Christopher D. Manning.
2017. Get to the point: Summarization with pointergenerator networks. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1073–
1083, Vancouver, Canada. Association for Computational Linguistics.
Xianjie Shen, Yinghan Wang, Rui Meng, and Jingbo Shang. 2022. Unsupervised deep keyphrase generation. *Proceedings of the AAAI Conference on Artificial Intelligence*, 36(10):11303–11311.
Lichao Sun, Congying Xia, Wenpeng Yin, Tingting Liang, Philip Yu, and Lifang He. 2020. Mixuptransformer: Dynamic data augmentation for NLP
tasks. In *Proceedings of the 28th International Conference on Computational Linguistics*, pages 3436– 3440, Barcelona, Spain (Online). International Committee on Computational Linguistics.
Avinash Swaminathan, Haimin Zhang, Debanjan Mahata, Rakesh Gosangi, Rajiv Ratn Shah, and Amanda Stent. 2020. A preliminary exploration of GANs for keyphrase generation. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 8021–8030, Online. Association for Computational Linguistics.
Jörg Tiedemann and Santhosh Thottingal. 2020. OPUSMT - building open translation services for the world.
In *Proceedings of the 22nd Annual Conference of* the European Association for Machine Translation, pages 479–480, Lisboa, Portugal. European Association for Machine Translation.
Amir Pouran Ben Veyseh, Nicole Meister, Franck Dernoncourt, and Thien Huu Nguyen. 2022. Improving keyphrase extraction with data augmentation and information filtering. Association for the Advancement of Artificial Intelligence Workshop.
Rui Wang and Ricardo Henao. 2021. Unsupervised paraphrasing consistency training for low resource named entity recognition. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 5303–5308, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Jason Wei and Kai Zou. 2019. EDA: Easy data augmentation techniques for boosting performance on text classification tasks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLPIJCNLP), pages 6382–6388, Hong Kong, China. Association for Computational Linguistics.
Theresa Wilson, Janyce Wiebe, and Paul Hoffmann.
2005. Recognizing contextual polarity in phraselevel sentiment analysis. In *Proceedings of Human* Language Technology Conference and Conference on Empirical Methods in Natural Language Processing, pages 347–354, Vancouver, British Columbia, Canada. Association for Computational Linguistics.
Di Wu, Wasi Ahmad, Sunipa Dev, and Kai-Wei Chang. 2022. Representation learning for resourceconstrained keyphrase generation. In Findings of the Association for Computational Linguistics: EMNLP
2022, pages 700–716, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
Huanqin Wu, Wei Liu, Lei Li, Dan Nie, Tao Chen, Feng Zhang, and Di Wang. 2021. UniKeyphrase:
A unified extraction and generation framework for keyphrase prediction. In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, pages 825–835, Online. Association for Computational Linguistics.
Qizhe Xie, Zihang Dai, Eduard Hovy, Thang Luong, and Quoc Le. 2020. Unsupervised data augmentation for consistency training. Advances in Neural Information Processing Systems, 33:6256–6268.
Shweta Yadav and Cornelia Caragea. 2022. Towards summarizing healthcare questions in low-resource setting. In *Proceedings of the 29th International Conference on Computational Linguistics*, pages 2892–
2905, Gyeongju, Republic of Korea. International Committee on Computational Linguistics.
Hai Ye and Lu Wang. 2018. Semi-supervised learning for neural keyphrase generation. In *Proceedings of the 2018 Conference on Empirical Methods* in Natural Language Processing, pages 4142–4153, Brussels, Belgium. Association for Computational Linguistics.
Jiacheng Ye, Ruijian Cai, Tao Gui, and Qi Zhang. 2021a.
Heterogeneous graph neural networks for keyphrase generation. In *Proceedings of the 2021 Conference* on Empirical Methods in Natural Language Processing, pages 2705–2715, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Jiacheng Ye, Tao Gui, Yichao Luo, Yige Xu, and Qi Zhang. 2021b. One2Set: Generating diverse keyphrases as a set. In *Proceedings of the 59th Annual Meeting of the Association for Computational* Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1:
Long Papers), pages 4598–4608, Online. Association for Computational Linguistics.
Xingdi Yuan, Tong Wang, Rui Meng, Khushboo Thaker, Peter Brusilovsky, Daqing He, and Adam Trischler.
2020. One size does not fit all: Generating and evaluating variable number of keyphrases. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7961–7975, Online. Association for Computational Linguistics.
## A More Implementation Details
Following Garg et al. (2022), we preprocessed the full text of the articles for all three datasets.
We filtered all the articles that had either of the four fields missing, viz., title, abstract, keyphrases, full text, or that contained less than five sentences in the full text. We segmented the full text into sentences using PunktSentenceTokenizer5and tokenized the sentences further into tokens using NLTK's word_tokenizer. We also lowercased the text, removed html text, emails, urls, escape symbols, and converted all the numbers into <digit>
(Meng et al., 2017), and finally removed any duplicate items in the collection. Further, we subsampled the datasets to construct four low-resource settings (sampled thrice for each setting) containing 1000, 2000, 4000 and 8000 samples.
We use the GRU-based architecture for evaluating all the methods. Similar to Meng et al. (2017);
Yuan et al. (2020); Chan et al. (2019) we use an encoder-decoder architecture (where both the encoder and the decoder are GRUs) with attention and a pointer mechanism (See et al., 2017). The exact details of the architecture are similar to that of Chan et al. (2019). The vocabulary size is 50,000 and each word is translated into embeddings of dimension equal to 100. The GRU encoders and decoders have hidden layer sizes of 150 and 300 respectively. We use a learning rate of 1e-3, batch size of 4, Adam optimizer, ReduceLROnPlateau scheduler and maximum epochs as 20. We early stop the training with patience value of 2.
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
✓ A1. Did you describe the limitations of your work?
8
✗ A2. Did you discuss any potential risks of your work?
No potential risks
✓ A3. Do the abstract and introduction summarize the paper's main claims?
Left blank.
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✓ **Did You Use Or Create Scientific Artifacts?** Left Blank.
✓ B1. Did you cite the creators of artifacts you used?
4.1
✗ B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
All datasets are open-sourced and we checked the license before using.
✗ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
All datasets are open-sourced and we checked the license before using.
B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
Not applicable. Left blank.
B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
Not applicable. Left blank.
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
4.1
## C ✓ **Did You Run Computational Experiments?** 5
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
Limitations The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance.
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
4.1, Appendix, Limitations
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
5
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
Appendix, 4.2 D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
Not applicable. Left blank.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? Not applicable. Left blank.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
Not applicable. Left blank.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
Not applicable. Left blank.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
Not applicable. Left blank. |
kang-etal-2023-bigvideo | {B}ig{V}ideo: A Large-scale Video Subtitle Translation Dataset for Multimodal Machine Translation | https://aclanthology.org/2023.findings-acl.535 | We present a large-scale video subtitle translation dataset, *BigVideo*, to facilitate the study of multi-modality machine translation. Compared with the widely used *How2* and *VaTeX* datasets, *BigVideo* is more than 10 times larger, consisting of 4.5 million sentence pairs and 9,981 hours of videos. We also introduce two deliberately designed test sets to verify the necessity of visual information: *Ambiguous* with the presence of ambiguous words, and *Unambiguous* in which the text context is self-contained for translation. To better model the common semantics shared across texts and videos, we introduce a contrastive learning method in the cross-modal encoder. Extensive experiments on the *BigVideo* shows that: a) Visual information consistently improves the NMT model in terms of BLEU, BLEURT and COMET on both Ambiguous and Unambiguous test sets. b) Visual information helps disambiguation, compared to the strong text baseline on terminology-targeted scores and human evaluation. | # Bigvideo**: A Large-Scale Video Subtitle Translation Dataset For** Multimodal Machine Translation
Liyan Kang1,3∗ Luyang Huang2∗ Ningxin Peng2 Peihao Zhu2 **Zewei Sun**2 Shanbo Cheng2 Mingxuan Wang2 Degen Huang4 **Jinsong Su**1,3†
1School of Informatics, Xiamen University 2Bytedance 3Key Laboratory of Digital Protection and Intelligent Processing of Intangible Cultural Heritage of Fujian and Taiwan (Xiamen University), Ministry of Culture and Tourism, China 4Dalian University of Technology [email protected] [email protected] [email protected]
## Abstract
We present a large-scale video subtitle translation dataset, BIGVIDEO, to facilitate the study of multi-modality machine translation. Compared with the widely used HOW2 and VATEX
datasets, BIGVIDEO is more than 10 times larger, consisting of 4.5 million sentence pairs and 9,981 hours of videos. We also introduce two deliberately designed test sets to verify the necessity of visual information: AMBIGUOUS
with the presence of ambiguous words, and UNAMBIGUOUS in which the text context is self-contained for translation. To better model the common semantics shared across texts and videos, we introduce a contrastive learning method in the cross-modal encoder.
Extensive experiments on the BIGVIDEO
show that: a) Visual information consistently improves the NMT model in terms of BLEU,
BLEURT, and COMET on both AMBIGUOUS
and UNAMBIGUOUS test sets. b) Visual information helps disambiguation, compared to the strong text baseline on terminologytargeted scores and human evaluation. Dataset and our implementations are available at https://github.com/DeepLearnXMU/BigVideoVMT.
## 1 Introduction
Humans are able to integrate both language and visual context to understand the world. From the perspective of NMT, it is also much needed to make use of such information to approach humanlevel translation abilities. To facilitate Multimodal Machine Translation (MMT) research, a number of datasets have been proposed including imageguided translation datasets (Elliott et al., 2016; Gella et al., 2019; Wang et al., 2022) and videoguided translation datasets (Sanabria et al., 2018; Wang et al., 2019; Li et al., 2022b).
Source Subtitle: **Clear shot** could also get you used
![0_image_0.png](0_image_0.png)
to the air flow of the courts. **Ground Truth:** 高远球也可以让你习惯场馆的空 气阻力。
System w/o **Video**: 清晰的镜头也可以让你习惯法 庭的气流。
System w/ **Video**: 清晰的击球也会让你习惯球场的 气流。
Figure 1: An example with semantic ambiguity in BIGVIDEO. The phrases with semantic ambiguity are highlighted in red. The wrong translations are in blue and the correct translations are in **yellow**.
However, the conclusion about the effects of visual information is still unclear for MMT research (Caglayan et al., 2019). Previous work has suggested that visual information is only marginally beneficial for machine translation (Li et al., 2021; Caglayan et al., 2021), especially when the text context is not complete. The most possible reason is that existing datasets focus on captions describing images or videos, which are not large and diverse enough. The text inputs are often simple and sufficient for translation tasks (Wu et al., 2021). Take the widely used Multi30K as an example. Multi30K consists of only 30K image captions, while typical text translation systems are often trained with several million sentence pairs.
We argue that studying the effects of visual contexts in machine translation requires a large-scale and diverse data set for training and a real-world and complex benchmark for testing. To this end, we propose BIGVIDEO, a large-scale video subtitle translation dataset. We collect human-written subtitles from two famous online video platforms, Xigua and YouTube. BIGVIDEO consists of 155 thousand videos and 4.5 million high-quality parallel sentences in English and Chinese. We highlight the key features of BIGVIDEO as follows: a) The size of BIGVIDEO bypasses the largest available video machine translation dataset HOW2 and VATEX by one order of magnitude. b) To investigate the need for visual information, two test sets are annotated by language experts, referred as AMBIGUOUS and UNAMBIGUOUS. In AMBIGUOUS, the source input is not sufficient enough and requires videos to disambiguate for translation. The experts also labelled unambiguous words to help evaluate whether the improvement comes from visual contexts. In UN-AMBIGUOUS, actions or visual scenes in the videos are mentioned in the subtitles but source sentences are self-contained for translation.
To make the most of visual information for MMT, we propose a unified encoder-decoder framework for MMT. The model has a cross-modal encoder that takes both videos and texts as inputs.
Motivated by recent work on cross-modal learning (Li et al., 2020; Qi et al., 2020; Xia et al., 2021),
we also introduce a contrastive learning objective to further bridge the representation gap between the text and video and project them in a shared space.
As such, the visual information can potentially contribute more to the translation model.
We conduct extensive experiments on the proposed benchmark BIGVIDEO and report the results on BLEU (Papineni et al., 2002), BLEURT (Sellam et al., 2020), COMET (Rei et al., 2020),
terminology-targeted metrics and human evaluation. We also introduce the large scale WMT19 training data, which contains 20.4M parallel sentences to build the strong baseline model. The experiments show that visual contexts consistently improve the performance of both the AMBIGUOUS
and UNAMBIGUOUS test set over the strong textonly model. The finding is slightly different with previous studies and address the importance of a large scale and high-quality video translation data.
Further, the contrastive learning method can further boost the translation performance over other visual-guided models, which shows the benefits of closing the representation gap of texts and videos.
## 2 Related Work
Video-guided Machine Translation. The VATEX
dataset has been introduced for video-guided machine translation task (Wang et al., 2019). It contains 129K bilingual captions paired with video clips. However, as pointed out by Yang et al.
(2022), captions in VATEX have sufficient information for translation, and models trained on VATEX
tend to ignore video information. Beyond captions, Sanabria et al. (2018) considers video subtitles to construct the HOW2 dataset. HOW2 collects instructional videos from YouTube and obtains 186K
bilingual subtitle sentences. To construct a challenging VMT dataset, Li et al. (2022b) collect 39K
ambiguous subtitles from movies or TV episodes to build VISA. However, both HOW2 and VISA
are limited on scale and diversity, given the training needs of large models. In contrast, we release a larger video subtitle translation dataset, with millions of bilingual ambiguous subtitles, covering all categories on YouTube and Xigua platforms.
To leverage video inputs in machine translation models, Hirasawa et al. (2020) use pretrained models such as ResNet (He et al., 2016), FasterRCNN (Ren et al., 2015) and I3D (Carreira and Zisserman, 2017). An additional attention module is designed in the RNN decoder to fuse visual features. To better learn temporal information in videos, (Gu et al., 2021) propose a hierarchical attention network to model video-level features.
Different from previous work, we use a unified encoder to learn both video and text features. Specifically, a contrastive learning objective is adopted to learn cross-modal interaction.
Image-guided Machine Translation. Images as additional inputs have long been used for machine translation (Hitschler et al., 2016). For neural models, several attempts have been focused on enhancing the sequence-to-sequence model with strong image features (Elliott and Kádár, 2017; Yao and Wan, 2020; Lin et al., 2020; Yin et al., 2020; Su et al., 2021; Li et al., 2022a; Lin et al., 2020; Zhu et al., 2022; Lan et al., 2023). However, Li et al.
(2021) and Wu et al. (2021) point out that images in Multi30K provide little information for translation.
In this work, we focus on videos as additional visual inputs for subtitle translation. Videos illustrate objects, actions, and scenes, which contain more information compared to images. Subtitles are often in spoken language, which contains inherent ambiguities due to multiple potential interpretations (Mehrabi et al., 2022). Hence, our dataset can be a complement to existing MMT datasets.
| Dataset | Text | Video | | | |
|----------------------------|--------|---------|------|----------|-----|
| # Sent Len. # Video # Clip | Sec. | | | | |
| VISA | 35K | 7.0 | 2K | 35K 10.0 | |
| VATEX | 129K | 15.2 | 25K | 25K 10.0 | |
| HOW2 | 186K | 20.6 | 13K | 186K | 5.8 |
| BIGVIDEO | 4.5M | 22.8 | 156K | 4.5M | 8.0 |
## 3 Dataset
We present BIGVIDEO, consisting of 150 thousand unique videos (9,981 hours in total) with both English and Chinese subtitles. The videos are collected from two popular online video platforms, YouTube and Xigua. All subtitles are humanwritten. Table 1 lists statistics of our dataset and existing video-guided translation datasets. Among existing datasets, our dataset is significantly larger, with more videos and parallel sentences.
## 3.1 Bigvideo **Dataset**
To obtain high-quality video-subtitle pairs, we collect videos with both English and Chinese subtitles from *YouTube*1and *Xigua*2. Both two platforms provide three types of subtitles: 1) *creator* which is uploaded by the creator, 2) *auto-generate* which is generated by the automatic speech recognition model and 3) *auto-translate* which is produced by machine translation model. We only consider videos with both English and Chinese subtitles uploaded by creators in order to obtain high-quality parallel subtitles. These videos and subtitles are often created by native or fluent English and Chinese speakers. In total, we collect 13K videos (6K hours in total) from YouTube and 2K videos from Xigua
(3.9K hours in total).
Preprocessing. We first re-segment English subtitles into full sentences. To ensure the quality of parallel subtitles, we use quality estimation scores
(e.g., the COMET score) to filter out low-quality pairs. More details are provided in Appendix B.1. Ultimately, 3.3M sentences paired with video clips are kept for YouTube, and 1.2M for Xigua. The average lengths of English and Chinese sentences 1https://www.youtube.com 2https://www.ixigua.com
| Translation | | |
|---------------|----------|----------|
| Source | Fluency↑ | Quality↑ |
| YOUTUBE | 4.81 | 4.11 |
| XIGUA | 4.60 | 4.20 |
![2_image_0.png](2_image_0.png)
are 17.6 and 15.4 words for YouTube, 37.7 and 32.4 words for Xigua.
## 3.2 Dataset Analysis
Quality Evaluation. To assess the quality of text pairs, we randomly select 200 videos from each source and recruit seven annotators to rate the quality of subtitles pairs. For each video, we randomly select at most 20 clips for evaluation. All annotators are fluent in English and Chinese. After watching video clips and subtitles, human annotators are asked to rate subtitle pairs from 1 (worst)
to 5 (best) on **fluency**–whether the source sentence
(English) is fluent and grammatically correct, and translation quality–whether the Chinese subtitle is semantically equivalent to the English subtitle.
Detailed guidelines are provided in Appendix F.
From Table 2, English sentences from both YouTube and Xigua have an average of 4.8 and 4.6 fluency scores, which shows that English subtitles are fluent and rarely have errors. In terms of translation quality, we find more than 96 percent of the pairs are equivalent or mostly-equivalent, with only minor differences (e.g., style).
Diversity Evaluation. In addition to the size and
| Test set | Number | Length | # phrases |
|-------------|----------|----------|-------------|
| AMBIGUOUS | 877 | 28.61 | 745 |
| UNAMBIGUOUS | 1,517 | 27.22 | - |
quality, diversity is also critical for modeling alignments between parallel texts (Tiedemann, 2012).
Prior work calculates unique n-grams and part-ofspeech (POS) tags to evaluate linguistic complexity (Wang et al., 2019). Besides word-level metrics, we use video category distribution to assess videolevel diversity.
Since the source text of our dataset, VATEX and HOW2 are in English, we compare unique n-grams and POS tags on the source texts. For unique POS
tags, we compare four most common types: verb, noun, adjective and adverb. In Figure 2, our data from both XIGUA and YOUTUBE have substantially more unique n-grams and POS tags than VATEX and HOW2. Evidently, our dataset covers a wider range of actions, objects and visual scenes.
To evaluate video-level diversity, we compare category distributions among three datasets. The YouTube platform classifies videos into 15 categories. Since videos collected from the Xigua platform do not have category labels, we train a classifier on the YouTube data to label them. Details of the classifier are in Appendix B.2. Figure 3 depicts the distributions of three datasets. While both VATEX and HOW2 have a long-tail distribution on several categories (e.g., "Nonprofits & Activism" and "News & Politic"), BIGVIDEO has at least 1,000 videos in each category, which forms a more diverse training set.
## 3.3 Test Set Annotation Procedure
Subtitles often contain semantic ambiguities (Gu et al., 2021), which can be potentially solved by watching videos. In order to study "*How visual* contexts benefit machine translation", we create two test sets: AMBIGUOUS contains ambiguous subtitles that videos provide strong disambiguation signal, while UNAMBIGUOUS consists of selfcontained subtitles that videos are related but subtitles themselves contain enough context for translation. Statistics of two test sets are listed in Figure 3.
We randomly sample 200 videos from each of Xigua and YouTube and hire four professional
![3_image_0.png](3_image_0.png)
![3_image_1.png](3_image_1.png)
speakers in both English and Chinese to annotate the test set. Annotators are first asked to remove sentences which are not related to videos. In this step, we filter out about twenty percent of sentences. Annotators are then asked to re-write the Chinese subtitle if it is not perfectly equivalent to the English subtitle. Next, we ask the annotators to distinguish whether the source sentence contains semantic ambiguity. Specifically, annotators are instructed to identify ambiguous words or phrases in both English and Chinese sentences, as illustrated in Figure 4. We finally obtain 2394 samples in our test set. 36.6% of the sentences are in the AMBIGUOUS and 63.4% of the sentences are in the UNAMBIGUOUS. In the AMBIGUOUS, we annotate 745 ambiguous terms. The statistics indicate that videos play important roles in our dataset. Annotation instructions and detailed procedures are provided in Appendix F.
## 4 Method 4.1 Model
To better leverage videos to help translation, we present our video-guided machine translation model, as displayed in Figure 5. Our model can be seamlessly plugged into the pretrained NMT
model, which can benefit from large-scale parallel training data. Importantly, we design a contrastive learning objective to further drive the translation model to learn shared semantics between videos and text.
Cross-modal Encoder. Our model takes both videos and text as inputs. Text inputs are first represented as a sequence of tokens x and then converted to word embeddings through the embedding layer.
Video inputs are represented as a sequence of continuous frames v. We use a pretrained encoder to extract frame-level features, which is frozen for all experiments. Concretely, we apply the linear projection to obtain video features with the same dimension as text embeddings. To further model temporal information, we add positional embeddings to video features, followed by the layer normalization.
Video features v emb and text embeddings x emb are then concatenated and fed into the Transformer encoder.
Text Decoder. Our decoder is the original Transformer decoder, which generates tokens autoregressively conditioned on the encoder outputs. We consider the cross entropy loss as a training objective:
$${\mathcal{L}}_{\mathrm{CE}}=-\sum_{i}^{N}\log P(\mathbf{y}_{i}|\mathbf{v}_{i},\mathbf{x}_{i}),\qquad(1)$$
where yi denotes the text sequence in the target language for the i-th sample in a batch of N samples.
## 4.2 Contrastive Learning Objective
In order to learn shared semantics between videos and text, we introduce a cross-modal contrastive learning (CTR)-based objective. The idea of the CTR objective is to bring the representations of video-text pairs closer and push irrelevant ones further.
Formally, given a positive text-video pair
(xi, vi), we use remaining N − 1 irrelevant textvideo pairs (xi, vj ) in the batch as negative samples.
![4_image_0.png](4_image_0.png)
The contrastive learning objective (Sohn, 2016) is:
$${\mathcal{L}}_{\mathrm{CTR}}=-\sum_{i=1}^{N}\log{\frac{\exp(s i m({\bf x}_{i}^{p},{\bf v}_{i}^{p})/\tau)}{\sum_{j=1,j\neq i}^{N}\exp(s i m({\bf x}_{i}^{p},{\bf v}_{j}^{p})/\tau)}},\tag{2}$$
where x p i and v p i are representations for the text and video, sim(·) is the cosine similarity function and the temperature τ is used to control the strength of penalties on hard negative samples (Wang and Liu, 2021).
Text and Video Representations. Importantly, since videos and subtitles are weakly aligned on the temporal dimension (Miech et al., 2019), we first average video embeddings and text embeddings in terms of the time dimension. Concretely, we apply two *projection heads* ("MLP" in Figure 5) to map representations to the same semantic space (Chen et al., 2020).
In the end, we sum up the two losses to obtain the final loss:
$${\mathcal{L}}={\mathcal{L}}_{\mathrm{CE}}+\alpha{\mathcal{L}}_{\mathrm{CTR}},$$
$$\left({\mathfrak{I}}{\mathfrak{I}}\right)$$
where α is a hyper-parameter to balance the two loss items.
## 5 Experiments 5.1 Experimental Setup
Implementation Details. We evaluate our method on three video translation datasets: VATEX, HOW2
| System | BLEU | COMET | BLEURT | | | | | | |
|---------------------------|--------|---------|----------|-------|--------|-------|-------|--------|-------|
| All | Amb. | Unamb. | All | Amb. | Unamb. | All | Amb. | Unamb. | |
| w/o pretraining TEXT-ONLY | 43.97 | 43.59 | 44.19 | 37.31 | 33.06 | 39.77 | 61.11 | 59.68 | 61.93 |
| GATED FUSHION | 44.33 | 44.12 | 44.45 | 37.88 | 34.47 | 39.85 | 61.25 | 60.19 | 61.86 |
| SELECTIVE ATTN | 44.39 | 44.20 | 44.51 | 38.48 | 34.84 | 40.59 | 61.37 | 60.30 | 62.07 |
| Ours + VIT | 44.26 | 44.10 | 44.37 | 38.13 | 34.75 | 40.08 | 61.24 | 60.29 | 61.78 |
| + SLOWFAST | 44.21 | 44.12 | 44.26 | 37.81 | 34.99 | 39.44 | 61.22 | 60.28 | 61.77 |
| + VIT + CTR | 44.45 | 44.48 | 44.40 | 38.15 | 35.76 | 39.54 | 61.36 | 60.72 | 61.73 |
| + SLOWFAST + CTR | 44.44 | 44.20 | 44.58 | 38.37 | 35.18 | 40.22 | 61.31 | 60.41 | 61.82 |
| w/ pretraining TEXT-ONLY | 44.45 | 43.89 | 44.79 | 38.36 | 33.40 | 41.23 | 61.41 | 59.85 | 62.31 |
| + VIT + CTR | 44.83 | 44.62 | 44.96 | 39.44 | 36.42 | 41.19 | 61.76 | 60.75 | 62.34 |
| + SLOWFAST + CTR | 44.77 | 44.43 | 44.97 | 39.26 | 36.03 | 41.12 | 61.71 | 60.52 | 62.40 |
and our proposed dataset BIGVIDEO. More dataset details can be found in Appendex C.1.
Our code is based on the *fairseq* toolkit (Ott et al., 2019). The Transformer-base model follows
(Vaswani et al., 2017). Both encoder and decoder have 6 layers, 8 attention heads, hidden size = 512, and FFN size = 2048. We utilize post-layer normalization for all models. On VATEX, we follow the Transformer-small setting from Wu et al. (2021) for better performance, 6 layers for encoder/decoder, hidden size = 512, FFN size = 1024 and attention heads = 4.
All experiments are done on 8 NVIDIA V100 GPUS with mixed-precision training (Das et al.,
2018), where the batch assigned to each GPU contains 4,096 tokens. More training details can be found in Appendix C.2. We stop the training if the performance on the validation set does not improve for ten consecutive epochs. The running time is about 64 GPU hours for our system. During the inference, the beam size and the length penalty are set to 4 and 1.0. We apply byte pair encoding (BPE)
with 32K merge operations to preprocess sentences of our dataset. During training and testing, we uniformly sample a maximum of 12 frames as the video input. The text length is limited to 256. For the contrastive learning loss, we set α to 1.0 and τ to 0.002. The choices of hyper-parameters are in Appendix D.
For video features, we extract 2D features and 3D features to compare their effects. Concretely, we experiment with two pretrained models to extract the video feature: a) The vision transoformer (VIT) (Dosovitskiy et al., 2021) which extracts frame-level features. b) The SlowFast model (SLOWFAST) which extracts video-level features (Feichtenhofer et al., 2019). For 2D features, we first extract images at a fixed frame rate (3 frames per second). Then we utilize pretrained Vision Transformer3(ViT) to extract 2D video features into 768-dimensional vectors. Here the representation of [CLS] token is considered as the global information of one frame. For 3D features, we extract 2304-dimensional SlowFast4features at 2/3 frames per second.
Baselines and Comparisons. For baselines, we consider the base version of the Transformer
(TEXT-ONLY), which only takes texts as inputs. For comparisons, since most recent MMT studies focus on image-guided machine translation, we implement two recent image-based MMT models: a) The gated fusion model (GATED FUSION) which fuses visual representations and text representations with a gate mechanism (Wu et al., 2021). b) The selective attention model (SELECTIVE ATTN) which uses a single-head attention to connect text and im3The model architecture is vit_base_patch16_224.
4SLOWFAST_8x8_R50.
age representations (Li et al., 2022a). We extract image features using ViT and obtain the visual feature by averaging image features on the temporal dimension. The visual feature is then fused with the text representations which is the same as original GATED FUSION and SELECTIVE ATTN. For HOW2 and VATEX, we additionally include the baseline models provided by the original paper.
Evaluation Metrics. We evaluate our results with the following three metrics: detokenized sacreBLEU5, COMET6(Rei et al., 2020) and BLEURT7(Sellam et al., 2020). In order to evaluate whether videos are leveraged to disambiguate, we further consider three terminology-targeted metrics (Alam et al., 2021):
- Exact Match: the accuracy over the annotated ambiguous words. If the correct ambiguous words or phrases appear in the output, we count it as correct.
- Window Overlap: indicating whether the ambiguous terms are placed in the correct context.
For each target ambiguous term, a window is set to contain its left and right words, ignoring stopwords. We calculate the percentage of words in the window that are correct. In practice, we set window sizes to 2 (Window Overlap-2) and 3 (Window Overlap-3).
- Terminology-biased Translation Edit Rate (1-
TERm): modified translation edit rate (Snover et al., 2006) in which words in ambiguous terms are set to 2 edit cost and others are 1.
## 6 Results 6.1 Main Results
Videos Consistently Improve the NMT Model.
As displayed in Table 4, on BIGVIDEO, our models equipped with videos obtain higher automatic scores. This indicates the benefit of using videos as additional inputs. Notably, our model trained with the additional contrastive learning objective yields better scores compared to the variant trained only with the cross entropy loss. This signifies that our contrastive learning objective can guide better acquisition of video inputs. Furthermore, we
| System | Exact | Window | Window 1-TERm | |
|----------------------------|---------|----------|-----------------|-------|
| Match Overlap-2 Overlap-3 | | | | |
| w/o pre-training TEXT-ONLY | 23.03 | 14.22 | 14.28 | 49.50 |
| GATED FUSHION | 23.68 | 14.60 | 14.76 | 49.68 |
| SELECTIVE ATTN | 23.66 | 14.95 | 15.08 | 49.77 |
| Ours + VIT | 24.27 | 15.09 | 15.27 | 49.78 |
| + SLOWFAST | 24.05 | 14.97 | 15.13 | 49.32 |
| + VIT + CTR | 25.02 | 15.56 | 15.77 | 49.91 |
| + SLOWFAST + CTR | 24.08 | 15.04 | 15.12 | 49.32 |
| w/ pre-training TEXT-ONLY | 22.71 | 14.35 | 14.37 | 49.73 |
| + VIT + CTR | 24.30 | 15.17 | 15.39 | 50.28 |
| + SLOWFAST + CTR | 23.62 | 14.76 | 14.90 | 50.04 |
Table 5: Terminology-targeted evaluation on AMBIGU-OUS test set. Complete results with standard deviations can be seen in Appendix A.
find image-based pretrained model VIT and videobased pretrained model SLOWFAST yield comparable results, indicating that two vision features perform equally well on BIGVIDEO.
Noticeably, compared to the text-only baseline, our models trained with the CTR objective achieves larger gain on AMBIGUOUS than that on UNAM-BIGUOUS. This demonstrates that it is more difficult to correctly translate sentences of AMBIGU-OUS, while taking videos as additional inputs can help the model generate better translations.
To better study the role of videos in translation, we introduce additional training data to build a stronger NMT baseline. We introduce the WMT19 Zh-En dataset with 20.4M parallel sentences for pretraining. We aim to answer: how will the model perform if more text data are included?
As displayed in Table 4, *Model with video inputs* outperforms the strong NMT baseline. Pretraining on large corpus benefits models on BIGVIDEO.
However, we find improvements mainly come from the UNAMBIGUOUS. This shows that videos play more crucial roles in AMBIGUOUS, which suggests that BIGVIDEO can serve as a valuable benchmark for studying the role of videos in MMT research.
Videos Help Disambiguation. We further evaluate the model ability of disambiguation. We present results on terminology-targeted metrics in Table 5.
First, our systems with video features achieve consistent improvements both on exact match and window overlap metrics compared to the text-only variant, indicating that models augmented by video inputs correctly translate more ambiguous words
| Systems | Score | Win ↑ | Tie | Lose ↓ |
|-------------|---------|---------|-------|----------|
| AMBIGUOUS | | | | |
| TEXT-ONLY | 3.48 | - | - | - |
| + VIT + CTR | 3.53 | 19.3% | 65.3% | 15.3% |
| UNAMBIGUOUS | | | | |
| TEXT-ONLY | 3.71 | - | - | - |
| + VIT + CTR | 3.72 | 24% | 51.3% | 21.7% |
and place them in the proper contexts. It is also worth noticing that our system with pretraining achieves better scores compared to the strong text baseline, which further highlights the importance of video inputs. Moreover, we find it hard to correctly translate ambiguous words since the best exact match score is 25.02%, which suggests that our AMBIGUOUS set is challenging.
## Video-Augmented Model Improves Translation
Quality. We further conduct human evaluation to analyze the translation quality. We randomly pick 100 sentences from the AMBIGUOUS and the UNAMBIGUOUS respectively and recruit three human judges for evaluation. For each sentence, the judges read the source sentence and two candidate translations, which are from TEXT-ONLY and our model + VIT + CTR. The judges are required to rate each candidate on a scale of 1 to 5 and pick the better one. Detailed guidelines are in Appendix G.
From Table 6, we can see *our system with video* inputs are more frequently rated as better translation than the text-only model on both AMBIGUOUS
and UNAMBIGUOUS *test sets*. This echoes automatic evaluations and implies that taking videos as inputs improve translation quality. Moreover, overall scores on UNAMBIGUOUS are better than those on AMBIGUOUS, which demonstrates that AMBIGUOUS is more challenging.
## 6.2 Incongruent Decoding
In this section, we explore *whether visual inputs contribute to the translation model*. Following (Caglayan et al., 2019; Li et al., 2022a), we use incongruent decoding to probe the need for visual modality on BIGVIDEO. During the inference, we replace the original video with a mismatched video
![7_image_0.png](7_image_0.png)
for each sentence. As shown in Figure 6, on AM-BIGUOUS and UNAMBIGUOUS, we observe that all automatic metrics of our system drop significantly with incongruent decoding, suggesting the effectiveness of leveraging videos as inputs. Interestingly, we also find that the drop of the BLEU and COMET scores is larger on AMBIGUOUS than that on UNAMBIGUOUS, which further proves our point that videos are more crucial for disambiguation.
## 6.3 Results On Public Datasets
Next, we conduct experiments on public datasets, VATEX and HOW2. Results are displayed in Table 7. On HOW2, our best system achieves higher BLEU score compared to the text-only model. However, the text-only model achieves best COMET and BLEURT, compared to all systems that take videos as inputs. On VATEX, our model with SLOWFAST features also achieves the highest scores on three evaluation metrics, compared to the text-only model and comparisons. Notably, the model with SLOWFAST features is significantly better than models with VIT features, which is probably because VATEX focuses on human actions and the SLOWFAST model is trained on the action recognition dataset. However, the performance gap between the TEXT-ONLY and our model + SLOWFAST + CTR is marginal. After we introduce 20M
external MT data, we observe that the TEXT-ONLY
and our best system are comparable on automatic metrics. Since the cross-modal encoder often requires large-scale paired videos and text to train robust representations, our model does not achieve huge performance gain on VATEX and HOW2. We hope our BIGVIDEO dataset can serve as a comple-
System BLEU COMET BLEURT
| HOW2 | |
|-------------------------------------|-------|
| w/o pretraining Ours | VATEX |
| w/o pretraining Ours w/ pretraining | |
TEXT-ONLY 57.57 65.95 72.52
Sanabria et al. (2018) 54.40 - —
GATED FUSHION 57.65 65.12 72.26 SELECTIVE ATTN 57.51 65.77 72.35
+ VIT + CTR 57.95 65.53 72.46 + SLOWFAST + CTR 57.78 65.58 72.41
TEXT-ONLY 35.01 15.32 56.99
Wang et al. (2019) 30.11 4.50 53.85
GATED FUSHION 33.79 13.55 55.66
SELECTIVE ATTN 34.25 13.55 56.80
+ VIT + CTR 34.84 12.44 56.25 + SLOWFAST + CTR 35.15 15.65 57.06
TEXT-ONLY 37.57 25.22 59.33
+ ViT + CTR 37.34 24.07 58.87 + SLOWFAST + CTR 37.58 25.05 59.20
Table 7: Experimental results on HOW2 and VATEX.
Complete results with standard deviations can be seen in Appendix A.
ment to existing video-guided machine translation datasets.
## 7 Conclusion
In this paper, we present BIGVIDEO ——a largescale video subtitle Translation dataset for multimodal machine translation. We collect 155 thousand videos accompanied by over 4.5 million bilingual subtitles. Specially, we annotate two test subsets: AMBIGUOUS where videos are required for disambiguation and UNAMBIGUOUS where text contents are self-contained for translation. We also propose a cross-modal encoder enhanced with a contrastive learning objective to build cross-modal interaction for machine translation. Experimental results prove that videos consistently improve the NMT model in terms of the translation evaluation metrics and terminology-targeted metrics. Moreover, human annotators prefer our system outputs, compared to the strong text-only baseline. We hope our BIGVIDEO dataset can facilitate the research of multi-modal machine translation.
## Limitations
BIGVIDEO is collected from two video platforms Xigua and YouTube. All videos are publicly available. However, some videos may contain user information (e.g., portraits) or other sensitive information. Similar to VATEX and HOW2, we will release our test set annotation and the code to reproduce our dataset. For videos without copyright or sensitive issues, we will make them public but limit for research, and non-commercial use (We will require dataset users to apply for access). For videos with copyright or sensitive risks, we will provide ids, which can be used to download the video. This step will be done under the instruction of professional lawyers.
Though we show that our model with video inputs helps disambiguation, we find that our model could yield incorrect translation due to the lack of world knowledge. For example, model can not distinguish famous table tennis player Fan Zhengdong and give correct translation. We find this is due to video pretrained models are often trained on action dataset (e.g., Kinetics-600 (Long et al.,
2020)) and hardly learn such world knowledge. In this work, we do not further study methods that leverage world knowledge.
## Ethical Considerations
Collection of BIGV**IDEO**. We comply with the terms of use and copyright policies of all data sources during collection from the YouTube and Xigua platform. User and other sensitive information is not collected to ensure the privacy of video creators. The data sources are publicly available videos and our preprocessing procedure does not involve privacy issues. For all annotation or human evaluation mentioned in the paper, we hire seven full-time professional translators in total and pay them with market wage. All of our annotators are graduates.
Potential Risks of BIGVIDEO **and our model.**
While BIGVIDEO consists of high-quality parallel subtitles, we recognize that our data may still contain incorrect samples. Our model may as well generate degraded or even improper contents. As our dataset is based on YouTube or Xigua videos, models trained on our dataset might be biased towards US or Chinese user perspective, which could yield outputs that are harmful to certain populations.
## Acknowledgements
The project was supported by the National Key Research and Development Program of China(No.
2020AAA0108004), National Natural Science Foundation of China (No. 62276219) and Natural Science Foundation of Fujian Province of China
(No. 2020J06001). We also thank the reviewers for their insightful comments.
## References
Md Mahfuz Ibn Alam, Antonios Anastasopoulos, Laurent Besacier, James Cross, Matthias Gallé, Philipp Koehn, and Vassilina Nikoulina.
2021. On the evaluation of machine translation for terminology consistency. *arXiv preprint* arXiv:2106.11891.
Mikel Artetxe and Holger Schwenk. 2019. Massively multilingual sentence embeddings for zero-shot cross-lingual transfer and beyond.
Transactions of the Association for Computational Linguistics.
Ozan Caglayan, Menekse Kuyu, Mustafa Sercan Amac, Pranava Madhyastha, Erkut Erdem, Aykut Erdem, and Lucia Specia. 2021.
Cross-lingual visual pre-training for multimodal machine translation. arXiv preprint arXiv:2101.10044.
Ozan Caglayan, Pranava Madhyastha, Lucia Specia, and Loïc Barrault. 2019. Probing the need for visual context in multimodal machine translation. In *Proc. of NAACL*.
João Carreira and Andrew Zisserman. 2017. Quo vadis, action recognition? A new model and the kinetics dataset. In *Proc. of CVPR*.
Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey E. Hinton. 2020. A simple framework for contrastive learning of visual representations. In *Proc. of ICML*.
Alexis Conneau, Ruty Rinott, Guillaume Lample, Adina Williams, Samuel R. Bowman, Holger Schwenk, and Veselin Stoyanov. 2018. Xnli:
Evaluating cross-lingual sentence representations. In *Proc. of EMNLP*.
Dipankar Das, Naveen Mellempudi, Dheevatsa Mudigere, Dhiraj D. Kalamkar, Sasikanth Avancha, Kunal Banerjee, Srinivas Sridharan, Karthik
Vaidyanathan, Bharat Kaul, Evangelos Georganas, Alexander Heinecke, Pradeep Dubey, Jesús Corbal, Nikita Shustrov, Roman Dubtsov, Evarist Fomenko, and Vadim O. Pirogov. 2018.
Mixed precision training of convolutional neural networks using integer operations. In Proc. of ICLR.
Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, and Neil Houlsby. 2021.
An image is worth 16x16 words: Transformers for image recognition at scale. In *Proc. of ICLR*.
Desmond Elliott, Stella Frank, Khalil Sima'an, and Lucia Specia. 2016. Multi30K: Multilingual English-German image descriptions. In Proc. of the 5th Workshop on Vision and Language.
Desmond Elliott and Ákos Kádár. 2017. Imagination improves multimodal translation. In Proc.
of IJCNLP.
Christoph Feichtenhofer, Haoqi Fan, Jitendra Malik, and Kaiming He. 2019. Slowfast networks for video recognition. In *Proc. of ICCV*.
Spandana Gella, Desmond Elliott, and Frank Keller.
2019. Cross-lingual visual verb sense disambiguation. In *Proc. of ACL*.
Weiqi Gu, Haiyue Song, Chenhui Chu, and Sadao Kurohashi. 2021. Video-guided machine translation with spatial hierarchical attention network.
In *Proc. of ACL: Student Research Workshop*.
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep residual learning for image recognition. In *Proc. of CVPR*.
Tosho Hirasawa, Zhishen Yang, Mamoru Komachi, and Naoaki Okazaki. 2020. Keyframe segmentation and positional encoding for video-guided machine translation challenge 2020. *arXiv* preprint arxiv:2006.12799.
Julian Hitschler, Shigehiko Schamoni, and Stefan Riezler. 2016. Multimodal pivots for image caption translation. In *Proc. of ACL*.
Zhibin Lan, Jiawei Yu, Xiang Li, Jian Luan Wen Zhang, Bin Wang, Degen Huang, and Jinsong Su. 2023. Exploring better text image trans-
lation with multimodal codebook. In *Proc. of* ACL.
Bei Li, Chuanhao Lv, Zefan Zhou, Tao Zhou, Tong Xiao, Anxiang Ma, and JingBo Zhu. 2022a. On vision features in multimodal machine translation. In *Proc. of ACL*.
Gen Li, Nan Duan, Yuejian Fang, Ming Gong, and Daxin Jiang. 2020. Unicoder-vl: A universal encoder for vision and language by cross-modal pre-training. In *Proc. of AAAI*.
Jiaoda Li, Duygu Ataman, and Rico Sennrich.
2021. Vision matters when it should: Sanity checking multimodal machine translation models. In *Proc. of EMNLP*.
Yihang Li, Shuichiro Shimizu, Weiqi Gu, Chenhui Chu, and Sadao Kurohashi. 2022b. VISA:
An ambiguous subtitles dataset for visual sceneaware machine translation. In *Proc. of LREC*.
Huan Lin, Fandong Meng, Jinsong Su, Yongjing Yin, Zhengyuan Yang, Yubin Ge, Jie Zhou, and Jiebo Luo. 2020. Dynamic context-guided capsule network for multimodal machine translation.
In *Proc. of ACMMM*.
Fuchen Long, Ting Yao, Zhaofan Qiu, Xinmei Tian, Jiebo Luo, and Tao Mei. 2020. Learning to localize actions from moments. In *Proc. of ECCV*.
Ninareh Mehrabi, Palash Goyal, Apurv Verma, Jwala Dhamala, Varun Kumar, Qian Hu, KaiWei Chang, Richard S. Zemel, Aram Galstyan, and Rahul Gupta. 2022. Is the elephant flying?
resolving ambiguities in text-to-image generative models. *arXiv preprint arxiv:2211.12503*.
Antoine Miech, Dimitri Zhukov, Jean-Baptiste Alayrac, Makarand Tapaswi, Ivan Laptev, and Josef Sivic. 2019. Howto100m: Learning a textvideo embedding by watching hundred million narrated video clips. In *Proc. of ICCV*.
Jihyung Moon, Hyunchang Cho, and Eunjeong L.
Park. 2020. Revisiting round-trip translation for quality estimation. In *Proc. of EACL*.
Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, and Michael Auli. 2019. fairseq: A fast, extensible toolkit for sequence modeling. In *Proc. of* NAACL.
Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In *Proc.*
of ACL.
Matt Post. 2018. A call for clarity in reporting BLEU scores. In *Proc. of CMT*.
Di Qi, Lin Su, Jia Song, Edward Cui, Taroon Bharti, and Arun Sacheti. 2020. Imagebert:
Cross-modal pre-training with large-scale weaksupervised image-text data. arXiv preprint arXiv:2001.07966.
Ricardo Rei, Craig Stewart, Ana C Farinha, and Alon Lavie. 2020. COMET: A neural framework for MT evaluation. In *Proc. of EMNLP*.
Shaoqing Ren, Kaiming He, Ross B. Girshick, and Jian Sun. 2015. Faster R-CNN: towards realtime object detection with region proposal networks. In *Proc. of NIPS*.
Ramon Sanabria, Ozan Caglayan, Shruti Palaskar, Desmond Elliott, Loïc Barrault, Lucia Specia, and Florian Metze. 2018. How2: A large-scale dataset for multimodal language understanding. arXiv preprint arxiv:1811.00347.
Thibault Sellam, Dipanjan Das, and Ankur Parikh.
2020. BLEURT: Learning robust metrics for text generation. In *Proc. of ACL*.
Matthew G. Snover, Bonnie J. Dorr, Richard M.
Schwartz, Linnea Micciulla, and John Makhoul.
2006. A study of translation edit rate with targeted human annotation. In *Proc. of AMTA*.
Kihyuk Sohn. 2016. Improved deep metric learning with multi-class n-pair loss objective. In Proc.
of NIPS.
Jinsong Su, Jinchang Chen, Hui Jiang, Chulun Zhou, Huan Lin, Yubin Ge, Qingqiang Wu, and Yongxuan Lai. 2021. Multi-modal neural machine translation with deep semantic interactions.
Inf. Sci.
Jörg Tiedemann. 2012. Parallel data, tools and interfaces in OPUS. In *Proc. of LREC*.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In *Proc. of NIPS*.
Feng Wang and Huaping Liu. 2021. Understanding the behaviour of contrastive loss. In Proc. of CVPR.
Josiah Wang, Josiel Figueiredo, and Lucia Specia.
2022. MultiSubs: A large-scale multimodal and multilingual dataset. In *Proc. of LREC*.
Xin Wang, Jiawei Wu, Junkun Chen, Lei Li, YuanFang Wang, and William Yang Wang. 2019.
Vatex: A large-scale, high-quality multilingual dataset for video-and-language research. In *Proc.*
of ICCV.
Zhiyong Wu, Lingpeng Kong, Wei Bi, Xiang Li, and Ben Kao. 2021. Good for misconceived reasons: An empirical revisiting on the need for visual context in multimodal machine translation.
In *Proc. of ACL*.
Qiaolin Xia, Haoyang Huang, Nan Duan, Dongdong Zhang, Lei Ji, Zhifang Sui, Edward Cui, Taroon Bharti, and Ming Zhou. 2021. Xgpt:
Cross-modal generative pre-training for image captioning. In *Proc. of NLPCC*.
Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Russ R Salakhutdinov, and Quoc V Le.
2019. Xlnet: Generalized autoregressive 897 pretraining for language understanding. In Proc. of NIPS.
Zhishen Yang, Tosho Hirasawa, Mamoru Komachi, and Naoaki Okazaki. 2022. Why videos do not guide translations in video-guided machine translation? an empirical evaluation of video-guided machine translation dataset. *J. Inf. Process.*
Shaowei Yao and Xiaojun Wan. 2020. Multimodal transformer for multimodal machine translation. In *Proc. of ACL*.
Yongjing Yin, Fandong Meng, Jinsong Su, Chulun Zhou, Zhengyuan Yang, Jie Zhou, and Jiebo Luo.
2020. A novel graph-based multi-modal fusion encoder for neural machine translation. In Proc.
of ACL.
Yaoming Zhu, Zewei Sun, Shanbo Cheng, Yuyang Huang, Liwei Wu, and Mingxuan Wang. 2022.
Beyond triplet: Leveraging the most data for multimodal machine translation. arXiv preprint arXiv:2212.10313.
## A Complete Results B Data Collection B.1 Preprocessing B.2 Video Category Classifier Details
The Complete results with standard deviations can be found in Table 8, Table 9 and Table 10. Subtitles are organized as a list of text chunks. Each chunk contains both English and Chinese lines and a corresponding timestamp. To obtain complete sentences, we start processing subtitles by merging chunks. Since English subtitles are often with strong punctuation marks, we greedily merge continuous segments (The start time of the second segment and the end time of the first segment are within 0.5 seconds) until an end mark is met at the end of the segment. To preserve context, we keep merging continuous sentences until a maximum time limit of 15 seconds is reached. Finally, we pair each merged segment with the video clip from the time interval corresponding to the segment.
English sentences from both YouTube and Xigua have an average of 4.6 fluency score, which shows that English subtitles are fluent and rarely have errors. In terms of translation quality, subtitles collected from Xigua have an average of 4.2 translation quality score, which indicates most of the subtitle pairs are equivalent or near-equivalent. In YouTube data, we find about 20 percent of sentence pairs are not equivalent or have major errors such as mistranslation or omission.
To remove low-quality pairs, we try three commonly-used quality estimation scores: 1) the COMET score, 2) the Euclidean distance based on the multilingual sentence embedding (Artetxe and Schwenk, 2019), and 3) the round-trip translation BLEU score (Moon et al., 2020). We filter out pairs if more than one score is lower than the threshold
(set to 0.1, 4 and 20, respectively). On annotated samples, the average translation quality reaches 4.1 after cleaning.
To construct a large-scale video-guided dataset, we collect videos from a variety of domains and categorized them into 15 classes based on their video categories in YouTube. We use the official youtubedl8toolkit to retrieve video categories and other metadata from YouTube. To ensure consistency
| System | BLEU | COMET | BLEURT | | | | | | |
|----------------------------|-----------|-------------|-----------|-----------|-------------|-----------|-----------|-------------|-----------|
| ALL | AMBIGUOUS | UNAMBIGUOUS | ALL | AMBIGUOUS | UNAMBIGUOUS | ALL | AMBIGUOUS | UNAMBIGUOUS | |
| w/o pre-training TEXT-ONLY | 43.970.10 | 43.590.29 | 44.190.20 | 37.310.34 | 33.060.24 | 39.770.56 | 61.110.09 | 59.680.21 | 61.930.13 |
| GATED FUSHION + VIT | 44.330.13 | 44.120.33 | 44.450.12 | 37.880.32 | 34.470.62 | 39.850.35 | 61.250.07 | 60.190.08 | 61.860.08 |
| SELECTIVE ATTN +VIT | 44.390.13 | 44.200.22 | 44.510.10 | 38.480.19 | 34.840.39 | 40.590.32 | 61.370.10 | 60.300.24 | 62.070.13 |
| Ours + VIT | 44.260.20 | 44.100.24 | 44.370.21 | 38.130.65 | 34.750.79 | 40.080.71 | 61.240.17 | 60.290.30 | 61.780.20 |
| + SLOWFAST | 44.210.17 | 44.120.27 | 44.260.27 | 37.810.38 | 34.990.25 | 39.440.53 | 61.220.09 | 60.280.08 | 61.770.17 |
| + VIT + CTR | 44.450.13 | 44.480.16 | 44.400.11 | 38.150.56 | 35.760.42 | 39.540.75 | 61.360.17 | 60.720.21 | 61.730.16 |
| + SLOWFAST + CTR | 44.440.12 | 44.200.12 | 44.580.17 | 38.370.41 | 35.180.71 | 40.220.53 | 61.310.10 | 60.410.09 | 61.820.14 |
| w/ pre-training TEXT-ONLY | 44.450.11 | 43.890.19 | 44.790.16 | 38.360.33 | 33.400.29 | 41.230.54 | 61.410.13 | 59.850.16 | 62.310.19 |
| + VIT +CTR | 44.830.09 | 44.620.10 | 44.960.11 | 39.440.51 | 36.420.41 | 41.190.89 | 61.760.08 | 60.750.12 | 62.340.17 |
| + SLOWFAST + CTR | 44.770.31 | 44.430.15 | 44.970.45 | 39.260.31 | 36.030.52 | 41.120.76 | 61.710.17 | 60.520.11 | 62.400.25 |
Table 8: Complete sacreBLEU(%), COMET(%) and BLEURT(%) scores on BIGVIDEO testset. "+ CTR" denotes our cross-model framework with contrastive learning loss. "ALL" represents the results on the whole test set. All results are mean values of five different random seeds. The best result in each group is in **bold**.
| System | Exact | Window | Window | 1-TERm | | | | |
|----------------------------|------------------------------------------|-----------|------------|-----------|--------|------|-------|--------|
| Match | Overlap-2 | Overlap-3 | | | | | | |
| w/o pre-training TEXT-ONLY | 23.030.67 | 14.220.41 | 14.280.42 | 49.500.13 | | | | |
| GATED FUSHION | 23.680.52 | 14.600.56 | 14.760.47 | 49.680.19 | | | | |
| SELECTIVE ATTN | 23.660.47 | 14.950.76 | 15.080.65 | 49.770.22 | | | | |
| Ours + VIT | 24.270.37 | 15.090.47 | 15.270.50 | 49.780.41 | | | | |
| + SLOWFAST | 24.050.58 | 14.970.77 | 15.130.85 | 49.320.44 | | | | |
| + VIT + CTR | 25.020.74 | 15.560.65 | 15.770.55 | 49.910.20 | | | | |
| + SLOWFAST +CTR | 24.080.31 | 15.040.14 | 15.120.17 | 49.320.18 | | | | |
| w/ pre-training TEXT-ONLY | 22.710.72 | 14.350.54 | 14.370.61 | 49.730.21 | | | | |
| + VIT + CTR | 24.300.67 | 15.170.58 | 15.390.52 | 50.280.39 | | | | |
| + SLOWFAST + CTR | 23.620.58 | 14.760.48 | 14.900.36 | 50.040.13 | System | BLEU | COMET | BLEURT |
| HOW2 | | | | | | | | |
| w/o pretraining TEXT-ONLY | 57.570.26 | 65.950.71 | 72.520.24 | | | | | |
| Sanabria et al. (2018) | 54.40 | - | - | | | | | |
| GATED FUSHION | 57.650.35 | 65.120.43 | 72.260.27 | | | | | |
| SELECTIVE ATTN | 57.510.20 | 65.770.93 | 72.350.23 | | | | | |
| Ours + VIT + CTR | 57.950.24 | 65.530.68 | 72.460.25 | | | | | |
| + SLOWFAST +CTR | 57.780.09 | 65.580.71 | 72.410.15 | | | | | |
| VATEX | | | | | | | | |
| w/o pretraining TEXT-ONLY | 35.010.14 | 15.320.45 | 56.990.307 | | | | | |
| Wang et al. (2019) | 30.110.72 | 4.500.81 | 53.850.55 | | | | | |
| GATED FUSHION | 33.790.14 | 13.550.42 | 55.660.11 | | | | | |
| SELECTIVE ATTN | 34.250.30 | 13.550.10 | 56.800.11 | | | | | |
| Ours + VIT + CTR | 34.840.25 | 12.440.64 | 56.250.25 | | | | | |
| + SLOWFAST + CTR | 35.150.24 | 15.650.35 | 57.060.06 | | | | | |
| w/ pretraining TEXT-ONLY | 37.570.35 | 25.220.66 | 59.330.09 | | | | | |
| + ViT + CTR | 37.340.14 | 25.071.21 | 58.870.33 | | | | | |
| + SLOWFAST + CTR | 37.580.15 | 25.050.58 | 59.200.13 | | | | | |
| Table 9: | Complete terminology-targeted results on | | | | | | | |
| BIGVIDEO test set. All results are mean values of five different random seeds with standard deviations as subscripts. The Best result in each group is in bold. between YouTube and Xigua videos, we train a category classifier to classify the video category tags of Xigua videos and those YouTube videos whose category information are missing. We train the | | | | | | | | |
between YouTube and Xigua videos, we train a category classifier to classify the video category tags of Xigua videos and those YouTube videos whose category information are missing. We train the category classifier using the English subtitles and category information of pre-labeled videos and use it to predict the category tags for the rest of the unlabeled videos. Specifically, we first group consecutive subtitles in a video by 5 and then concatenate them as input for the classifier. During inference, we predicted the category tags of groups in each video and obtain the video's label by voting. The category classifier model was fine-tuned based on the pre-trained XLNet-large-cased model, which performed well on other classification tasks such as XNLI (Yang et al., 2019; Conneau et al., 2018).
A statistical summary of the video categories of the train set can be found in Figure 3.
In addition, we also count the category tags of the videos in the test set, as listed in Table 11. Similar to the train set, we directly obtain the category tags for most YouTube videos directly from YouTube and predict the tags of the remaining videos using the category classifier mentioned earlier. The statistics for video categories show that the videos in our dataset are diverse in terms of domain, both in the training and test sets. These statistics for the video categories provide a more comprehensive view of BIGVIDEO.
| Category | Xigua | YouTube | All |
|------------------|---------|-----------|-------|
| People & Blogs | 15 | 29 | 44 |
| Entertainment | 48 | 14 | 62 |
| Howto & Sytle | 11 | 37 | 48 |
| Education | 33 | 3 | 36 |
| Travel & Events | 14 | 4 | 18 |
| Music | 0 | 0 | 0 |
| Gaming | 13 | 6 | 19 |
| Sports | 5 | 1 | 6 |
| Science & Tech | 30 | 2 | 32 |
| Comedy | 0 | 3 | 3 |
| Film & Animation | 3 | 1 | 4 |
| Pets & Animals | 12 | 6 | 18 |
| News & Politics | 1 | 1 | 2 |
| Autos & Vehicles | 9 | 2 | 11 |
| Activism | 0 | 2 | 2 |
| All | 194 | 111 | 305 |
| Hyperparameters | BIGVIDEO | HOW2 | VATEX |
|---------------------|------------|----------|----------|
| GPUs | 8 | 2 | 1 |
| Batch Size | 4,096 | 4,096 | 4,096 |
| Dropout | 0.1 | 0.3 | 0.3 |
| Weight Decay | 0.1 | 0.1 | 0.1 |
| Learning Rate | 7e-4 | 5e-4 | 1e-3 |
| Warmup Steps | 4000 | 4000 | 2000 |
| Layer Normalization | PostNorm | PostNorm | PostNorm |
Table 12: Training hyperparameters details.
## C Experimental Detatils C.1 Dataset
We additionally conduct experiments on two public video-guided translation datasets, VATEX (Wang et al., 2019) and HOW2 (Sanabria et al., 2018).
The HOW2 dataset is a collection of instructional videos from Youtube. The corpus contains 184,948 English-Portuguese pairs for training, each associated with a video clip. We utilize val (2,022) as the validation set and dev5 (2,305) as the testing set. The VATEX dataset is a video-and-language dataset containing over 41,250 unique videos. The released version of the bilingual collection includes 129,955 sentence pairs for training, 15,000 sentence pairs for validation, and 30,000 for testing.
Since the testing set is not publicly available, we split the original validation set into two halves for validation and testing. Some video clips of VA-TEX are no longer available on the Youtube. So after removal, the used corpus contains 115,480
![13_image_0.png](13_image_0.png)
![13_image_1.png](13_image_1.png)
sentence pairs for training, 6,645 sentence pairs for validation, and 6,645 sentence pairs for testing, each associated with a video clip.
## C.2 Training And Implementation Details
More training details can be found in Table 12.
For the pretraining on WMT19 Zh-En dataset, we utilize the same training parameters as that on BIGVIDEO and train the model for 300k steps.
## D The Choice Of Hyper-Parameters
Temperature for contrastive learning objective.
Performances of different temperature are presented in Figure 7. Here we fix the weight for contrastive learning objective to 1. On the validation set, there exists no significant difference in BLEU scores among choices of temperature. For better translation performance, a small temperature is more suitable.
Weight for contrastive learning objective. We fix the τ = 0.002 and adjust the weight from 0.5 to 1.5. We can observe that contrastive learning objective with varying weights benefits the model to different degrees. 1.0 is the most suitable weight for our system.
Length of Video Frames. To investigate how the
![14_image_1.png](14_image_1.png)
![14_image_2.png](14_image_2.png)
Source **Subtitle**: ……Number one **drive shot** requires smaller swing but more focus.
Target Subtitle: ……第一、抽球。挥杆幅度要小, 但是要集中力量。
System w/o **Video**: ……第一个驾驶镜头需要较小 的挥杆,但更多的焦点。
System w/ **Video**: ……第一,抽球需要更小的挥 杆动作,但要集中注意力 Figure 10: A case. The phrases with semantic ambiguity are highlighted in red. The wrong translations are in blue and the correct translations are in **yellow**.
length of video frames affects translation, we adjust the number of sampled video frames in [1,12,36].
Figure 9 depicts their performances. Here the video features we use are 2D features extracted by ViT.
We can observe that when only one video frame is sampled, the video degrades into one image and its positive impact on the system is reduced. A
maximum of 12 video frames achieves the best performance.
## E Case Study
We additionally present two cases in the appendix.
In figure 10, the phrase "drive shot" is better translated by our system by understanding the meaning of "shot". In Figure 11, we can find both the textonly baseline and our system fail to correctly translate the source title. The objects in the video are cards of Duel Monsters, which need world knowledge to understand. So the source title is complicated for text-only and our system.
Source **Subtitle**: I spent And there we go , a
![14_image_0.png](14_image_0.png) Blackfeather Darkrage Dragon , which is like Black Winged Dragon. **Ground Truth:** 好了,玄翼龙黑羽,就像黑翼龙。
System w/o **Video**: 好了,黑羽毛darkragedragon就 像黑风龙。
System w/ **Video**:这是黑羽毛的龙龙,就像黑窗龙。
Figure 11: A case. The phrases with semantic ambiguity are highlighted in red. The wrong translations are in blue and the correct translations are in **yellow**.
## F Annotation Guidelines
We hire seven full-time annotators who are fluent in both Chinese and English. They are recruited to annotate translation data or conduct human evaluations. The annotators are shown one English and corresponding Chinese subtitle of the given video clip. After watching videos and reading subtitles, they are required to decide whether videos are related to subtitles. If not, the sample will be discarded. Then the annotators are required to rate on three aspects:
- **Fluency Score (1-5, 1 is the worst and 5 is**
the best): If the audio is in English, the annotators will need to check whether the English subtitle is the transcript of the audio. If the audio is not in English, the annotators will need to rate if the sentence is grammatically correct.
- **Translation Quality (1-5, 1 is the worst and**
5 is the best): Whether the Chinese subtitle is equivalent in meaning to the English Subtitle.
- **Ambiguous (0/1):** The annotators need to decide whether the video information is required to complete the translation. "1" means "the video information is required" and otherwise
"0".
## G Human Evaluation Guidelines
We hire three annotators to conduct the human evaluation. Each annotator is required to rate 100 samples from AMBIGUOUS and 100 samples from UNAMBIGUOUS on **translation quality** and rank two systems. The definition of the **translation**
quality is the same as that in annotation guidelines.
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
✓ A1. Did you describe the limitations of your work?
We discuss the limitations of our work in Section Limitation.
✓ A2. Did you discuss any potential risks of your work?
We discuss the limitations of our work in Section Ethical Considerations.
✓ A3. Do the abstract and introduction summarize the paper's main claims?
It can be seen in the abstract and the introduction of the paper.
✓ A4. Have you used AI writing assistants when working on this paper?
We use Grammarly to check our writing.
## B ✓ **Did You Use Or Create Scientific Artifacts?**
We use some publicly available scientific artifacts, which can be seen in Section Experiments and Appendix.
✓ B1. Did you cite the creators of artifacts you used?
We cite the creators of the datasets and code we use, which can be seen in Section Experiments and Appendix.
✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
We discuss this problem in Section Ethical Considerations.
✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
We have discussed these problems in the Section Ethical Considerations and Limitations. All the datasets and code repos we use are for research purposes and conform to the intention of the original creators. For the dataset we create, we specify intended use in the Limitations.
✓ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
We avoid collecting user information or other sensitive information. However, in our dataset, we recognize that there might exist videos that contain sensitive information, such as portraits. Upon publication, we plan to release video features and publicly available url for videos that may contain sensitive information.
✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
We provide information about the artifacts in Section Dataset and Appendix.
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
We report relevant statistics in Section Dataset and Appendix.
The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance.
## C ✓ **Did You Run Computational Experiments?**
We provide information about computational experiments in Section Experiments.
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
We provide the information in Section Experiments and Appendix.
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
We provide the information in Section Experiments and Appendix.
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
We provide the information in Section Results and Appendix. We report results which are the mean values of five runs and their standard deviations.
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
We provide the information in Section Experiments.
## D ✓ **Did You Use Human Annotators (E.G., Crowdworkers) Or Research With Human Participants?** We Provide The Information In Appendix.
✓ D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
We provide the information in Appendix.
✓ D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)?
We provide the information in Section Limitations.
✓ D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
We provide the information in Section Dataset and Appendix.
✓ D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
Our data collection protocol is followed under the instruction of professional attorneys.
✓ D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
We report in Appendix. |
ren-etal-2023-constructing | Constructing Procedural Graphs with Multiple Dependency Relations: A New Dataset and Baseline | https://aclanthology.org/2023.findings-acl.536 | Current structured and semi-structured knowledge bases mainly focus on representing descriptive knowledge but ignore another commonsense knowledge (Procedural Knowledge). To structure the procedural knowledge, existing methods are proposed to automatically generate flow graphs from procedural documents. They focus on extracting sequential dependency between sentences but neglect another two important dependencies (i.e., inclusion dependency and constraint dependency) in procedural documents. In our paper, we explore a problem of automatically generating procedural graph with multiple dependency relations to extend the flow graph constructed by existing methods and propose a procedural graph construction method with syntactic information and discourse structures. A new dataset (WHPG) is built and extensive experiments are conducted to evaluate the effectiveness of our proposed model. | # Constructing Procedural Graphs With Multiple Dependency Relations: A New Dataset And Baseline
Haopeng Ren1,2∗
, Yushi Zeng1,2∗
, Yi Cai1,2†
, Bihan Zhou1,2**, Zetao Lian**1,2 1School of Software Engineering, South China University of Technology, Guangzhou, China 2Key Laboratory of Big Data and Intelligent Robot (South China University of Technology),
Ministry of Education [email protected]
## Abstract
Current structured and semi-structured knowledge bases mainly focus on representing descriptive knowledge but ignore another commonsense knowledge (Procedural Knowledge). To structure the procedural knowledge, existing methods are proposed to automatically generate flow graphs from procedural documents. They focus on extracting sequential dependency between sentences but neglect another two important dependencies
(i.e., inclusion dependency and constraint dependency) in procedural documents. In our paper, we explore a problem of automatically generating procedural graph with multiple dependency relations to extend the flow graph constructed by existing methods and propose a procedural graph construction method with syntactic information and discourse structures.
A new dataset (WHPG) is built and extensive experiments are conducted to evaluate the effectiveness of our proposed model.
## 1 Introduction
Many well-known structured knowledge bases
(e.g., *Wikidata*1) and semi-structured knowledge bases (e.g., *Wikipedia*2) have been built and assist many applications to achieve remarkable performance, such as the question-answering (QA)
(Li and Moens, 2022), information retrieval (Zhou et al., 2022)) and recommendation systems (Cui and Lee, 2022). They focus on representing the *descriptive knowledge* i.e. the knowledge of attributes or features of things (Yang and Nyberg, 2015), but lack another kind of commonsense knowledge— Procedural Knowledge. Specifically, the knowledge which is in the form of procedures or sequences of actions to achieve particular goals is called as procedural knowledge, such as cooking recipes and maintenance manuals.
∗The authors contribute equally. †Corresponding author 1http://wikidata.org/
2http://wikipedia.org/
Generally, most procedural knowledge is expressed in unstructured texts (e.g., websites or books of cooking recipes). To extract the structured procedural knowledge, existing methods (Honkisz et al., 2018; Qian et al., 2020; Pal et al., 2021)
are designed to transform the unstructured procedural documents into flow graphs (or workflows)
which can effectively present the main operations and their ordering relations expressed in procedural documents. However, they only focus on extracting the *sequential dependency* (i.e., the dependency relation "*Next*" in Figure 1) between steps
(operational sentences) in a procedural document, which is insufficient in real-world scenarios. As shown in Figure 1, sentences S2 and S3 are the sub-actions of sentence S1, which provide more fine-grained operational statements to finish operation S1. There is another kind of dependencyinclusion dependency between sentences S1 and S2 (or between S1 and S3). Nevertheless, the flow graphs constructed by current methods (Qian et al., 2020; Pal et al., 2021) ignore the inclusion dependencies among sentences and wrongly connect sentences S1 and S2 as a "*Next*" relation, as shown in Figure 1.
Furthermore, declarative (or descriptive) sentences commonly appear in real-world procedural documents, which state the constraints (e.g, reasons, conditions and effects) of doing things. Current researches have shown that declarative sentences in procedural documents can provide important clues for the procedural semantic understanding and reasoning (Georgeff and Lansky, 1986)
in many downstream tasks such as operation diagnosis (Luo et al., 2021) and technical maintenance (Hoffmann et al., 2022). However, current knowledge structure methods (Qian et al., 2020; Pal et al., 2021) simply transform the declarative sentences into an information flow in a flow graph
(e.g., S7 → S8 in Figure 1), which neglects the constraint dependency between operational and
![1_image_0.png](1_image_0.png)
![1_image_1.png](1_image_1.png)
declarative sentences. As shown in Figure 1 , the declarative sentences S 7 and S 8 respectively describe the effect constraint and condition constraint for the execution of operational sentence S 6.
Based on the above motivations, we explore a problem of automatically constructing a procedural graph with multiple dependency relations between sentences in a procedural document. According to our observation, the syntactic structures of sentences can provide obvious features for identifying sentence types and then assist to detect the dependency relations between sentences. As shown in Figure 2 , the syntactic pattern
"verb(VB) → noun(NN) " is a strong indicator for classifying sentences S 2 and S 3 into operation types. Meanwhile, the sentence type prediction can further benefit the dependency relation detection between sentences. For example, the constraint dependency cannot exist between two sentences with an operation type. Moreover, inspired by researches in discourse parsing (Zhu et al., 2019; Wang et al., 2021 ), we observe that the contextual dependency structures (which is called as discourse structures) can provide features to recognize the dependency relations between sentences. As shown in Figure 1, the dependency relation between S 1 and S 3 can be inferred as Sub-Action according to their contextual dependency structure S 1 Sub-Action S 2 Next-Action S 3.
In our paper, we design a procedural graph construction method to detect the multiple dependency relations between sentences in procedural documents by utilizing syntactic information and discourse structures. Specifically, a GCN-based syntactic structure encoder with multi-query attention is proposed to capture the syntactic features of sentences and improve the ability to distinguish between operational and declarative sentences. Moreover, a structure-aware edge encoder is designed to assist the inference of dependencies between sentences by infusing the contextual structure features in procedural documents. Furthermore, due to the lack of dependencies between sentences in existing procedural text datasets, a new dataset WHPG is built based on the wikiHow 3 database.
To summarize, the main contributions of this paper are listed as follows:
- We explore a problem of automatically generating procedural graphs with multiple dependencies from unstructured procedural documents, aiming to extend the flow graphs in existing methods that ignore dependencies of sentences. To the best of our knowledge, our work is the first study focusing on generating procedural graphs with multiple dependencies from procedural documents.
3http://www.wikihow.com/
- We design a GCN-based syntactic structure encoder and a discourse-structure aware edge encoder to effectively identify node types and assist the detection of dependency relations in procedural documents.
- We create a new procedural text dataset WHPG which builds dependency relations between operational and declarative sentences.
Extensive experiments are conducted on two public datasets from different domains and our created dataset to evaluate the effectiveness of our model in automatically generating the procedural graph.
## 2 Related Work
Many prominent knowledge bases such as *WikiData* (Vrandeciˇ c and Krötzsch ´ , 2014), *Wikipedia*
(Lehmann et al., 2015) and *FreeBase* (Bollacker et al., 2007) mainly focus on representing *descriptive knowledge* (i.e., the knowledge of attributes or features of things (Yang and Nyberg, 2015)). But they do not sufficiently cover *procedural knowledge* (i.e., the knowledge of procedures or sequences of actions for achieving the particular goals (Georgeff and Lansky, 1986)). Recently, to obtain the structured procedural knowledge, two categories of methods (i.e., entitylevel and sentence-level based methods) are proposed. Specifically, the entity-level based methods (Jermsurawong and Habash, 2015; Feng et al.,
2018; Mysore et al., 2019; Qian et al., 2020; Xu et al., 2020; Yamakata et al., 2020; Jiang et al.,
2020; Fang et al., 2022) aim to extract the predefined entities and their relations from unstructured procedural texts (e.g., cooking recipes). However, they require large-scale fine-grained annotated data for each domain and lack domain generalization ability.
To alleviate these issues, sentence-level based methods (Pal et al., 2021) are designed, aiming to construct the flow graphs at sentence-level for procedural documents. However, they only focus on extracting the action flows with sequential dependencies, which is limited in real-world scenarios.
In practice, both the *inclusion dependency* and *constraint dependency* are common in procedural texts and benefit the procedural text understanding and reasoning (Georgeff and Lansky, 1986; Hoffmann et al., 2022). Thus, our paper explores a problem of automatically generating a procedural graph with dependency relations from a procedural document.
Up to the present, several public entity-level procedural text datasets (Yamakata et al., 2020; Qian et al., 2020; Mysore et al., 2019; Mishra et al.,
2018) and a sentence-level dataset *CTFW* (which is not publicly available due to ethical considerations) (Pal et al., 2021) are built. Nevertheless, existing public datasets do not annotate the dependency relations between sentences in a procedural document. Thus, a new dataset *WHPG* based on the *wikiHow* knowledge base is built and will be publicly available for evaluation in future research.
## 3 Model 3.1 Problem Definition And Notations
The goal of our task is to extract the dependency relations between sentences and construct a procedural graph for each procedural document. Specifically, given a procedural document D = {s1, s2*, . . . , s*N }, a procedural graph GD =
{D, Ψ, R} with nodes (i.e., sentences) si ∈ D
and triplets (si, ri,j , sj ) ∈ Ψ is constructed, where ri,j ∈ R denotes the dependency relation between sentences si and sj ; N is the number of sentences in a procedural document D. Note that the dependency relation set R contains four kinds of dependency relations: Next-Action, Sub-Action, *Constraint* and *None*, as shown in Figure 1.
To construct the procedural graphs, three subtasks (i.e., Node Type Classification, *Edge Prediction* and *Dependency Relation Classification*)
are required. For *Node Type Classification* task, each node si ∈ D is classified into one of the node types (i.e., Operation, Declaration, *Both* and None). Then, the extraction of triplets (si, ri,j , sj )
from the procedural document D can be divided into two tasks: *Edge Prediction* P(si →
sj |(s1, s2*, . . . , s*N )) which predicts whether an edge exists for each sentence pair Si and Sj ; and Dependency Relation Classification P(ri,j |si →
sj ) which classifies each sentence pair (predicted as an edge in *Edge Prediction* task) into one of the dependency relations (i.e., Next-Action, *Sub-Action* and *Constraint*).
## 3.2 Syntactic Graph Construction
The part-of-speech and syntactic structure of sentences can provide the evident clues for facilitating the inference of node types and dependency relations. For each sentence, we use the Standford CoreNLP libraries (Manning et al., 2014) to recognize the part-of-speech of each token and
![3_image_0.png](3_image_0.png)
dependency relations among tokens, as shown in Figure 3. Thus, given the sentence si =
{xi,1, xi,2, . . . , xi,n}, a syntactic relational graph is created as follows:
$$G_{i}^{s y n}=\{P_{i},\Phi_{i},R_{i}^{s y n}\}$$
where Pi represents a set of the part-of-speech in sentence si; R
syn iis a set of syntactic dependency relations and Φi denotes a set of triplets
(pi,j , r syn j,k , pi,k) with the part-of-speech pi,j ∈ Pi of the token xi,j and syntactic dependency relation r syn j,k ∈ R
syn ibetween part-of-speech pi,j and pi,k.
## 3.3 Sentence (Node) Feature Representation 3.3.1 Syntactic Rgcn Encoder
Each type of part-of-speech p ∈ P and dependency relation r ∈ Rsyn are respectively initialized into a learnable vector p ∈ R
dr and a learnable weight matrix Wr ∈ R
dr×dr. Then, given a syntactic graph G
syn ifor sentence si, each part-of-speech node pi,j is encoded by Relational Graph Convolutional Networks (RGCN) (Schlichtkrull et al.,
2018) encoder as follows:
1 p (l+1) i,j = σ(X r∈R syn i X |Nr j| W(l) r p (l) i,k+W(l) 0 p (l) i,j ) k∈Nr j (2)
where Nr j is the set of neighborhood node for the node pi,j with the relation r ∈ R
syn i; W0 ∈ R
dr denotes the learning parameters and l is the number of layers in RGCN encoder.
## 3.3.2 Multi-Query Syntactic-Aware Attention
$$(1)$$
Moreover, not all the syntactic features are equally important to identify the node types and the dependency relations between the nodes. For example shown in Figure 3, the syntactic pattern
"VB obj
−→NN" is a strong indicator for classifying nodes as "*Operation*" types, while the pattern "Determiner (DT) det
−→*Noun (NN)*" does not provide explicit features for the node type classification.
Meanwhile, different tasks also focus on different syntactic features. Motivated by this, a multi-query syntactic-aware attention module is designed to enable the model to pay attention to the relevant syntactic features for the target tasks. Specifically, given a sentence si, the syntactic feature representation is obtained as follows:
$$\begin{array}{c}{{v_{i}^{s y n}=\sum_{k=1}^{N_{q}}\sum_{j=1}^{n}\alpha_{i,j}^{k}p_{i,j}}}\\ {{\alpha_{i,j}^{k}=\frac{e x p(q_{k}p_{i,j})}{\sum_{m=1}^{n}e x p(q_{k}p_{i,m})}}}\end{array}\tag{3}$$
where n is the number of tokens in sentence si; Nq is the number of query and qk ∈ R
dr denotes a learnable vector.
## 3.3.3 Bert-Based Bi-Gru Encoder
For a sentence si, each token xi,j can be encoded into a numerical vector vi,j ∈ R
d*bert* by the pre-trained language model BERT (Kenton and Toutanova, 2019). Then, a hierarchical GRU encoder consisting of two bidirectional GRUs (BiGRU) is utilized to learn the contextual features.
Specifically, given a sequence of embedding vectors Ei = {vi,1, vi,2, . . . , vi,n} of sentence si, the first Bi-GRU is utilized to encode them as hi ∈ R
dgru by concatenating the last hidden states from the two directions. In this way, each sentence siis encoded into a vector hi. Then, the procedural document D can be encoded as a sequence of vectors Hsent = {h1, h2*, . . . ,* hN }, where N is the number of sentences in the procedural document.
Moreover, to capture the global contextual features, the second Bi-GRU encoder is adopted to transform H*sent* into H*dialog* = {v gru 1, v gru 2*, . . . ,* v gru N},
where v gru i ∈ R
dgru denotes the feature representation of the sentence si ∈ D.
## 3.3.4 Feature Fusion
Each sentence si ∈ D can be obtained by concatenating the syntactic feature representation v syn i and the semantic feature representation v gru i, as follows:
$$\mathbf{v}_{i}=[\mathbf{v}_{i}^{s y n};\mathbf{v}_{i}^{g r u}]$$
where [·; ·] denotes the concatenation operation for the given two vectors.
## 3.4 Structural-Aware Edge Feature Representation
The contextual dependency structures in a procedural document have been proven to be effective in discourse parsing (Shi and Huang, 2019; Wang et al., 2021). The structure-aware attention is designed to capture the contextual structure features for each target sentence pair in both edge prediction and relation classification. Specifically, given a node pair (si, sj ), the edge representation r init i,j is initialized by concatenating the syntactic feature representation v syn i, v syn jand the distance embedding v dist i,j as follows:
$$\mathbf{r}_{i,j}^{i n i t}=\sigma([\mathbf{v}_{i}^{s y n};\mathbf{v}_{i,j}^{d i s t};\mathbf{v}_{j}^{s y n}]\mathbf{W}^{C})\qquad(5)$$
where i < j, j − *i < win* and win is the longest distance length between the given two nodes in a procedural document. Then, we update the node representation viin Equation (4) with the contextual features as follows:
$$\begin{split}\boldsymbol{v}_{i}^{att}=\sum_{j=1}^{N}\alpha_{i,j}(\boldsymbol{v}_{j}\boldsymbol{W}^{V}+\boldsymbol{r}_{i,j}^{init}\boldsymbol{W}^{F})\\ \alpha_{i,j}=\frac{exp(e_{i,j})}{\sum_{k=1}^{N}exp(e_{i,k})}\\ e_{i,j}=\frac{(\boldsymbol{v}_{i}\boldsymbol{W}^{Q})(\boldsymbol{v}_{j}\boldsymbol{W}^{K}+\boldsymbol{r}_{i,j}^{init}\boldsymbol{W}^{R})T}{\sqrt{d_{r}+d_{gru}}}\end{split}\tag{6}$$ where $\boldsymbol{W}^{Q},\boldsymbol{W}^{F},\boldsymbol{W}^{K}$, $\boldsymbol{W}^{V}$ and $\boldsymbol{W}^{R}$ are learn
where WQ,WF ,WK, WVand WR are learnable parameters and dr + dgru is the dimension of the node representations. Finally, the edge representation ri,j is calculated by refusing the node features, as follows:
$$\begin{array}{c}{{\gamma_{i,j}=\sigma([\mathbf{v}_{i}^{a t t};\mathbf{v}_{j}^{a t t}]\mathbf{W}^{r})}}\\ {{z_{i,j}=\sigma([\mathbf{v}_{i}^{a t t};\mathbf{v}_{j}^{a t t}]\mathbf{W}^{z})}}\\ {{\mathbf{r}_{i,j}^{'}=t a n h([\gamma_{i,j}\odot\mathbf{r}_{i,j}^{i n t i};\mathbf{v}_{i}^{a t t};\mathbf{v}_{j}^{a t t}]\mathbf{W}^{h})}}\\ {{\mathbf{r}_{i,j}=(1-z_{i,j})\odot\mathbf{r}_{i,j}^{i n t i}+z_{i,j}\odot\mathbf{r}_{i,j}^{'}}}\end{array}\tag{7}$$
where Wr, Wzand Whare the learnable parameters; denotes the dot-product operation.
## 3.5 Projection And Loss Function
$$(4)$$
The representation of node si and edge ri,j can be encoded by Equation (4) and Equation (7) as vi and ri,j . We adopt a projection layer with a softmax function to calculate the probability distribution of categories
(i.e., {*Operation, Declaration, Both, None*}
for node type classification task;
{*Existing, Non-Existing*} for edge prediction task and {*Next-Action, Sub-Action, Constraint, None*}
for dependency relation classification task).
Given the training dataset M, the model is trained with the following training objective:
$$\mathcal{L}(M,\theta)=\sum_{D\in M}(\mathcal{L}_{t}(D;\theta)+\mathcal{L}_{e}(D;\theta)+\mathcal{L}_{r}(D;\theta))\tag{8}$$
where £t(D; θ), £e(D; θ) and £r(D; θ) are the cross-entropy loss functions for node type classification, *edge prediction* and *relation classification* tasks; and D is a procedural document from the training dataset M.
## 4 Experiment
We firstly introduce the construction of the new dataset *WHPG* and then analyze the experimental results in detail.
| Node Type | Size |
|-------------|--------|
| Operation | 3585 |
| Declaration | 1794 |
| Both | 1100 |
| None | 163 |
| Relation Type | Size |
|-----------------|--------|
| Next-Action | 2272 |
| Sub-Action | 2371 |
| Constraint | 2698 |
| Dataset statistics | COR | MAM | CTFW | WHPG |
|-------------------------------------|-------|-------|--------|--------|
| # Doc. | 297 | 575 | 3154 | 283 |
| Avg Size of Doc. | 9.52 | 8.12 | 17.11 | 23.47 |
| Avg Len. of Sent. | 65.46 | 34.81 | 92.87 | 79.76 |
| +|) | 2670 | 5043 | 54539 | 7341 |
| # Edges (|e |e +| : (|e +| + |e −|) | 0.18 | 0.12 | 0.07 | 0.07 |
| Avg degree of node | 1.83 | 1.76 | 1.88 | 2.21 |
Table 1: Dataset Statistics. |e
+| + |e−| is the total number of actual edges |e
+| and possible edges |e−|.
Table 2: The Size of Sentence (Node) Types and Dependency Relations in *WHPG*.
We build the original corpus from the online wikiHow knowledge base (Anthonio et al., 2020)
which provides a collection of *how-to* articles about various topics (e.g., entertainment and crafts).
We exploit the wikiHow knowledge base to create *WHPG*, a dataset of procedural texts with dependency-relation annotations among operational and declarative sentences. The online wikiHow knowledge base provides an *Export pages*4 service which allows exporting the texts of wikiHow articles (Anthonio et al., 2020). We adopt the python library *urllib*5to request the Export pages services and crawl procedural articles. With the candidate set of procedural documents, we filter out the unnecessary information (e.g., writing date, citations and URLs). The procedural documents containing only one step are also filtered out. Finally, three parts (i.e., titles, method names and steps of procedural documents) are kept to form a complete instance. Statistically, we obtain a candidate set of 330 procedural documents from the Crafts topic.
As shown in Figure 1, for each procedural document, we provide three kinds of annotations: *sentence type* (i.e., "*Operation*", "*Declaration*", "*Both*"
or "*None*"), *edge* (i.e., the connections between two sentences) and *dependency relation* (i.e., "*NextAction*", "*Sub-Action*" and "*Constraint*"). Three well-educated annotators are employed to make annotations by averaging the candidate procedural Table 3: Label Distributions of Sentence (Node) Type Classification.
## 4.1 Dataset Collection & Annotation
documents using the BRAT tool6. To ensure the annotation quality, each annotator are required to give the confidence score for each annotated label.
We weigh the confidence score of each annotator for the same label and the label with the highest score will be preserved. Moreover, the annotation samples with the lowest confidence scores will be brainstormed to determine the final annotation results. Moreover, the labeled instances that are difficult to reach a consensus will be discarded. Finally, two well-trained annotators are required to recheck all the annotation results to further ensure the annotation quality.
Finally, the final dataset contains 283 procedural documents with about 7341 edges. The statistical comparison of WHPG with existing sentence-level procedural text datasets is shown in Table 1. Moreover, the statistics of sentence (node) types and dependency relations of our created dataset *WHPG*
is shown in Table 2. We also show the label distributions of the node types and dependency relations, as shown in Table 3 and Table 4.
| Label | Train | Validation | Test |
|-------------|---------|--------------|--------|
| Operation | 2471 | 412 | 702 |
| Declaration | 1261 | 163 | 370 |
| Both | 743 | 125 | 232 |
| None | 139 | 18 | 6 |
## 4.2 Experiment Settings
| Label | Train | Val | Test |
|-------------|---------|-------|--------|
| Next-Action | 1542 | 264 | 466 |
| Sub-Action | 1630 | 270 | 471 |
| Constraint | 1881 | 280 | 537 |
## 4.2.1 Datasets & Experimental Settings
We conduct extensive experiments7 on our annotated dataset *WHPG*. Following Pal et al. (2021),
we split *WHPG* dataset into train, validation and test sets with 7:1:2 ratio. Furthermore, two public datasets (i.e., COR (Yamakata et al., 2020) and MAM (Qian et al., 2020) which do not consider dependency relations among sentences) are also
| Settings | win = 5 | win = 10 | win = 20 | ALL | | | | |
|---------------------|-----------|------------|------------|-------|----------|-------|----------|-------|
| Edge | Edge&Rel | Edge | Edge&Rel | Edge | Edge&Rel | Edge | Edge&Rel | |
| BERT-NS | 55.44 | 22.98 | 39.24 | 16.53 | 28.05 | 11.16 | 21.28 | 10.02 |
| RoBERTa-NS | 55.54 | 23.07 | 40.02 | 16.31 | 27.82 | 11.29 | 21.74 | 8.43 |
| BERT-GCN | 42.57 | 17.69 | 27.32 | 11.73 | 20.57 | 8.28 | 18.02 | 7.24 |
| RoBERTa-GCN | 42.59 | 16.75 | 27.23 | 10.87 | 18.65 | 7.99 | 16.20 | 6.32 |
| BERT-GAT | 49.07 | 21.99 | 32.91 | 13.76 | 23.29 | 9.84 | 16.83 | 6.52 |
| RoBERTa-GAT | 47.34 | 19.87 | 34.13 | 14.69 | 22.48 | 9.12 | 18.60 | 7.32 |
| BERT+SBil | - | - | - | - | - | - | 29.67 | 17.58 |
| Ours w/o SynEncoder | 62.63 | 41.85 | 60.16 | 39.38 | 59.13 | 38.80 | 57.36 | 37.85 |
| Ours w/o MultiQAtt | 65.01 | 42.98 | 62.94 | 40.52 | 60.83 | 39.47 | 59.22 | 38.54 |
| Ours w/o SAtt | 59.52 | 38.98 | 58.72 | 37.71 | 57.40 | 36.67 | 55.47 | 35.43 |
| Ours | 65.71 | 43.61 | 63.79 | 41.39 | 61.87 | 40.31 | 60.84 | 39.06 |
utilized to conduct the comparative experiments on the *Edge Prediction* task.
For the *Node Type Classification* task, we use the *accuracy* as the evaluation metric. Considering the label imbalance in *Edge Prediction* task, F1score of the positive class (i.e., the sentence pair existing an edge) is used as the evaluation metric for the *Edge Prediction*. The performance of *Dependency Relation classification* task is affected by the previous stage *Edge Prediction*. Thus, we combine them to make an evaluation with the *F1-score* metric (i.e., *Edge&Rel* in Table 5).
In the edge prediction and dependency relation classification tasks, each sentence in the procedural document needs to be respectively combined with all the following sentences to determine whether there is an edge and what types of dependency relations they have. To evaluate the generalization ability, four experimental settings (i.e., win = 5, win = 10, win = 20 and ALL) are used to evaluate the effectiveness of our proposed model. For example, for the win = 5 setting, given the first sentence s1 ∈ D, five candidate sentence pairs i.e. {(s1, s2),(s1, s3),(s1, s4),(s1, s5),(s1, s6)}
should be examined by the model to predict whether there are edges and which type of dependency relation they belong to. In training stage, we use AdamW optimizer with 4 batch size, 2e-5 learning rate and 0.4 dropout rate.
## 4.3 Result Analysis
To evaluate the effectiveness of our proposed model on the three tasks (i.e., *Node Type Classification*,
Edge Prediction and Dependency Relation Classification), we compare the performance of our proposed model with 7 recent related works (Pal et al., 2021; Zhou and Feng, 2022) which focus on constructing flow graphs from a procedural document, as shown in Table 5. To explore the problem of generating procedural graphs from procedural documents, a new dataset *WHPG* is built and utilized to perform comparative experiments on both edge prediction and dependency relation classification tasks. Moreover, another two public datasets
(i.e., COR (Yamakata et al., 2020) and MAM (Qian et al., 2020)) from different domains are used to conduct the comparative experiments. Since these two public datasets ignore the dependencies between sentences, we can only perform experiments on *Edge Prediction* task.
## 4.3.1 Node Type Classification
As shown in Table 6, five baselines (Pal et al., 2021; Zhou and Feng, 2022) are used to perform the comparative experiments. Compared with them, syntactic structures are embedded into the node feature representations in our model. From the experimental results, our model achieves the highest accuracies than current related works on both validation and test datasets. It can evaluate that syntactic structure features can be effectively captured and further improve the ability to distinguish between operational and declarative sentences.
## 4.3.2 Edge & Relation Classification
Table 5 shows the comparative experimental results under four window size settings (i.e., 5, 10, 20 and ALL) on both edge prediction and dependency relation classification tasks. Our proposed
![7_image_0.png](7_image_0.png)
| Model | Val | Test |
|---------------|-------|--------|
| BERT-Base | 76.04 | 80.76 |
| RoBERTa-Base | 75.06 | 80.45 |
| BERT-Large | 75.11 | 80.01 |
| RoBERTa-Large | 76.10 | 80.55 |
| BERT+SBil | 55.91 | 60.71 |
| Ours | 76.90 | 81.38 |
model achieves the highest F1 scores with a large margin (nearly 15% F1 score in *Edge* and 20% F1 score in *Edge&Rel*) than current related works on all experiment settings. Specifically, current existing methods mainly focus on constructing a flow graph with only sequential dependency for each procedural document. They ignore another two significant dependencies (i.e., *inclusion dependency* and *constraint dependency*) between sentences. By capturing syntactic structures and discourse structures, our model can effectively identify the dependencies between sentences. Moreover, we observe that the performance of all models degrades as the window size increases (e.g., *BERT-NS* and *BERTGCN* drop nearly 15% F1 score from 5 to 10 window size settings). Instead, our proposed model has the smallest performance drop (only nearly 3% F1 score) on both edge prediction and dependency relation classification when the window size is reduced. Comparing with current related works, the contextual dependency structure (discourse structure) features are utilized to assist the inference of detecting dependency relations between sentences.
The experimental results can evaluate the advantages of our model with structure-aware attention module in handling long-range inter-sentence dependency recognition.
BERT-NS 43.14 29.73
RoBERTa 42.99 39.65
RoBERTa 42.99 39.65
BERT-GCN 58.13 63.75
RoBERTa-GCN 61.44 65.73
BERT-GAT 41.93 62.18
RoBERTa-GAT 24.74 59.55
BERT+SBil 46.76 58.21
Ours **69.57 67.58**
Model COR MAM
Furthermore, to evaluate the domain generalization ability of our model, another two public datasets (i.e., COR in recipe domain and MAM
in the maintenance domain) are used to conduct the comparative experiments. As shown in Table 7, our model obtains the best performance with a large margin than all related works on both datasets.
The experimental results evaluate that our proposed model can effectively identify sequential dependency between sentences and have a better domain generalization ability.
## 4.3.3 Analysis For Each Dependency Relation
Figure 4 shows the comparative experiments on extracting each type of dependency relations between sentences. Compared with the related works, our model can obtain the highest F1 scores on all dependency relation types. Note that due to the imbalance in the number of *existing* and *non-existing* edges between sentences in procedural documents, current existing methods are prone to recognize the inter-sentences as *None* dependency and have low performance in the three dependency relations (i.e.,
Next-Action, *Sub-Action* and *Constraint*).
Figure 5: The heatmap of the weight distributions for each word measured by *Multi-Query Syntactic-Aware* Attention.
## 4.3.4 Ablation Study
As shown in Table 5, the ablation experiments are conducted to evaluate the effectiveness of the designed modules (i.e., SynEncoder, *MultiQAtt* and SAtt). The ablation experimental results can evaluate that both the syntactic information and discourse structure benefit the dependency relation detection. Specifically, both the *SynEncoder* and MultiQAtt modules can effectively capture the syntactic features and assist the dependency relation detection. Moreover, the performance of our model can be improved effectively when the discoursestructure features are embedded by the structure aware attention module.
## 4.4 Visualization
As shown in Figure 5, we show the weight distributions of each word measured by the *MultiQuery Syntactic Aware Attention Module*. We can observe that the phrases with the syntactic pattern "VB obj
−→NN" (e.g" "cut→hole" in S1 and
"use→cup" in S2) obtain the higher weight values than other words, which indicates the sentences as operational sentences. Moreover, the token "*Then*"
in S3 of Figure 5 is measured as the highest weight value, which indicates the sentence has the "*NextAction*" dependency relation with previous sentences. The visualization analyse can evaluate the effectiveness of our proposed module *Multi-Query* Syntactic Aware Attention.
## 5 Conclusion
In this paper, we explore a problem of automatically generating procedural graphs with multiple dependency relations for procedural documents.
Existing procedural knowledge structured methods mainly focus on constructing action flows with the sequential dependency from procedural texts but neglect another two important dependencies: inclusion dependency and constraint dependency which are helpful for the procedural text understanding and reasoning. To solve this problem, we build a new procedural text dataset with multiple dependency relations and propose a procedural graph construction method by utilizing syntactic and discourse structure features. Extensive experiments are conducted and evaluate the effectiveness of our proposed model.
## 6 Limitations
In this section, we draw conclusions for the limitations of our proposed model in this paper. Our proposed model mainly focuses on the sentencelevel procedural graph construction. The scenario that two actions in the same sentence cannot be considered in our proposed model. It is challenging to handle multi-grained (i.e., entity-level and sentence-level) dependencies between actions. We will consider this limitation as our future work.
## Acknowledgement
This work was supported by the National Natural Science Foundation of China (62076100), Fundamental Research Funds for the Central Universities, SCUT (x2rjD2220050), the Science and Technology Planning Project of Guangdong Province
(2020B0101100002), CAAI-Huawei MindSpore Open Fund, CCF-Zhipu AI Large Model Fund.
## References
Talita Anthonio, Irshad Bhat, and Michael Roth. 2020.
wikihowtoimprove: A resource and analyses on edits in instructional texts. In *Proceedings of the* 12th Language Resources and Evaluation Conference, pages 5721–5729.
Kurt Bollacker, Robert Cook, and Patrick Tufts. 2007.
Freebase: A shared database of structured general human knowledge. In *AAAI*, volume 7, pages 1962–
1963.
Limeng Cui and Dongwon Lee. 2022. Ketch: Knowledge graph enhanced thread recommendation in healthcare forums. In *Proceedings of the 45th International ACM SIGIR Conference on Research and* Development in Information Retrieval, pages 492–
501.
Biaoyan Fang, Timothy Baldwin, and Karin Verspoor.
2022. What does it take to bake a cake? the reciperef corpus and anaphora resolution in procedural text.
In *Findings of the Association for Computational* Linguistics: ACL 2022, pages 3481–3495.
Wenfeng Feng, Hankz Hankui Zhuo, and Subbarao Kambhampati. 2018. Extracting action sequences from texts based on deep reinforcement learning. In
Proceedings of the 27th International Joint Conference on Artificial Intelligence, pages 4064–4070.
Michael P Georgeff and Amy L Lansky. 1986. Procedural knowledge. *Proceedings of the IEEE*,
74(10):1383–1398.
Clemens Hoffmann, Sebastian Büttner, and Michael Prilla. 2022. Conveying procedural and descriptive knowledge with augmented reality. In *Proceedings of the 15th International Conference on PErvasive Technologies Related to Assistive Environments*,
pages 40–49.
Krzysztof Honkisz, Krzysztof Kluza, and Piotr Wisniewski. 2018. A concept for generating busi- ´
ness process models from natural language description. In *International Conference on Knowledge Science, Engineering and Management*, pages 91–103.
Springer.
Jermsak Jermsurawong and Nizar Habash. 2015. Predicting the structure of cooking recipes. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 781–786.
Yiwei Jiang, Klim Zaporojets, Johannes Deleu, Thomas Demeester, and Chris Develder. 2020.
Recipe instruction semantics corpus (risec): Resolving semantic structure and zero anaphora in recipes. In *AACL-IJCNLP 2020, the 1st Conference* of the Asia-Pacific Chapter of the Association Computational Linguistics and 10th International Joint Conference on Natural Language Processing, pages 821–826. Association for Computational Linguistics
(ACL).
Jacob Devlin Ming-Wei Chang Kenton and Lee Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language understanding. In *Proceedings of NAACL-HLT*,
pages 4171–4186.
Jens Lehmann, Robert Isele, Max Jakob, Anja Jentzsch, Dimitris Kontokostas, Pablo N Mendes, Sebastian Hellmann, Mohamed Morsey, Patrick Van Kleef, Sören Auer, et al. 2015. Dbpedia–a large-scale, multilingual knowledge base extracted from wikipedia.
Semantic web, 6(2):167–195.
Mingxiao Li and Marie-Francine Moens. 2022. Dynamic key-value memory enhanced multi-step graph reasoning for knowledge-based visual question answering.
Ruipu Luo, Qi Zhu, Qin Chen, Siyuan Wang, Zhongyu Wei, Weijian Sun, and Shuang Tang. 2021. Operation diagnosis on procedure graph: The task and dataset. In *Proceedings of the 30th ACM International Conference on Information & Knowledge* Management, pages 3288–3292.
Christopher D Manning, Mihai Surdeanu, John Bauer, Jenny Rose Finkel, Steven Bethard, and David McClosky. 2014. The stanford corenlp natural language processing toolkit. In Proceedings of 52nd annual
meeting of the association for computational linguistics: system demonstrations, pages 55–60.
Bhavana Dalvi Mishra, Lifu Huang, Niket Tandon, Wen-tau Yih, and Peter Clark. 2018. Tracking state changes in procedural text: a challenge dataset and models for process paragraph comprehension. *arXiv* preprint arXiv:1805.06975.
Sheshera Mysore, Zachary Jensen, Edward Kim, Kevin Huang, Haw-Shiuan Chang, Emma Strubell, Jeffrey Flanigan, Andrew McCallum, and Elsa Olivetti.
2019. The materials science procedural text corpus:
Annotating materials synthesis procedures with shallow semantic structures. In Proceedings of the 13th Linguistic Annotation Workshop, pages 56–64.
Kuntal Kumar Pal, Kazuaki Kashihara, Pratyay Banerjee, Swaroop Mishra, Ruoyu Wang, and Chitta Baral.
2021. Constructing flow graphs from procedural cybersecurity texts. In *Findings of the Association* for Computational Linguistics: ACL-IJCNLP 2021, pages 3945–3957.
Chen Qian, Lijie Wen, Akhil Kumar, Leilei Lin, Li Lin, Zan Zong, Shu'ang Li, and Jianmin Wang. 2020.
An approach for process model extraction by multigrained text classification. In *International Conference on Advanced Information Systems Engineering*,
pages 268–282. Springer.
Michael Schlichtkrull, Thomas N Kipf, Peter Bloem, Rianne van den Berg, Ivan Titov, and Max Welling.
2018. Modeling relational data with graph convolutional networks. In *European semantic web conference*, pages 593–607. Springer.
Zhouxing Shi and Minlie Huang. 2019. A deep sequential model for discourse parsing on multi-party dialogues. In *Proceedings of the AAAI Conference on* Artificial Intelligence, volume 33, pages 7007–7014.
Denny Vrandeciˇ c and Markus Krötzsch. 2014. Wiki- ´
data: a free collaborative knowledgebase. *Communications of the ACM*, 57(10):78–85.
Ante Wang, Linfeng Song, Hui Jiang, Shaopeng Lai, Junfeng Yao, Min Zhang, and Jinsong Su. 2021. A
structure self-aware model for discourse parsing on multi-party dialogues. In *IJCAI*, pages 3943–3949.
Frank F Xu, Lei Ji, Botian Shi, Junyi Du, Graham Neubig, Yonatan Bisk, and Nan Duan. 2020. A benchmark for structured procedural knowledge extraction from cooking videos. In *Proceedings of the First International Workshop on Natural Language Processing Beyond Text*, pages 30–40.
Yoko Yamakata, Shinsuke Mori, and John A Carroll.
2020. English recipe flow graph corpus. In *Proceedings of the 12th Language Resources and Evaluation* Conference, pages 5187–5194.
Zi Yang and Eric Nyberg. 2015. Leveraging procedural knowledge for task-oriented search. In *Proceedings* of the 38th International ACM SIGIR Conference on
Research and Development in Information Retrieval, pages 513–522.
Yifei Zhou and Yansong Feng. 2022. Improve discourse dependency parsing with contextualized representations. In *Findings of the Association for Computational Linguistics: NAACL 2022*, pages 2250–
2261, Seattle, United States. Association for Computational Linguistics.
Ying Zhou, Xuanang Chen, Ben He, Zheng Ye, and Le Sun. 2022. Re-thinking knowledge graph completion evaluation from an informationd retrieval perspective. In *Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval*, pages 916–926.
Jie Zhu, Junhui Li, Muhua Zhu, Longhua Qian, Min Zhang, and Guodong Zhou. 2019. Modeling graph structure in transformer for better amr-to-text generation. In *Proceedings of the 2019 Conference on* Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5459–5468.
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
✓ A1. Did you describe the limitations of your work?
6
A2. Did you discuss any potential risks of your work?
Not applicable. Left blank.
✓ A3. Do the abstract and introduction summarize the paper's main claims?
1
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✗ **Did You Use Or Create Scientific Artifacts?**
Left blank.
B1. Did you cite the creators of artifacts you used?
No response.
B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
No response.
B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
No response.
B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
No response.
B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
No response.
B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
No response.
## C ✓ **Did You Run Computational Experiments?** Left Blank.
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
4 The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance.
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
4
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
4
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
4
## D ✓ **Did You Use Human Annotators (E.G., Crowdworkers) Or Research With Human Participants?** 4
✓ D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
4
✓ D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)?
4
✓ D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
4 D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
Not applicable. 4 D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
Not applicable. Left blank. |
jain-etal-2023-multi | Multi-Dimensional Evaluation of Text Summarization with In-Context Learning | https://aclanthology.org/2023.findings-acl.537 | Evaluation of natural language generation (NLG) is complex and multi-dimensional. Generated text can be evaluated for fluency, coherence, factuality, or any other dimensions of interest. Most frameworks that perform such multi-dimensional evaluation require training on large manually or synthetically generated datasets. In this paper, we study the efficacy of large language models as multi-dimensional evaluators using in-context learning, obviating the need for large training datasets. Our experiments show that in-context learning-based evaluators are competitive with learned evaluation frameworks for the task of text summarization, establishing state-of-the-art on dimensions such as relevance and factual consistency. We then analyze the effects of factors such as the selection and number of in-context examples on performance. Finally, we study the efficacy of in-context learning-based evaluators in evaluating zero-shot summaries written by large language models such as GPT-3. | # Multi-Dimensional Evaluation Of Text Summarization With In-Context Learning
Sameer Jain1 Vaishakh Keshava1 **Swarnashree Mysore Sathyendra**1 Patrick Fernandes1,2 Pengfei Liu1 Graham Neubig1 **Chunting Zhou**3 1Carnegie Mellon University 2Instituto Superior Técnico 3Facebook AI Research
{sameerj, vkeshava, smysores}@cs.cmu.edu
## Abstract
Evaluation of natural language generation
(NLG) is complex and multi-dimensional. Generated text can be evaluated for fluency, coherence, factuality, or any other dimensions of interest. Most frameworks that perform such multi-dimensional evaluation require training on large manually or synthetically generated datasets. In this paper, we study the efficacy of large language models as multi-dimensional evaluators using in-context learning, obviating the need for large training datasets. Our experiments show that in-context learning-based evaluators are competitive with learned evaluation frameworks for the task of text summarization, establishing state-of-the-art on dimensions such as relevance and factual consistency. We then analyze the effects of factors such as the selection and number of incontext examples on performance. Finally, we study the efficacy of in-context learningbased evaluators in evaluating zero-shot summaries written by large language models such as GPT-3. Our code is available at https:
//github.com/JainSameer06/ICE
## 1 Introduction
Developing comprehensive evaluation frameworks (Deng et al., 2021; Yuan et al., 2021; Zhong et al., 2022) that can evaluate multiple humaninterpretable dimensions, such as factual consistency (Kryscinski et al., 2020; Wang et al., 2020)
and coherence (Dziri et al., 2019; Huang et al.,
2020), is important for the advancement of Natural Language Generation (NLG). However, similaritybased metrics (Papineni et al., 2002; Lin, 2004; Sellam et al., 2020; Zhao et al., 2019; Zhang et al.,
2020) still dominate NLG evaluation in practice.
Compared to them, desired multi-dimensional evaluators do not require reference texts for evaluation; and they can easily extend to new explainable evaluation dimensions. Recently, Zhong et al. (2022)
developed a unified evaluation framework that can
![0_image_0.png](0_image_0.png)
Figure 1: Our prompt design to evaluate the consistency of the summary in red, illustrated using two in-context examples (in blue). To evaluate other aspects, we remove the source text or replace it with a reference.
generalize to multiple dimensions and text generation tasks. However, it relies on the construction of synthetic and auxiliary data for the finetuning of a pre-trained language model, requiring in-depth knowledge and significant engineering effort for each dimension. Furthermore, the inclusion of new dimensions requires (continued) training of the model, and might affect the performance on other dimensions in unforeseen ways.
In this work, we propose to use *in-context* learning (Brown et al., 2020) with large language models (LLMs) - a commonly used method to perform many tasks by utilizing only a few input-output examples - to perform multi-dimensional text evaluation in a unified fashion. Compared to pre-trained evaluators that need specialized supervised training for each dimension, our In-Context learning-based Evaluator (ICE) framework is:
- Learning-free. It does not require supervised fine-tuning on large annotated (synthetic) training data, requiring only a handful of samples at inference time.
- Extensible. To evaluate new dimensions, it does not rely on large amounts of human judgments or the construction of new synthetic data, using only a natural language prompt consisting of a small number of example pairs to ascertain the properties associated with a given quality aspect.
In this paper, using text summarization as a test bed, we show that with a simple prompt design, ICE
is competitive with state-of-the-art trained evaluators on multi-dimensional evaluation of modelproduced summaries, establishing a new state-ofthe-art on dimensions such as relevance and factual consistency. To study the robustness of the evaluator to the selection of in-context examples, we analyze the factors that affect the performance of ICE, such as the number of in-context examples and sampling procedures when picking in-context examples from a set of candidates. We find ICE to be robust to the selection of in-context examples and observe a slight improvement in performance as the number of examples is increased. Finally, in light of the recent work (Goyal et al., 2022) that points to the misalignment of existing evaluation metrics with human preference in evaluating zeroshot summaries generated by LLMs such as GPT-3
(Brown et al., 2020), we study the effectiveness of ICE in evaluating zero-shot summaries generated by GPT-3. We find that ICE evaluations agree closely with human judgments on such summaries.
## 2 Methodology 2.1 Problem Statement
Given a sequence x that is input to an NLG system and a system-generated output sequence y, an evaluation framework outputs a score s that captures the quality of y, either with or without the help of a human-generated reference output r.
1In case of multi-dimensional evaluation where we are interested in assessing y over d quality metrics, we instead get a vector S = (s1, s2*, ..., s*d) over diverse dimensions (e.g., coherence, fluency). Depending on the dimension, there is sometimes a need to condition an evaluation on x (such as to evaluate consistency in summarization). We evaluate our method over four dimensions:
- Consistency: The factual correctness of a summary given the source text.
- Relevance: The property of capturing salient information from the source.
- Fluency: A measure of the quality of the individual sentences in the summary.
- Coherence: A measure of the quality, organization, and structure of sentences in the summary.
## 2.2 Prompt Design & Score Extraction
ICE relies on an LLM (we use the text-davinci-003 model of GPT-3) to make predictions. It takes in a prompt that consists of a small number of in-context examples, each of which consists of generated text and its corresponding quality score as a numeric string.
The prompt ends with a test example, for which the model predicts a score (Figure 1).
The input contains the model-generated text
(summary), in addition to which it might contain additional information such as the source text or references, depending on the dimension. To evaluate fluency and coherence, our prompts use in-context examples consisting of generated summaries and corresponding scores. For consistency and relevance, we use the source text and a reference summary respectively, in addition to the generated summary. We pass this prompt to a GPT-3 model, with sampling temperature set to 0 to elicit deterministic responses. We parse the model response–decoded numeric string–as the dimension score.
## 2.3 Selection Of In-Context Examples
By default, we use 4 in-context examples in our prompts, as this is the largest number that fits within the context window of GPT-3. We experiment with two sampling procedures (Appendix B)
to obtain 4 examples from a pool of examples:
1. **Uniform Random Sampling**. We randomly select 4 summaries from the pool of examples.
This causes the examples to follow the same distribution as the example pool.
2. **Stratified Sampling**. We bucket the range of scores, i.e. [0, 1], into 4 equal partitions and randomly sample one summary from each one.
This causes examples to be representative of the range of scores in the example pool.
We avoid using synthetically generated data (Kryscinski et al., 2020; Zhong et al., 2022)
since the kind of errors made by generation models is often different from the errors present in the negative examples in these datasets (Goyal and Durrett, 2021). We instead elect to use (a few) human evaluations of model-generated text in order to make the in-context examples as representative of real errors as possible. We do this by splitting the meta-evaluation dataset and using a partition as an in-context example pool, as described in Section 3.1.
| Metric | Coherence | Consistency | Fluency | Relevance | | | | |
|---------------------------|-------------|---------------|-----------|-------------|-------|-------|-------|-------|
| ρ | τ | ρ | τ | ρ | τ | ρ | τ | |
| CTC | - | - | 0.425 | 0.340 | - | - | 0.495 | 0.364 |
| BARTScore | 0.445 | 0.340 | 0.380 | 0.314 | 0.345 | 0.283 | 0.357 | 0.274 |
| UniEval | 0.591 | 0.424 | 0.433 | 0.348 | 0.445 | 0.349 | 0.473 | 0.343 |
| ICE (Uniform Sampling) | 0.476 | 0.388 | 0.486 | 0.466 | 0.366 | 0.328 | 0.467 | 0.384 |
| ICE (Stratified Sampling) | 0.497 | 0.387 | 0.298 | 0.263 | 0.397 | 0.348 | 0.485 | 0.396 |
## 3 Experiments 3.1 Datasets & Baselines
We use the SummEval dataset (Fabbri et al., 2020)
2 to meta-evaluate our evaluation framework. SummEval collects human evaluation annotations for 16 summarization systems on 100 articles sampled from the CNN/DailyMail corpus, for a total of 1600 summary-level annotations. Each summary is evaluated on four dimensions described in Section 2.2.
To get a pool of in-context examples, we keep aside a small subset (64 examples) of the SummEval dataset to pick in-context examples from, and use the rest (1536 examples) as the test set for meta-evaluation (evaluating the baselines on this same test set). Further details are in Appendix A.
We compare ICE to the following state-of-theart multi-dimensional evaluators: (1) CTC (Deng et al., 2021) uses information alignment between generated outputs and references or inputs; (2)
BARTScore (Yuan et al., 2021) uses the conditional probability of a sequence given inputs or references; and (3) **UniEval** (Zhong et al., 2022)
uses a question-answering framework (e.g. "Is this a coherent summary?") to calculate metrics.
Following Liu et al. (2021); Zhong et al. (2022),
we assess performance by computing summarylevel Spearman and Kendall-Tau correlations between predicted scores and human judgements.
## 3.2 Results
As illustrated in Table 1, ICE is competitive with fine-tuned baselines despite not requiring any finetuning. It achieves state-of-the-art correlation with human judgments for relevance and consistency. We perform pairwise significance tests and observe that ICE (uniform sampling) does better than UniEval on consistency and relevance on Kendall's Tau with a significance level of 0.05 (Appendix E). Additionally, the uniform sampling variant of ICE outperforms BARTScore (which also does not require finetuning) across dimensions.
Between the two *sampling procedures* for ICE, we observe that stratified sampling works marginally better for all dimensions other than consistency. Since summaries in the SummEval dataset have perfect or near-perfect human scores for consistency (Figure 2), uniform sampling causes in-context examples to also have nearperfect scores. This appears useful for the model to calibrate its scoring when evaluating consistency, leading to better performance. We explore this in greater detail in §4.1. While the same reasoning could hold for fluency, we observe both here and in §4.3 that fluency scores are quite stable. Given that fluency is an easier aspect to evaluate, this stability could be a result of the model possessing a strong notion about fluency from pre-training time that is not modified significantly as the distribution of in-context examples changes (Reynolds and McDonell, 2021). Finally, we observe that the performance for coherence and relevance are similar regardless of the sampling procedure. This is because scores for these aspects are spread out in the dataset, which makes uniform and stratified sampling return similar in-context examples.
## 4 Analysis
In this section, we analyse the effects of our prompt engineering choices. The comparison between sampling procedures in Section 4.1 is performed on the entire test set but the experiments in Sections 4.2 and 4.3 are performed on a test set sample of size 200 to control costs. The analyses in Sections 4.1 and 4.2 use four in-context examples.
## 4.1 Analyzing The Sampling Procedures
Figure 2 illustrates that the prediction distributions from uniform and stratified sampling differ the most when the true distribution is skewed, such as for consistency. In such a case, stratified sampling selects in-context examples from the entire 2https://github.com/Yale-LILY/SummEval
![3_image_0.png](3_image_0.png)
domain regardless of the true distribution. This forces predictions towards a centered distribution, which can cause the performance drop we observe in Table 1 when evaluating consistency using stratified sampling. Uniform sampling, on the other hand, selects examples that represent the true distribution, making model predictions more closely reflect the true distribution.
A drawback of uniform sampling is sub-optimal calibration in low-probability regions of the true distribution. For instance, if uniform sampling is used to evaluate consistency, the model might not see in-context examples with (say) scores less than 0.3 (Figure 2). This can affect output calibration in that region. Nonetheless, we suggest using uniform sampling in general. It is more stable and its prediction distribution closely follows the true distribution. For dimensions where it underperforms stratified sampling, the margins are less significant. Finally, even when ICE (uniform sampling) scores are calibrated differently from human scores, they still rank summary-quality correctly, insofar as our main results (Table 1) show
![3_image_1.png](3_image_1.png)
![3_image_2.png](3_image_2.png)
that they compete with state-of-the-art on rankingbased metrics like Kendall-Tau and Spearman correlation. We use uniform sampling to select incontext examples in Sections 4.2 and 4.3.
## 4.2 **Effect Of Selection Of In-Context Examples**
In order to determine whether performance is robust to the choice of in-context examples, we evaluate our test set using three different random sets of in-context examples. We observe in Figure 3 that for a given dimension, the maximum variation across three seeds is about 7 points, suggesting reasonably stable performance across the choice of in-context examples.
## 4.3 Effect Of Number Of In-Context Examples
We evaluate our test set using different numbers of in-context examples (Figure 4). We observe that only for relevance and coherence does performance show improvement as we increase the number of examples. One reason for this could be the distribution of scores for a given dimension in the test set (Figure 2). Concretely, consistency and fluency mostly have near-perfect scores and therefore do not benefit from more samples while the
Metric Model Coh. Con. Flu. Rel. **Overall**
![4_image_2.png](4_image_2.png)
![4_image_0.png](4_image_0.png)
GPT-3 4.85 4.73 4.97 4.65 4.80 BRIO 4.57 4.65 4.88 4.48 4.65 T0 4.15 4.47 4.78 3.68 4.27
![4_image_1.png](4_image_1.png)
-
![4_image_4.png](4_image_4.png)
BRIO 28.20 T0 26.63 GPT-3 -1.25 -1.25 -1.25 -1.25 -1.25 BRIO -0.71 -0.71 -0.71 -0.71 -0.71 T0 -0.96 -0.96 -0.96 -0.96 -0.96 GPT-3 0.908 0.996 0.994 0.849 0.937 BRIO 0.896 0.993 0.993 0.834 0.929 T0 0.890 0.981 0.985 0.761 0.904
scores for coherence and relevance are spread out and therefore more samples allow representation over the whole range of scores.
Another observation is that even for coherence and relevance, performance with a single incontext example reaches near that achieved by some of the weaker fine-tuned baselines in Table 1. This suggests that the model possesses the notion of the evaluation task from pre-training itself, which is in line with recent work (Reynolds and McDonell, 2021; Min et al., 2022) that suggests that demonstrations help extract this knowledge.
Finally, we note that calibration can potentially be improved by increasing the number of examples. For instance, we observed that the four incontext examples that the uniform sampling procedure chose for coherence in Figure 2 had scores that fall between 0.7 and 1.0. This concentrates the prediction distribution in that range. The probability of such an event will reduce as the number of examples is increased further.
## 5 Using Ice **To Evaluate Zero-Shot** Prompting Models
Recent work by Goyal et al. (2022) showed that standard reference-based and reference-free metrics are not reliable in evaluating zero-shot summaries written by models such as GPT-3. Through a human study comparing summaries from three systems–GPT-3, BRIO, and T0–they observed that while humans prefer GPT-3 summaries, automatic evaluators consistently score GPT-3 summaries lower than summaries from other models.
We study the efficacy of ICE in evaluating zeroshot summaries written by GPT-3 at a dimension level. We use the set of 500 CNN articles from Goyal et al. (2022), with summaries from GPT-3,
![4_image_3.png](4_image_3.png)
BRIO, and T0 for each article. We sample 100 of these articles and have three annotators rate summaries for each of the dimensions defined in Section 2.2 on a scale of {1, 2, 3, 4, 5}. We use ICE, ROUGE, and BARTScore (all of which do not require training data) to evaluate the summaries and present system-level results in Table 2.
We observe that ICE agrees with human judgments for each dimension and overall preferences while existing reference-based and reference-free metrics such as ROUGE and BARTScore3consistently rate GPT-3 summaries low. Goyal et al. (2022) suggest that most existing evaluation metrics reward summaries that imitate references, while GPT-3 summaries are zero-shot and not trained to imitate human-written references, which is likely why they are penalized by most existing evaluators. However, since ICE is not based on reference similarity (except when evaluating relevance) and is also not trained with reference summaries, it is able to better evaluate GPT-3 summaries and agrees with human preferences.
## 6 Conclusion
We show that in-context learning can be used for NLG evaluation as an alternative to fine-tuned evaluation metrics. Using a small number of examples, in-context learning evaluators can reach or exceed the state-of-the-art on multi-dimensional evaluation and that this is robust to the choice of in-context examples. Finally, we show that in-context learning evaluators align well with human judgements when evaluating summaries written by GPT-3.
## Limitations
While ICE does not require fine-tuning on large amounts of data, it requires querying a powerful LLM at inference time (we use GPT-3 for our experiments which has 175 billion parameters). This can be a pay-per-use model or an open-source model such as BLOOM. This makes a downstream system that uses ICE reliant on an external dependency, which carries the risk of the external dependency failing.
Relatedly, in this paper, we are limited due to monetary constraints in a variety of experiments we perform. For instance, we restrict ourselves to text summarization and use samples of benchmark meta-evaluation suites during some of our experiments. We leave the investigation of using ICE for other dimensions and downstream tasks for future work.
## References
Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei.
2020. Language models are few-shot learners.
Mingkai Deng, Bowen Tan, Zhengzhong Liu, Eric Xing, and Zhiting Hu. 2021. Compression, transduction, and creation: A unified framework for evaluating natural language generation. In *Proceedings of the* 2021 Conference on Empirical Methods in Natural Language Processing, pages 7580–7605, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Nouha Dziri, Ehsan Kamalloo, Kory Mathewson, and Osmar Zaiane. 2019. Evaluating coherence in dialogue systems using entailment. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics:
Human Language Technologies, Volume 1 (Long and Short Papers), pages 3806–3812, Minneapolis, Minnesota. Association for Computational Linguistics.
Alexander R Fabbri, Wojciech Krysci ´ nski, Bryan Mc- ´
Cann, Caiming Xiong, Richard Socher, and Dragomir Radev. 2020. Summeval: Re-evaluating summarization evaluation. *arXiv preprint arXiv:2007.12626*.
Tanya Goyal and Greg Durrett. 2021. Annotating and modeling fine-grained factuality in summarization. In *Proceedings of the 2021 Conference of the North* American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1449–1462, Online. Association for Computational Linguistics.
Tanya Goyal, Junyi Jessy Li, and Greg Durrett. 2022.
News summarization and evaluation in the era of gpt-3.
Lishan Huang, Zheng Ye, Jinghui Qin, Liang Lin, and Xiaodan Liang. 2020. GRADE: Automatic graphenhanced coherence metric for evaluating opendomain dialogue systems. In Proceedings of the
2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 9230–9240, Online. Association for Computational Linguistics.
Wojciech Kryscinski, Bryan McCann, Caiming Xiong, and Richard Socher. 2020. Evaluating the factual consistency of abstractive text summarization. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP),
pages 9332–9346, Online. Association for Computational Linguistics.
Chin-Yew Lin. 2004. ROUGE: A package for automatic evaluation of summaries. In *Text Summarization Branches Out*, pages 74–81, Barcelona, Spain.
Association for Computational Linguistics.
Pengfei Liu, Jinlan Fu, Yang Xiao, Weizhe Yuan, Shuaichen Chang, Junqi Dai, Yixin Liu, Zihuiwen Ye, and Graham Neubig. 2021. ExplainaBoard: An explainable leaderboard for NLP. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing: System Demonstrations, pages 280–289, Online. Association for Computational Linguistics.
Sewon Min, Xinxi Lyu, Ari Holtzman, Mikel Artetxe, Mike Lewis, Hannaneh Hajishirzi, and Luke Zettlemoyer. 2022. Rethinking the role of demonstrations:
What makes in-context learning work?
Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: A method for automatic evaluation of machine translation. In *Proceedings of the* 40th Annual Meeting on Association for Computational Linguistics, ACL '02, page 311–318, USA.
Association for Computational Linguistics.
Laria Reynolds and Kyle McDonell. 2021. Prompt programming for large language models: Beyond the few-shot paradigm.
Thibault Sellam, Dipanjan Das, and Ankur Parikh. 2020.
BLEURT: Learning robust metrics for text generation. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7881–7892, Online. Association for Computational Linguistics.
Alex Wang, Kyunghyun Cho, and Mike Lewis. 2020.
Asking and answering questions to evaluate the factual consistency of summaries. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5008–5020, Online. Association for Computational Linguistics.
Weizhe Yuan, Graham Neubig, and Pengfei Liu. 2021.
Bartscore: Evaluating generated text as text generation. In *Advances in Neural Information Processing* Systems, volume 34, pages 27263–27277. Curran Associates, Inc.
Tianyi Zhang, Varsha Kishore, Felix Wu*, Kilian Q.
Weinberger, and Yoav Artzi. 2020. Bertscore: Evaluating text generation with bert. In International Conference on Learning Representations.
Wei Zhao, Maxime Peyrard, Fei Liu, Yang Gao, Christian M. Meyer, and Steffen Eger. 2019. MoverScore:
Text generation evaluating with contextualized embeddings and earth mover distance. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 563–578, Hong Kong, China. Association for Computational Linguistics.
Ming Zhong, Yang Liu, Da Yin, Yuning Mao, Yizhu Jiao, Pengfei Liu, Chenguang Zhu, Heng Ji, and Jiawei Han. 2022. Towards a unified multidimensional evaluator for text generation.
## A Splitting Summeval And The Selection Of In-Context Examples
We randomly select 4 articles from the SummEval dataset and pick one system-generated summary from each article as an in-context example using the procedures outlined in Section 2.3. In other words, we pick n = 4 in Figure 1. For a given value of n, prompts for evaluating consistency are the longest since they contain entire source articles. We pick n such that consistency prompts fit within the context window of the model. We study the effect of the choice of n in Section 4.3.
To ensure that there is no overlap in the source article of any in-context example with the source article of any test example, we remove all summaries corresponding to the 4 selected source texts and use the remaining 1536 examples from SummEval as our test set. We ensure the absence of overlap throughout all experiments in Sections 3, 4, and 5.
## B Sampling Procedures B.0.1 Uniform Random Sampling
One summary is picked uniformly at random from the set of 16 summaries for a given source text.
We do this for each of the 4 source texts selected to pick in-context examples from. Each of the 4 sampled in-context examples then consists of the selected summary, its human evaluation score on the current aspect of interest, and (optionally) the source text or the reference text.
## B.0.2 Stratified Sampling
Let A denote the score of a summary on the aspect we are evaluating for; then A ∈ [0, 1]. In stratified sampling, we define 4 buckets by the ranges
{[0, 0.25],(0.25, 0.5],(0.5, 0.75],(0.75, 1.0]}. We assign summary s to one of the buckets depending on the value of As. We do this for each of the 64 in-context example candidate summaries. Finally, we pick 4 summaries from the 64 candidates such that each summary falls into a different bucket and also comes from a different source text. We perform an exhaustive search for such an assignment, and in case no such assignment is possible
(this can happen if none of the 64 summaries fall in a given bucket), we pick an arbitrary summary from a randomly selected bucket, ensuring that all 4 summaries come from different source articles.
For both uniform and random sampling, we ensure that each summary corresponds to a different source article.
## C Annotation Procedure For Rating Gpt-3, Brio, And T0 Summaries
Summaries are annotated on a scale of
{1, 2, 3, 4, 5} for coherence, consistency, fluency, and relevance using the annotation instructions from Fabbri et al. (2020).
## D Use Of Existing Evaluation Packages
We use existing packages for all our baselines–
ROUGE, BARTScore, CTC, and UniEval. For ROUGE, we use the native python implementation and report ROUGE-L scores for our experiment in Section 5. For BARTScore, we use the implementation accompanying the paper with the source to hypothesis setting across all dimensions, as that gives the best correlations with human judgments across dimensions. For UniEval, we use pre-trained model released by the authors to obtain results in Table 1 on the test set of size 1536.
## E Significance Tests
Since ICE scores for some dimensions are close to UniEval scores, we perform pairwise tests to determine when one method is better than the other.
Concretely, we compare performance on 1000 bootstrap samples by randomly selecting 80% of the test set for each sample. We observe that when using Kendall's Tau, ICE with uniform sampling outperforms UniEval with a significance level of 0.05 on both consistency and relevance. When using Spearman's rank correlation, Ice again outperforms UniEval on consistency, but the test is inconclusive at that significance level for relevance.
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
✓ A1. Did you describe the limitations of your work?
Limitations section at the end
✓ A2. Did you discuss any potential risks of your work?
In the limitations section, we discuss the risks associated with relying on external dependencies in deploying a framework such as the one studied in our work, if one intends to build a real-world application around it.
✓ A3. Do the abstract and introduction summarize the paper's main claims?
Abstract and Section 1
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✓ **Did You Use Or Create Scientific Artifacts?**
We generate ratings of system-generated summaries on the basis of quality B1. Did you cite the creators of artifacts you used?
Not applicable. We created the artifacts
B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
Not applicable. We have not, at the moment, decided on the term of distribution of our humanannotated data B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
Not applicable. The artifacts we create are built on top of publicly available datasets of publicly available news articles, which do not constitute data accessed solely for research purposes B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
Not applicable. We do not collect any new data. The data we use consists of CNN articles.
B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
Not applicable. Our artifacts are ratings of system generated summaries from news articles. We mention this in the relevant section–Section 5.
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
We mention these statistics for all our experiments in Sections 3, 4, and 5.
The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance.
## C ✓ **Did You Run Computational Experiments?**
Almost all sections other than Introduction describe computational experiments
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
We mention in the paper that our backbone is GPT-3. We mention its number of parameters in the limitations section.
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Most of the "hyperparamters" for our framework are prompt engineering choices, which we discuss extensively in Sections 4 and 5. We mention relevant parameters of our GPT backbone (such as sampling temperature) in Section 3
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
A number of our analyses are done on samples of the benchmark datasets, and we have described where and how we are setting up and reporting multiple runs. We have added significance tests to validate results, where necessary.
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
Appendix D
D ✓ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Section 5 D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
Not applicable. We use precisely the same instructions as used to annotate SummEval, our benchmark meta-evaluation dataset. We highlight the main points of the instructions in our paper but redirect readers to the original paper for the full text D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)?
Not applicable. We performed the relevant annotations D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
Not applicable. We annotate system-generated summaries of publically available news (CNN)
articles for quality. We do not use/curate any individual's personal data.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
Not applicable. Left blank.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
Not applicable. The source of the data is news articles. For our study, we annotate system-generated summaries of such articles for quality |
liu-xu-2023-learning | Learning to Rank Utterances for Query-Focused Meeting Summarization | https://aclanthology.org/2023.findings-acl.538 | Query-focused meeting summarization(QFMS) aims to generate a specific summary for the given query according to the meeting transcripts. Due to the conflict between long meetings and limited input size, previous works mainly adopt extract-then-summarize methods, which use extractors to simulate binary labels or ROUGE scores to extract utterances related to the query and then generate a summary. However, the previous approach fails to fully use the comparison between utterances. To the extractor, comparison orders are more important than specific scores. In this paper, we propose a Ranker-Generator framework. It learns to rank the utterances by comparing them in pairs and learning from the global orders, then uses top utterances as the generator{'}s input. We show that learning to rank utterances helps to select utterances related to the query effectively, and the summarizer can benefit from it. Experimental results on QMSum show that the proposed model outperforms all existing multi-stage models with fewer parameters. | # Learning To Rank Utterances For Query-Focused Meeting Summarization
Xingxian Liu, Yajing Xu∗
Pattern Recognition & Intelligent System Laboratory Beijing University of Posts and Telecommunications, Beijing, China
{liuxingxian,xyj}@bupt.edu.cn
## Abstract
Query-focused meeting summarization(QFMS)
aims to generate a specific summary for the given query according to the meeting transcripts. Due to the conflict between long meetings and limited input size, previous works mainly adopt extract-then-summarize methods, which use extractors to simulate binary labels or ROUGE scores to extract utterances related to the query and then generate a summary.
However, the previous approach fails to fully use the comparison between utterances. To the extractor, comparison orders are more important than specific scores. In this paper, we propose a **Ranker-Generator** framework. It learns to rank the utterances by comparing them in pairs and learning from the global orders, then uses top utterances as the generator's input. We show that learning to rank utterances helps to select utterances related to the query effectively, and the summarizer can benefit from it. Experimental results on QMSum show that the proposed model outperforms all existing multi-stage models with fewer parameters.
## 1 Introduction
Query-focused meeting summarization(QFMS)
aims to summarize the crucial information for the given query into a concise passage according to the meeting transcripts. By responding to the query, QFMS can meet the user's need to focus on a specific aspect or topic of the meeting (Litvak and Vanetik, 2017; Baumel et al., 2018). Unlike the generic summary, QFMS requires the summary depending on both the given query and meeting transcripts.
Previous works consist of end-to-end and twostage frameworks. The end-to-end models take the whole long meeting as the input. Although some works such as HMNet (Zhu et al., 2020) and HATBART (Rohde et al., 2021) use hierarchical attention mechanism to alleviate the rapid growth
*Yajing Xu is the corresponding author.
![0_image_0.png](0_image_0.png)
Figure 1: (a) **Locator-Generator** framework, it predicts a binary label and uses Cross-Entropy loss to update parameters. (b) **Simulator-Generator** framework, it simulates the ROUGE score and uses Mean Squared Error loss to update parameters. (c) **Ranker-Generator**
framework proposed in this paper, it learns to rank utterances from the relative order between utterances. The top K utterances can be passed to the generator.
in computational complexity, it's still faced with difficulties in training efficiency. The two-stage models extract utterances related to the query and then pass the concatenation of them to the generator. For QFMS, the key information related to the query scatters in certain parts of the meeting.
Therefore, the two-stage framework is considered as a practical approach to balance experimental performance and computational efficiency in the long-input problems.
The two-stage framework mainly includes the Locator-Generator and the Simulator-Generator approaches. As shown in Figure 1, in the first stage, the Locator-Generator (Zhong et al., 2021b) framework considers it as a binary classification task. It predicts a binary label of whether the utterance is relevant to the query and uses cross-entropy loss to update parameters. But the hard binary labels can not reflect the relative quality. Especially when the training data is limited by scarcity, the binary classification will have a large margin between positive and negative samples. So the SimulatorGenerator (Vig et al., 2022) framework considers
![1_image_0.png](1_image_0.png)
it as a ROUGE score regression task. It simulates the ROUGE score and uses MSE loss to update parameters. However, there is a gap between the extractor's ultimate objective and the objective of minimizing the absolute error between predicted scores and ROUGE scores. In fact, rather than specific scores, we care more about the relative orders of utterances.
To make full use of the comparison information between samples, we propose a Ranker-Generator framework in this paper. To balance experimental effectiveness and computational efficiency, the framework contains three steps. First, the utterances would be divided into samples. We conduct pairwise ranking to get an order for each sample. Second, the top utterances in different samples would be fed into the re-ranker, which would conduct listwise ranking to get a global order. Finally, the top K utterances would be concatenated and passed to the generator.
To summarize, our contributions are as follows:
(1) This paper demonstrates that, by enhancing the accuracy of extracting query-relevant utterances, the generator can make the summary more related to the query. (2) We propose a Ranker-Generator framework to extract query-relevant utterances by learning to rank discourse to improve the quality of the generated summaries. (3) Experimental results show that the proposed model outperforms existing multi-stage models with fewer model parameters.
## 2 Method
The architecture of our method is illustrated in Figure 2. Our model consists of a two-stage ranking step and a generating step. The utterances would be ranked by the Sample Pairwise Ranking module and the Global Listwise Re-ranking module, and top of them can be passed to the generator to produce the final summary.
## 2.1 Two-Stage Ranking
The utterance ranking orders for a brief meeting can be efficiently obtained using the single-stage ranking paradigm. However, the computing complexity of full-pairwise ranking grows at a square rate as the number of utterances grows. Therefore, we adopt a two-stage ranking framework. In the first stage, we propose sample pairwise ranking to reduce computational complexity. But sample pairwise ranking can only evaluate the relative quality within samples. It performs poorly when applied to utterances from various samples, e.g., the top utterances in sample 1 may be ranked lower in sample 2.
To overcome the above problem, we apply global listwise re-ranking and concentrate on the top-k utterances in the second stage. Utterances that are unlikely to appear in the generator are filtered out by the pairwise ranking model, then global listwise ranking is conducted to get better top-k orders.
## 2.2 Sample Pairwise Ranking
In this paper, the ROUGE (Lin, 2004) scores between utterances U and the gold summary S∗are considered as the measure of query-relevance. The utterances from one meeting are divided into various samples. In one sample, the utterances would be ordered by the ROUGE scores. The ranker should be encouraged to assign higher relevance scores to these top utterances in the order. By learning to rank in pairwise, the model can distinguish the utterances that are more relevant to the query from the comparison. Following the previous work
(Zhong et al., 2020), the loss is as follows:
$$L=\sum_{i}\sum_{j>i}max(0,f(U_{j})-f(U_{i})+\lambda_{ij})\tag{1}$$ $$\lambda_{ij}=(j-i)*\lambda\tag{2}$$ where $U_{i}$ and $U_{j}$ are the $i$-th and $j$-th utterances in gold ranking orders,
ROUGE(Ui, S∗)>ROUGE(Uj , S∗), ∀*i, j, i < j*,
λ is the base margin. f(Ui) is the predicted query-relevance score given by a cross-encoder model.
## 2.3 Global Listwise Re-Ranking
As shown in Figure 2, the top utterances in different samples are gathered in the re-ranking module.
The gold orders would be determined by ranking the utterances according to the ROUGE scores. To obtain a more precise top-ranking order, we would perform a refined global sort on these top utterances from various samples using listwise re-ranking. Inspired by ListNet (Cao et al., 2007), we optimize the permutation probability distribution between predicted scores s and the gold scores s∗. The permutation probability is defined as
$$P_{s}(\pi)=\prod_{j=1}^{n}{\frac{\phi(s_{\pi(j)})}{\sum_{t=j}^{n}\phi(s_{\pi(t)})}}$$
$$(3)$$
π is a permutation on the n objects, and ϕ(.) is an increasing and strictly positive function.
But different with ListNet, we optimize the top-k permutation probability rather than top-1 probability. The top-k permutation probability is as follows:
$$P_{s}^{k}(\pi)=\prod_{j=1}^{k}\frac{\phi(s_{\pi(j)})}{\sum_{t=j}^{n}\phi(s_{\pi(t)})}\qquad\qquad(4)$$
For example, the top-3 permutation probability of π = ⟨1, 2, 3, 4, 5⟩ is as follows:
$$P_{s}^{3}(\pi)=\frac{\phi(s_{1})}{\sum_{i=1}^{5}\phi(s_{i})}\cdot\frac{\phi(s_{2})}{\sum_{i=2}^{5}\phi(s_{i})}\cdot\frac{\phi(s_{3})}{\sum_{i=3}^{5}\phi(s_{i})}\tag{5}$$ The predicted top1-to-topk distribution is $P_{s}=\frac{1}{2}$.
(P
1 s, P2 s, · · · , Pk s), the gold top1-to-topk distribution is Ps∗ = (P
1 s∗ , P2 s∗ , · · · , Pk s∗ ) We use KLdivergence to reduce the gap between the above two distributions.
$$L=KL(P_{s^\ast}||P_s)\tag{6}$$ $$KL(P_{s^\ast}||P_s)=\sum_{i=1}^k P_{s^\ast}^i\cdot\log\frac{P_{s}^i}{P_{s}^i}\tag{7}$$ i = ...
## 2.4 Generator
As shown in Figure 2, after the two-stage ranking, top-k of the utterances would be concatenated and fed into the generator. In the generation stage, the objective is to minimize the cross-entropy loss:
$$L=-\sum_{i}p_{gt}(S_{i}|S_{<i}^{*},U)\log p(S_{i}|S_{<i}^{*},U)\tag{8}$$
$$p_{g t}(S_{i}|S_{<i}^{*},U)=\begin{cases}1&S_{i}=S_{i}^{*}\\ 0&S_{i}\neq S_{i}^{*}\end{cases}\quad\quad(9)$$
$U$ is the generator's input, $S^{*}$ is the gold summary.
## 3 Experiments 3.1 Setup 3.1.1 Implementation Details
Models are implemented using the PyTorch framework. The pre-trained BART* from the Transformers (Wolf et al., 2020) library is used as the base abstractive model. The pre-trained MiniLM†from the sentence-transformers (Reimers and Gurevych, 2019) library is used as the pairwise ranking model and the listwise re-ranking model.
All experiments are conducted on NVIDIA RTX
3090 GPU(24G memory). The generator model is trained for 10 epochs. For one model training, the average running time is around 2 hours. Weight hyperparameter λ is 0.01 in Equation 2. The generator's max length of the input is 1024, max length of the output is 256. Learning rate is 5e-6.
Models were evaluated using the ROUGE metrics (Lin, 2004) in the SummEval toolkit (Fabbri et al., 2021) and each pair of results was subjected to t-test to confirm the effectiveness of our method.
## 3.1.2 Datasets Details
QMSum (Zhong et al., 2021b) is a query-focused meeting summarization dataset consisting of 1,808 query-summary pairs over 232 meetings from product design, academic, and political committee meetings. Additionally, QMSum contains manual annotations such as topic segmentation and relevant spans related to the reference summary.
## 3.1.3 Baselines Details
We compare the proposed method with several baselines. **TextRank** (Mihalcea and Tarau, 2004) is an extractive summarization method with a graphbased ranking model. **PGNet** (See et al., 2017)
uses pointer mechanism to copy tokens from source texts. **BART** (Lewis et al., 2020) is a pre-trained encoder-decoder Transformer model with a denoising objective, which achieves advanced performance on several summarization datasets(i.e.
CNN/DailyMail (Hermann et al., 2015) and Xsum
*The checkpoint is "facebook/bart-large", containing around 400M parameters.
†The checkpoint is "cross-encoder/ms-marco-MiniLM-L12-v2", containing around 134M parameters.
| Models | ROUGE-1 | ROUGE-2 | ROUGE-L | Extractor Size(M) |
|-----------------------------------------------|--------------|-------------|--------------|---------------------|
| TextRank (Mihalcea and Tarau, 2004) | 16.27 | 2.69 | 15.41 | - |
| PGNet (See et al., 2017) | 28.74 | 5.98 | 25.13 | - |
| BART (Lewis et al., 2020) | 29.20 | 6.37 | 25.49 | - |
| LEAD + BART | 32.06 | 9.67 | 27.93 | - |
| HMNet (Zhu et al., 2020) | 32.29 | 8.67 | 28.17 | - |
| Longformer (Beltagy et al., 2020) | 34.18 | 10.32 | 29.95 | - |
| DialogLM (Zhong et al., 2021a) | 33.69 | 9.32 | 30.01 | - |
| SUMMN (Zhang et al., 2022) | 34.03 | 9.28 | 29.48 | - |
| DYLE (Mao et al., 2022) | 34.42 | 9.71 | 30.10 | 501 |
| Pointer Network + PGNet (Zhong et al., 2021b) | 31.37 | 8.47 | 27.08 | 440 |
| Pointer Network + BART (Zhong et al., 2021b) | 31.74 | 8.53 | 28.21 | 440 |
| RELREG-TT (Vig et al., 2022) | 33.02 | 10.17 | 28.90 | 329 |
| RELREG (Vig et al., 2022) | 34.91 | 11.91 | 30.73 | 1372 |
| Oracle | 43.80 | 19.63 | 39.10 | |
| Locator-Generator | 31.47(-3.77) | 8.53(-3.70) | 28.21(-3.07) | 134 |
| Simulator-Generator | 32.92(-2.59) | 9.46(-2.77) | 28.93(-2.35) | 134 |
| Ranker-Generator | 35.51 | 12.23 | 31.28 | 134 |
| RankSUM(w/o re-ranking) | 33.02(-2.49) | 9.73(-2.50) | 29.15(-2.13) | 134 |
| Models | Top 5 | Top 10 | | | | |
|-----------|---------|----------|-------|-------|------|-------|
| R-1 | R-2 | R-L | R-1 | R-2 | R-L | |
| Gold | 26.32 | 7.58 | 24.43 | 20.55 | 5.15 | 19.29 |
| LEAD | 11.15 | 0.99 | 10.17 | 12.11 | 1.11 | 11.10 |
| RELREG | 18.02 | 2.46 | 15.30 | 15.02 | 2.35 | 13.23 |
| Locator | 16.89 | 2.24 | 13.97 | 14.10 | 1.97 | 12.75 |
| Simulator | 17.06 | 2.36 | 14.88 | 14.44 | 2.14 | 13.06 |
| Ours | 20.07 | 3.69 | 17.78 | 17.08 | 3.01 | 15.48 |
(Narayan et al., 2018)). **LEAD+BART** uses the beginning utterances as the BART's input. **HMNet** (Zhu et al., 2020) uses a hierarchical attention mechanism and cross-domain pre-training for meeting summarization. **Longformer** (Beltagy et al., 2020) replaces the quadratic self-attention mechanism with a combination of local attention and sparse global attention. **DialogLM** (Zhong et al., 2021a) is a pre-train model using intrawindow denoising self-reconstruction pre-training task and intra-block inter-block mixing attention.
SUMMN (Zhang et al., 2022) is a multi-stage summarization framework for the long-input summarization task. **DYLE** (Mao et al., 2022) treats the extracted text snippets as the latent variable and jointly trains the extractor and the generator. **Point Network+PGNet** and **Point Network+BART** (Zhong et al., 2021b) adopt a twostage approach of locate-then-summarize for long meeting summarization. **RELREG-TT** (Vig et al.,
2022) and **RELREG** (Vig et al., 2022) considers extracting as a ROUGE regression model using bi-encoder and cross-encoder.
## 3.2 Results & Analysis
The ROUGE score (Lin, 2004) is adopted as the evaluation metric. The performances of our method and baselines are summarized in Table 1. Experimental results show that our method significantly outperforms the baselines (p < 0.05) on QMSum dataset with fewer parameters.
To have a fair comparison among the three frameworks, we design an experiment to evaluate the performance of these frameworks using the same backbone as the extractor and the same generator.
The experimental results show that the proposed model significantly outperforms Locator-Generator and Simulator-Generator, which demonstrates that the ranker can obtain meeting utterances that are more suitable for the generator by learning to rank utterances.
To verify the effectiveness of the two-stage ranking paradigm, we conduct an ablation experiment.
Our model significantly outperforms the model without re-ranking module (p < 0.05). Experimental results show that the model without re-ranking module reduces 2.49 ROUGE-1, 2.50 ROUGE-2, 2.13 ROUGE-L scores, which demonstrates the importance of the re-ranking module. By listwise ranking, we can get a more precise top-ranking order.
We have an interesting observation. Unlike the ROUGE score regression model, the ranker is less sensitive to the model size. We believe this is because learning the relative order by comparison is easier than fitting ROUGE scores separately. It reduces the ranker's reliance on the model size by
Models Flu. QR. FC.
Gold 4.88 4.90 4.92 BART 4.48 3.78 3.64
RELREG 4.51 4.12 4.07 Locator-Generator 4.45 3.90 3.83
Simulator-Generator 4.48 4.01 4.02
Ours **4.52 4.40 4.21**
making full use of the comparison between samples. As a training task for extractors, learning to rank is a more suitable objective. Since to the extractor, it is the relative order that matters rather than the absolute error in fitting the ROUGE score.
## 3.3 Extractor Performance
We conduct experiments to evaluate the performance of the extractor, which help to explore the impact of the extractor on the quality of the generated summaries. The lexical overlap metric between the extracted utterances and the gold summary is used to measure the relevance of the meeting utterances to the summary/query. The experimental results show that the ranker significantly outperforms the baselines in extracting relevant utterances. It demonstrates that by learning to rank utterances, the ranker is able to extract the utterances that are more relevant to the summary/query.
## 3.4 Human Evaluation
We further conduct a manual evaluation to assess the models. We randomly select 50 samples from QMSum and ask 5 professional linguistic evaluators to score the ground truth and summaries generated by 5 models according to 3 metrics: fluency, query relevance and factual consistency. Each metric is rated from 1 (worst) to 5 (best) and the scores for each summary are averaged.
As shown in Table 3, the proposed model significantly outperforms all the baselines on query relevance, which benefits from the extractor's improvement on selecting the relevant utterances. Besides, the factual consistency score is also improved. We think that by comparing the relevance between utterances and the summary/query, the top utterances are more relevant to each other, which may help to improve factual consistency. In the aspect of fluency, the proposed model has only slight improvement compared to the baselines.
## 4 Conclusion
This paper proposes a new multi-stage framework for QFMS. It learns to rank the meeting utterances by pairwise and listwise comparison between them. By selecting the utterances with high query-relevance scores as the generator's input, the generator can produce high-quality summaries that are more relevant to the query. The experiments demonstrate the effectiveness of the RankerGenerator framework.
## 5 Acknowledgements
This work was supported by MoE-CMCC "Artifical Intelligence" Project No. MCM20190701 and the National Natural Science Foundation of China
(NSFC No.62076031).
We thank the anonymous reviewers for valuable feedback and helpful suggestions.
## Limitations
This paper mainly focuses on the Query-focused Meeting Summarization(QFMS) task. Besides, We have explored the performance of the RankerGenerator framework on the long-input summarization task. But the results do not show a significant improvement. Although QMSum dataset is also faced with the long-input challenge, the QFMS
task only summarizes specific parts of the original text, so it can take these parts as the input. While the goal of the long-input summarization task is to generate an overall summary, which needs to have a global view on the original text. So we think the extract-then-generate framework is unsuitable for the long-input summarization task. The previous work SUMMN (Zhang et al., 2022) is more suitable for the long-input summarization task.
In addition, the multi-stage approach has a performance disadvantage over the end-to-end approach. However, the computational complexity of the multi-stage approach is much lower than that of the end-to-end approach. The multi-stage approach can balance experimental performance and computational complexity. So it is worthy of exploration as well as the end-to-end approach.
## Ethics Statement
In this paper, all experiments are conducted on **QMSum** (Zhong et al., 2021b), which is open-source and obeys MIT license. The meeting transcripts data doesn't contain any privacy information(such as password, phone number and trade secrets) or offensive content.
## References
Tal Baumel, Matan Eyal, and Michael Elhadad. 2018.
Query focused abstractive summarization: Incorporating query relevance, multi-document coverage, and summary length constraints into seq2seq models.
arXiv preprint arXiv:1801.07704.
Iz Beltagy, Matthew E. Peters, and Arman Cohan.
2020. Longformer: The long-document transformer.
arXiv:2004.05150.
Zhe Cao, Tao Qin, Tie-Yan Liu, Ming-Feng Tsai, and Hang Li. 2007. Learning to rank: from pairwise approach to listwise approach. In *International Conference on Machine Learning*.
Alexander R. Fabbri, Wojciech Krysci ´ nski, Bryan Mc- ´
Cann, Caiming Xiong, Richard Socher, and Dragomir Radev. 2021. SummEval: Re-evaluating summarization evaluation. *Transactions of the Association for* Computational Linguistics, 9:391–409.
Karl Moritz Hermann, Tomas Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 2015. Teaching machines to read and comprehend. *Advances in neural information* processing systems, 28.
Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020.
BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*,
pages 7871–7880, Online. Association for Computational Linguistics.
Chin-Yew Lin. 2004. ROUGE: A package for automatic evaluation of summaries. In *Text Summarization Branches Out*, pages 74–81, Barcelona, Spain.
Association for Computational Linguistics.
Marina Litvak and Natalia Vanetik. 2017. Query-based summarization using MDL principle. In Proceedings of the MultiLing 2017 Workshop on Summarization and Summary Evaluation Across Source Types and Genres, pages 22–31, Valencia, Spain. Association for Computational Linguistics.
Ziming Mao, Chen Henry Wu, Ansong Ni, Yusen Zhang, Rui Zhang, Tao Yu, Budhaditya Deb, Chenguang Zhu, Ahmed Awadallah, and Dragomir Radev. 2022.
DYLE: Dynamic latent extraction for abstractive long-input summarization. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1687–1698, Dublin, Ireland. Association for Computational Linguistics.
Rada Mihalcea and Paul Tarau. 2004. TextRank: Bringing order into text. In *Proceedings of the 2004 Conference on Empirical Methods in Natural Language* Processing, pages 404–411, Barcelona, Spain. Association for Computational Linguistics.
Shashi Narayan, Shay B. Cohen, and Mirella Lapata.
2018. Don't give me the details, just the summary!
topic-aware convolutional neural networks for extreme summarization. In *Proceedings of the 2018* Conference on Empirical Methods in Natural Language Processing, pages 1797–1807, Brussels, Belgium. Association for Computational Linguistics.
Nils Reimers and Iryna Gurevych. 2019. SentenceBERT: Sentence embeddings using Siamese BERTnetworks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3982–3992, Hong Kong, China. Association for Computational Linguistics.
Tobias Rohde, Xiaoxia Wu, and Yinhan Liu. 2021. Hierarchical learning for generation with long source sequences. *ArXiv*, abs/2104.07545.
Abigail See, Peter J. Liu, and Christopher D. Manning.
2017. Get to the point: Summarization with pointergenerator networks. In *Proceedings of the 55th Annual Meeting of the Association for Computational* Linguistics (Volume 1: Long Papers), pages 1073–
1083, Vancouver, Canada. Association for Computational Linguistics.
Jesse Vig, Alexander Fabbri, Wojciech Kryscinski, Chien-Sheng Wu, and Wenhao Liu. 2022. Exploring neural models for query-focused summarization. In Findings of the Association for Computational Linguistics: NAACL 2022, pages 1455–1468, Seattle, United States. Association for Computational Linguistics.
Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Transformers: State-of-the-art natural language processing.
In *Proceedings of the 2020 Conference on Empirical* Methods in Natural Language Processing: System Demonstrations, pages 38–45, Online. Association for Computational Linguistics.
Yusen Zhang, Ansong Ni, Ziming Mao, Chen Henry Wu, Chenguang Zhu, Budhaditya Deb, Ahmed Awadallah, Dragomir Radev, and Rui Zhang. 2022. Summn: A
multi-stage summarization framework for long input dialogues and documents. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1592–
1604, Dublin, Ireland. Association for Computational Linguistics.
Ming Zhong, Pengfei Liu, Yiran Chen, Danqing Wang, Xipeng Qiu, and Xuanjing Huang. 2020. Extractive summarization as text matching. In *Proceedings* of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6197–6208, Online.
Association for Computational Linguistics.
Ming Zhong, Yang Liu, Yichong Xu, Chenguang Zhu, and Michael Zeng. 2021a. Dialoglm: Pre-trained model for long dialogue understanding and summarization. In *AAAI Conference on Artificial Intelligence*.
Ming Zhong, Da Yin, Tao Yu, Ahmad Zaidi, Mutethia Mutuma, Rahul Jha, Ahmed Hassan Awadallah, Asli Celikyilmaz, Yang Liu, Xipeng Qiu, and Dragomir Radev. 2021b. QMSum: A new benchmark for querybased multi-domain meeting summarization. In *Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational* Linguistics: Human Language Technologies, pages 5905–5921, Online. Association for Computational Linguistics.
Chenguang Zhu, Ruochen Xu, Michael Zeng, and Xuedong Huang. 2020. A hierarchical network for abstractive meeting summarization with cross-domain pretraining. In *Findings of the Association for Computational Linguistics: EMNLP 2020*, pages 194–
203, Online. Association for Computational Linguistics.
## A Case Study
Query: Why did the industrial designer disagree with the marketing when talking about some new ideas about future products?
Gold: The marketing thought they should not trust too much the trend because it wouldn't last long - fruits and vegetables were in fashion this year but maybe next year it would be insects. However, the industrial designer believed it was good to follow the flow now and then since electronic devices were not long life products: if people changed, the product appearance should change as well.
Previous: Industrial Designer thought that it would be interesting to see if people would buy this one, and maybe add some features to it after.
However, he thought it would not be very productive to go through all the process from the beginning to the end.
Ours: The industrial designer thought it was good to follow the flow and make it now and after, if the people change their mind, you change also the product.
About future trend Query: What did Barry Hughes think about the targets regarding performance measure? Gold: Barry Hughes stated that they did not have targets and only prosecuted with reasonable prospects. As for the police, there were no targets for the police anymore, except for some individuals.
Previous: Barry Hughes thought that there was a shortage of registered intermediaries in Wales, and the Ministry of Justice had taken action to deal with that, and they had had a number of people who were now in a position to act as intermediaries. Now, of course, if they were to decide not to do that anymore, they may have a problem, but, in turn, they would be looking to recruit more people into those positions. So, it had the potential to serve as a barrier, but in practice, he didn't think it would be a barrier. Ours: Barry Hughes thought that the police were much more sensible these days than they may have been 20 years ago, in terms of trying to get cases charged in order to meet some notional target. In the CPS, they did not have targets for securing convictions or not. Clearly, they prosecuted if they thought that there was a reasonable prospect, a realistic prospect, of conviction and it was in the public interest.
About the performance measure
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
✓ A1. Did you describe the limitations of your work?
The Limitations section.
✓ A2. Did you discuss any potential risks of your work?
The Ethics Statement section.
✓ A3. Do the abstract and introduction summarize the paper's main claims?
The Abstract and Section 1.
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✓ **Did You Use Or Create Scientific Artifacts?** Yes, The References Section.
✓ B1. Did you cite the creators of artifacts you used?
Yes, the References section.
✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
The Ethics Statement section.
✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
The Limitations section.
✓ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
The Ethics Statement section.
B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
Not applicable. Left blank.
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
Appendix B.
## C ✓ **Did You Run Computational Experiments?** Section 3.
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
Appendix A.
The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance.
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Appendix A.
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
Appendix A.
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
Appendix A.
D ✓ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Section 3.4.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
Not applicable. Left blank.
✓ D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)?
Section 3.4.
✓ D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
The Ethics Statement section.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
Not applicable. Left blank.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
Not applicable. Left blank. |
lawton-etal-2023-neural | Neural Architecture Search for Parameter-Efficient Fine-tuning of Large Pre-trained Language Models | https://aclanthology.org/2023.findings-acl.539 | Parameter-efficient tuning (PET) methods fit pre-trained language models (PLMs) to downstream tasks by either computing a small compressed update for a subset of model parameters, or appending and fine-tuning a small number of new model parameters to the pre-trained network. Hand-designed PET architectures from the literature perform well in practice, but have the potential to be improved via automated neural architecture search (NAS). We propose an efficient NAS method for learning PET architectures via structured and unstructured pruning. We present experiments on GLUE demonstrating the effectiveness of our algorithm and discuss how PET architectural design choices affect performance in practice. | # Neural Architecture Search For Parameter-Efficient Fine-Tuning Of Large Pre-Trained Language Models
Neal Lawton1∗ Anoop Kumar2 Govind Thattai2 Aram Galstyan2 **Greg Ver Steeg**2 1Information Sciences Institute 2Amazon Alexa AI
[email protected] {anooamzn,thattg,argalsty,gssteeg}@amazon.com
## Abstract
Parameter-efficient tuning (PET) methods fit pre-trained language models (PLMs) to downstream tasks by either computing a small compressed update for a subset of model parameters, or appending and fine-tuning a small number of new model parameters to the pretrained network. Hand-designed PET architectures from the literature perform well in practice, but have the potential to be improved via automated neural architecture search (NAS).
We propose an efficient NAS method for learning PET architectures via structured and unstructured pruning. We present experiments on GLUE demonstrating the effectiveness of our algorithm and discuss how PET architectural design choices affect performance in practice.
## 1 Introduction
Fine-tuning a large pre-trained language model is a popular method for solving many downstream natural language processing (NLP) tasks. *Full finetuning* involves fine-tuning all parameters of the base PLM, resulting in a fine-tuned copy of the model. However, full fine-tuning becomes cumbersome when fine-tuning on multiple downstream tasks due to the massive size of state-of-the-art language models, which range from the millions (Devlin et al., 2018; Liu et al., 2019) to billions (Brown et al., 2020) and now trillions (Fedus et al., 2022)
of parameters. Full fine-tuning also carries a risk of *catastrophic forgetting* (Jang et al., 2021; Chen et al., 2022), wherein the PLM's learned useful representation of natural language data is forgotten during fine-tuning.
To address those problems, recent research has focused on *parameter-efficient tuning* (PET).
Rather than fine-tuning all parameters of the base PLM, PET methods choose a small subset of parameters to fine-tune (Zaken et al., 2021; Guo et al.,
2020), or compute compressed parameter updates
*Work done while at Amazon Alexa AI
(Hu et al., 2021; Mahabadi et al., 2021), or append and fine-tune a small subset of new parameters
(Houlsby et al., 2019; Li and Liang, 2021; Hambardzumyan et al., 2021; He et al., 2021). Each of these methods has their own advantages and disadvantages, but one question relevant to all these methods is *which parts of the network are most efficient to fine-tune, and what is the most parameterefficient way to fine-tune them*?
Here we answer this question by designing and applying a fine-grain NAS method for learning PET
architectures. Our method uses a first order approximation of the loss function and is computationally efficient. We compare our approach with several hand-designed PET methods and find that the architectures learned by our method generally achieve comparable or higher development set performance on GLUE tasks (Wang et al., 2018) for the same number of parameters. We conclude by examining the PET architectures learned by our method and discuss the affect of architecture design choices on parameter efficiency.
## 2 Related Work
Many different PET methods exist in the literature.
Adapter networks insert and fine-tune small adapter modules to a base PLM. Rebuffi et al. (2017) introduced adapter networks to the visual domain, and Houlsby et al. (2019) introduced adapters to transformers. Adapters have been applied to text generation (Lin et al., 2020), translation (Bapna et al., 2019), and multi-task learning (Pfeiffer et al.,
2020c,a). Peters et al. (2019) compare adaptation with full fine-tuning. AdapterHub (Pfeiffer et al.,
2020b) enables easy sharing of adapter models.
Additionally, Mosbach et al. (2020) propose best practices for producing strong full fine-tuning baselines.
Prompt-tuning methods fine-tune a PLM by inserting prompt tokens into the input sequence.
Continuous prompts (Li and Liang, 2021; Lester et al., 2021; Hambardzumyan et al., 2021) or discrete prompts (Shin et al., 2020) can be learned or engineered (Brown et al., 2020). Gu et al. (2021) demonstrate the effectiveness of pretraining prompts for low resource tasks.
Some methods fine-tune a subset of parameters
(Zaken et al., 2021; Guo et al., 2020), or compute compressed parameter updates (Hu et al., 2021; Mahabadi et al., 2021). These methods fine-tune the PLM without increasing test-time inference latency. He et al. (2021) and Mao et al. (2021)
combine multiple PET methods.
Beyond parameter-efficient tuning, NAS has previously been used to discover more parameterefficient base language models. Cheong and Daniel
(2019) use magnitude pruning to reduce the number of parameters in BERT. Many efforts at pruning BERT have focused on pruning attention heads from the multi-head attention (MHA) modules
(Michel et al., 2019; Voita et al., 2019; Li et al.,
2021). Sajjad et al. (2020) evaluate different adhoc strategies for shrinking the depth of a BERT
encoder. So et al. (2019) use an evolutionary NAS
method to learn an improved transformer cell. In contrast to NAS, distillation can be used to compress language models (Sanh et al., 2019; Jiao et al.,
2019; Sun et al., 2020).
In our experiments section, we examine the architectures learned by our algorithm and consider what they say about which parts of the network are most parameter-efficient to fine-tune. Merchant et al. (2020) explore a similar question, probing the network activations to understand how the network's representation of natural language data changes during full fine-tuning.
## 3 Method
The architecture search space we choose for our NAS method is based on BitFit (Zaken et al., 2021)
and LoRA (Hu et al., 2021), two of the most popular methods for parameter-efficient fine-tuning in the literature. We consider both structured and unstructured variants of each of these, where the non-zero pattern of the learned PET parameters is restricted or unrestricted, respectively. Specifically, our search space consists of the following:
1. Learning an update ∆b for each vector of bias parameters b. In *structured bias-tuning*, for each PLM module, the NAS algorithm must choose whether ∆b = 0 or not. In *unstructured bias-tuning*, for each PLM module, the
NAS algorithm must choose which components of ∆b should be zero or non-zero.
2. Learning a low-rank (LoRA Hu et al., 2021)
update ∆W = UV ⊤ for each user-specified parameter matrix W. The maximum possible rank for the update is also user-specified.
In *structured LoRA*, for each parameter matrix W, the NAS algorithm must decide what the rank of the update UV ⊤ should be. In unstructured LoRA, the NAS algorithm must decide which components of U and V should be non-zero.
The collection of updates ∆b and ∆W are the PET parameters. In this search space, any number of the above PET modules can be applied to a base PLM without increasing the latency of inference, just like BitFit (Zaken et al., 2021) and LoRA (Hu et al., 2021).
## 3.1 Pruning
We perform NAS via pruning. Our NAS method begins by training a PET architecture of a maximum user-specified size: for each bias tuning module, we fine-tune all bias parameters, and for each LoRA update module, we learn a dense low-rank update with a user-specified rank (in all our experiments, we use rank-16 initial LoRA updates). After training the initial PET architecture, our method decides which PET parameters to prune and which to keep. Then we re-initialize and re-train the pruned architecture before evaluating on the validation set.
The criteria that we use to decide which PET parameters to prune is based on a first-order approximation of the change in training loss that results from pruning a PET parameter θ:
## −Θ · ∂L ∂Θ .
Note that this is a common pruning criterion, e.g.,
see Molchanov et al. (2016). This criterion is straight forward to use when deciding whether to prune a single PET parameter, as in unstructured bias-tuning and unstructured LoRA. For structured bias-tuning, we sum this criterion over the entire bias update ∆b, and for structured LoRA, when considering what column of U and V to prune, we sum the criterion over each column of U.
Pruning via evaluating the criterion at the end of training does not yield better-than-random architectures. We observe that the value of the pruning
| Method | #params | MNLI | SST-2 | MRPC | CoLA | QNLI | QQP | RTE | STS-B | Avg. |
|-----------|-----------|--------|---------|--------|--------|--------|-------|-------|---------|--------|
| FFT | 355M | 90.6 | 96.0 | 89.2 | 66.8 | 94.6 | 91.6 | 85.2 | 91.5 | 88.2 |
| BitFit | 273k | 89.2 | 95.6 | 88.2 | 65.0 | 93.9 | 88.1 | 81.9 | 91.4 | 86.7 |
| Adapters† | 3.0M | 90.2 | 96.1 | 90.2 | 68.3 | 94.8 | 91.9 | 83.8 | 92.1 | 88.4 |
| LoRA | 3.4M | 90.7 | 95.3 | 89.7 | 65.1 | 93.8 | 90.3 | 84.8 | 91.7 | 87.7 |
| MaM | 3.4M | 90.6 | 95.3 | 89.7 | 65.1 | 93.8 | 90.3 | 84.8 | 91.7 | 87.7 |
| S-MaM | 3.4M | 90.6 | 95.9 | 90.4 | 66.3 | 94.5 | 90.6 | 85.2 | 91.6 | 88.1 |
| U-MaM | 3.4M | 90.3 | 95.8 | 90.7 | 66.8 | 94.1 | 90.8 | 85.9 | 91.8 | 88.3 |
| WARP† | 25k | 88.2 | 96.0 | 90.8 | 60.6 | 93.5 | 84.5 | 75.8 | 88.6 | 84.8 |
| S-BitFit | 25k | 84.1 | 94.2 | 70.6 | 40.2 | 88.9 | 83.8 | 56.0 | 76.8 | 74.3 |
| U-BitFit | 25k | 88.8 | 95.5 | 85.3 | 62.1 | 93.5 | 87.7 | 74.0 | 90.3 | 84.6 |
criterion may change drastically from one stochastic gradient descent (SGD) step to the next. To maximally smooth the noise introduced by SGD,
we instead average the pruning criterion over all training SGD steps. This yields the most consistent indication of which PET parameters are efficient to prune.
Our NAS algorithm takes as input a parameter budget specifying the desired maximum number of parameters in the learned PET architecture. After training the initial PET architecture and evaluating each pruning criterion, we apply each pruning operation in increasing criterion order until the number of parameters in the PET architecture falls below the parameter budget. This way, pruning operations that are estimated to increase the training loss the least are applied first.
## 3.2 Initialization
Correct initialization is important for successfully applying this algorithm. After pruning, we reinitialize and re-train the learned PET architecture before evaluating on the validation set. We find that it is important to use the same initialization after pruning as before. We believe this is a consequence of the lottery ticket hypothesis (Frankle and Carbin, 2018).
We always initialize bias parameter updates as zero, as do other works, and find this works well. However, we find that the initialization for LoRA
given in the original paper (Hu et al., 2021), which initializes the matrix U with zeros and V with a Gaussian distribution, is not ammenable to unstructured LoRA pruning. Because the parameters in the matrix U are initialized zero, the magnitudes of those parameters are likely to remain small throughout training relative to the magnitudes of the parameters in V⊤. Consequently, the pruning criterion for unstructured LoRA updates is likely to favor pruning parameters from U over V , leading to an unbalanced, parameter-inefficient LoRA update. Instead, following the same reasoning given for Kaiming initialization (He et al., 2015), we recommend the following initialization:
$$U\sim{\mathcal{N}}(0,1/{\sqrt{m}})$$
√m) V ∼ N (0, 1/
$$V\sim{\mathcal{N}}(0,1/{\sqrt{n}}),\quad(1)$$
where m is the first dimension of the matrix U
(i.e., the "fan-in"), and n is the second dimension of the matrix V⊤ (i.e., the "fan-out"). With this initialization, the expected square gradients for the parameters of U and V are equal.
## 4 Experiments
Details of our experimental setup, including hyperparameter choices, are available in the appendix. In all experiments we report median validation score at the end of training over 5 random initializations using the GLUE development set for validation.
## 4.1 Comparing To Full Fine-Tuning
Here we present results for training larger PET architectures with the aim of achieving performance similar to full fine-tuning, but with fewer parameters. In addition to structured or unstructured bias-tuning, our learned PET architectures add structured or unstructured LoRA updates to the MHA query modules, key modules, and the dense feed forward network (FFN) modules. In Table 1, our learned structured PET architecture is labeled S-MaM, and our learned unstructured PET architecture is labeled U-MaM. We compare our method with
![3_image_0.png](3_image_0.png)
a LoRA baseline (Hu et al., 2021) and a baseline similar to Mix-and-Match (MaM) (He et al., 2021).
Our LoRA baseline fine-tunes all bias parameters and adds rank-8 updates to all MHA query and key modules. Our MaM-like baseline fine-tunes all bias parameters and adds rank-8 updates to all MHA
query and key modules and all FFN modules.
Results for this experiment with parameter budget 3.4M are in Table 1. In our S-MaM and U-MaM
experiments, we prune from an initial architecture with 6.8M parameters. We observe that our S-MaM
architecture achieves slightly higher average GLUE
(Wang et al., 2018) validation score over our MaMlike baseline, and our U-MaM architecture slightly higher average GLUE validation score over our S-MaM architecture. We conclude that structured architecture search provides a small positive benefit over the uniform-rank baseline architecture, and that unstructured architecture search provides a small positive benefit over structured architecture search. We also observe our U-MaM architecture achieves average GLUE validation score on par with full fine-tuning while fine-tuning approximately 100 times fewer parameters.
## 4.2 Very Small Pets
Here we examine our learned PET architectures with parameter budget less than the total number of bias parameters in the base PLM. For roberta-large, this is about 273k.
We use our method to learn structured and unstructured bias-tuning architectures. We compare our method with WARP (Hambardzumyan et al.,
2021) using parameter budget 25k in Table 1, and report results for our method for other parameter budgets in the appendix. Our learned structured and unstructured bias-tuning architectures are labeled S-BitFit and U-BitFit, respectively. In our S-BitFit and U-BitFit experiments, we prune from a PET architecture with 273k parameters that fine-tuens all bias parameters, the same as BitFit. We observe that the unstructured biastuning architecture achieves significantly higher validation performance than the structured biastuning architecture with the same parameter budget.
We conclude that the subset of bias parameters that are "good" to fine-tune are not concentrated in a few modules, but rather are distributed throughout the network. Our learned unstructured bias-tuning architecture with < 50k parameters fine-tunes only 18% of all bias parameters while achieving validation GLUE score only slightly less than fine-tuning all bias parameters (86.5 versus 86.7). We conclude that a vast majority of bias parameters do not need to be fine-tuned to achieve performance comparable to fine-tuning all bias parameters. With a parameter budget of 25k, unstructured bias tuning achieves similar performance compared to WARP,
beating or tying WARP on a majority of GLUE
tasks but achieving slightly worse average performance. We conclude that both methods are about equally effective.
## 4.3 Interpreting Learned Architectures
Here we examine the architectures learned by our algorithm and consider what they say about which parts of the network are most parameter-efficient to fine-tune. Each illustration discussed in this section averages the architectures learned by our method over all GLUE tasks and five random initializations per task. Figure 1a illustrates the architecture learned by our method for structured bias-tuning with parameter budget 50k. We observe a clear preference by our algorithm for fine-tuning the biases of the intermediate.dense modules in the middle of the network. Figure 1b illustrates the architecture learned by our method for unstructured bias tuning with parameter budget 50k. We observe a weak preference for fine-tuning the bias parameters of modules in the middle of the network, but not for any particular module type within each transformer block. We conclude that the biases that are most parameter-efficient to fine-tune are in the middle layers of the network.
## 5 Conclusion
In this paper, we considered the question which parts of the network are most efficient to fine-tune, and what is the most parameter-efficient way to fine-tune them? To answer that question, we developed a NAS algorithm based on structured and unstructured pruning. We presented experimental results on RoBERTa Large demonstrating the effectiveness of our algorithm, achieving GLUE
validation performance similar to WARP at 25k parameters (9% of all biases), similar to BitFit at 50k parameters (18% of all biases), and similar to full fine-tuning at 3.4M parameters (10% of all parameters). From our learned architectures we observed that the bias parameters in the middle layers of the network are most efficient to fine-tune. We conclude that it is important to consider *where* to fine-tune as well as how.
## Limitations
Differences in experimental setup may make it difficult to accurately and fairly compare published results. For example, to prevent data leakage, we report validation performance at the end of training and do not perform early stopping. This is in contrast to most other papers which report peak validation performance. Results reported for other methods are reproduced in the same learning environment as our method unless explicitly stated otherwise. This takes into account recent work demonstrating problems with fairly and accurately evaluating PET methods that use early stopping improperly (Chen et al., 2022).
Although many pruning criteria exist in the literature, in this paper we only consider one pruning criterion. Although not presented in this paper, experiments we conducted with various formulations of magnitude pruning did not produce better results.
Although prompt tuning is a popular PET
method, we do not perform NAS for prompt tuning to determine the most efficient positions for inserting prompt tokens into the input. Pruning may or may not prove to be a successful strategy for this problem.
Other NAS strategies exist in the literature besides pruning, such as evolutionary, reinforcement learning, and DARTS (Liu et al., 2018). However, our pruning method seems to give a good trade-off between validation performance and computational expense.
## Ethics Statement
Powerful language models can be used for unethical purposes, such as generating offensive or deceptive content. Although researchers today are making a greater effort to establish protections against the unethical use of their models, bad actors may still find ways to circumvent those protections. One avenue for attack could involve fine-tuning a PLM
on a nefarious dataset to produce unethical content. In this paper, we showed that a PLM can be successfully fine-tuned on a downstream task by fine-tuning a small number of parameters, or adding a low-rank update to a few select parameter matrices. Thus researchers should consider the risk posed by unethical parameter-efficient fine-tuning before publishing a fine-tuneable version of their model.
## References
Ankur Bapna, Naveen Arivazhagan, and Orhan Firat.
2019. Simple, scalable adaptation for neural machine translation. *arXiv preprint arXiv:1909.08478*.
Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. Advances in neural information processing systems, 33:1877–1901.
Guanzheng Chen, Fangyu Liu, Zaiqiao Meng, and Shangsong Liang. 2022. Revisiting parameterefficient tuning: Are we really there yet? arXiv preprint arXiv:2202.07962.
Robin Cheong and Robel Daniel. 2019. transformers.
zip: Compressing transformers with pruning and quantization. *Technical report, tech. rep., Stanford* University, Stanford, California.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: Pre-training of deep bidirectional transformers for language understanding. *arXiv preprint arXiv:1810.04805*.
William Fedus, Barret Zoph, and Noam Shazeer. 2022.
Switch transformers: Scaling to trillion parameter models with simple and efficient sparsity. The Journal of Machine Learning Research, 23(1):5232–
5270.
Jonathan Frankle and Michael Carbin. 2018. The lottery ticket hypothesis: Finding sparse, trainable neural networks. *arXiv preprint arXiv:1803.03635*.
Yuxian Gu, Xu Han, Zhiyuan Liu, and Minlie Huang.
2021. PPT: Pre-trained prompt tuning for few-shot learning. *arXiv preprint arXiv:2109.04332*.
Demi Guo, Alexander M Rush, and Yoon Kim. 2020.
Parameter-efficient transfer learning with diff pruning. *arXiv preprint arXiv:2012.07463*.
Karen Hambardzumyan, Hrant Khachatrian, and Jonathan May. 2021. WARP: Word-level adversarial reprogramming. *arXiv preprint arXiv:2101.00121*.
Junxian He, Chunting Zhou, Xuezhe Ma, Taylor BergKirkpatrick, and Graham Neubig. 2021. Towards a unified view of parameter-efficient transfer learning. arXiv preprint arXiv:2110.04366.
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2015. Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification. In *Proceedings of the IEEE International Conference* on Computer Vision, pages 1026–1034.
Neil Houlsby, Andrei Giurgiu, Stanislaw Jastrzebski, Bruna Morrone, Quentin De Laroussilhe, Andrea Gesmundo, Mona Attariyan, and Sylvain Gelly. 2019.
Parameter-efficient transfer learning for NLP. In *International Conference on Machine Learning*, pages 2790–2799. PMLR.
Edward J Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. 2021. LoRA: Low-rank adaptation of large language models. *arXiv preprint* arXiv:2106.09685.
Joel Jang, Seonghyeon Ye, Sohee Yang, Joongbo Shin, Janghoon Han, Gyeonghun Kim, Stanley Jungkyu Choi, and Minjoon Seo. 2021. Towards continual knowledge learning of language models. *arXiv* preprint arXiv:2110.03215.
Xiaoqi Jiao, Yichun Yin, Lifeng Shang, Xin Jiang, Xiao Chen, Linlin Li, Fang Wang, and Qun Liu. 2019.
TinyBERT: Distilling BERT for natural language understanding. *arXiv preprint arXiv:1909.10351*.
Diederik P Kingma and Jimmy Ba. 2014. Adam: A
method for stochastic optimization. *arXiv preprint* arXiv:1412.6980.
Brian Lester, Rami Al-Rfou, and Noah Constant. 2021.
The power of scale for parameter-efficient prompt tuning. *arXiv preprint arXiv:2104.08691*.
Jiaoda Li, Ryan Cotterell, and Mrinmaya Sachan. 2021.
Differentiable subset pruning of transformer heads.
Transactions of the Association for Computational Linguistics, 9:1442–1459.
Xiang Lisa Li and Percy Liang. 2021. Prefix-tuning:
Optimizing continuous prompts for generation. *arXiv* preprint arXiv:2101.00190.
Zhaojiang Lin, Andrea Madotto, and Pascale Fung.
2020. Exploring versatile generative language model via parameter-efficient transfer learning. arXiv preprint arXiv:2004.03829.
Hanxiao Liu, Karen Simonyan, and Yiming Yang. 2018.
DARTS: Differentiable architecture search. *arXiv* preprint arXiv:1806.09055.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A robustly optimized BERT pretraining approach. *arXiv preprint arXiv:1907.11692*.
Rabeeh Karimi Mahabadi, James Henderson, and Sebastian Ruder. 2021. Compacter: Efficient lowrank hypercomplex adapter layers. arXiv preprint arXiv:2106.04647.
Yuning Mao, Lambert Mathias, Rui Hou, Amjad Almahairi, Hao Ma, Jiawei Han, Wen-tau Yih, and Madian Khabsa. 2021. UniPELT: A unified framework for parameter-efficient language model tuning. *arXiv* preprint arXiv:2110.07577.
Amil Merchant, Elahe Rahimtoroghi, Ellie Pavlick, and Ian Tenney. 2020. What happens to BERT
embeddings during fine-tuning? arXiv preprint arXiv:2004.14448.
Paul Michel, Omer Levy, and Graham Neubig. 2019.
Are sixteen heads really better than one? Advances in neural information processing systems, 32.
Pavlo Molchanov, Stephen Tyree, Tero Karras, Timo Aila, and Jan Kautz. 2016. Pruning convolutional neural networks for resource efficient inference.
arXiv preprint arXiv:1611.06440.
Marius Mosbach, Maksym Andriushchenko, and Dietrich Klakow. 2020. On the stability of fine-tuning BERT: Misconceptions, explanations, and strong baselines. *arXiv preprint arXiv:2006.04884*.
Matthew E Peters, Sebastian Ruder, and Noah A Smith.
2019. To tune or not to tune? Adapting pretrained representations to diverse tasks. *arXiv preprint* arXiv:1903.05987.
Jonas Pfeiffer, Aishwarya Kamath, Andreas Rücklé, Kyunghyun Cho, and Iryna Gurevych. 2020a.
AdapterFusion: Non-destructive task composition for transfer learning. *arXiv preprint arXiv:2005.00247*.
Jonas Pfeiffer, Andreas Rücklé, Clifton Poth, Aishwarya Kamath, Ivan Vulic, Sebastian Ruder, Kyunghyun ´ Cho, and Iryna Gurevych. 2020b. AdapterHub: A
framework for adapting transformers. arXiv preprint arXiv:2007.07779.
Jonas Pfeiffer, Ivan Vulic, Iryna Gurevych, and Sebas- ´
tian Ruder. 2020c. MAD-X: An adapter-based framework for multi-task cross-lingual transfer. arXiv preprint arXiv:2005.00052.
Sylvestre-Alvise Rebuffi, Hakan Bilen, and Andrea Vedaldi. 2017. Learning multiple visual domains with residual adapters. *Advances in neural information processing systems*, 30.
Hassan Sajjad, Fahim Dalvi, Nadir Durrani, and Preslav Nakov. 2020. Poor man's BERT: Smaller and faster transformer models. arXiv preprint arXiv:2004.03844.
Victor Sanh, Lysandre Debut, Julien Chaumond, and Thomas Wolf. 2019. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108.
Taylor Shin, Yasaman Razeghi, Robert L Logan IV,
Eric Wallace, and Sameer Singh. 2020. AutoPrompt:
Eliciting knowledge from language models with automatically generated prompts. arXiv preprint arXiv:2010.15980.
David So, Quoc Le, and Chen Liang. 2019. The evolved transformer. In *International Conference on Machine* Learning, pages 5877–5886. PMLR.
Zhiqing Sun, Hongkun Yu, Xiaodan Song, Renjie Liu, Yiming Yang, and Denny Zhou. 2020. MobileBERT:
a compact task-agnostic bert for resource-limited devices. *arXiv preprint arXiv:2004.02984*.
Elena Voita, David Talbot, Fedor Moiseev, Rico Sennrich, and Ivan Titov. 2019. Analyzing multihead self-attention: Specialized heads do the heavy lifting, the rest can be pruned. arXiv preprint arXiv:1905.09418.
Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R Bowman. 2018.
GLUE: A multi-task benchmark and analysis platform for natural language understanding. *arXiv* preprint arXiv:1804.07461.
Elad Ben Zaken, Shauli Ravfogel, and Yoav Goldberg. 2021. BitFit: Simple parameter-efficient fine-tuning for transformer-based masked languagemodels. *arXiv preprint arXiv:2106.10199*.
## B Additional Experimental Results A Experiment Setup
best when fine-tuning a small number of parameters. We use different peak learning rates for different experiments depending on the maximum number of parameters being fine-tuned, ranging from 10−5for full fine-tuning to 3 × 10−4for training our smallest PETs. We also train for a different number of epochs for each GLUE tasks. We train for 20 epochs on MRPC, RTE, CoLA, and STSB; 5 epochs on SST-2 and QNLI; and 2 epochs for MNLI and QQP. We observe that extending the number of training epochs beyond these limits does not substantially affect validation performance. In all experiments, we use batch size 16 and maximum sequence length 128.
We report results for our learned structured and unstructured bias-tuning architecture with parameter budgets 10k, 25k, 50k, 100k, and 200k in Table 2. We observe that unstructured bias-tuning holds an advantage over structured bias-tuning across all parameter budgets. We also observe that the performance of unstructured bias-tuning begins to fall off after decreasing the parameter budget below 50k.
WARP with a parameter budget of 11k significantly outperforms our U-BitFit method with a parameter budget of 10k on the MRPC and COLA tasks.
This difference might be explained by the difference in experimental setup (e.g., Hambardzumyan et al. (2021) reports peak validation score whereas we report end-of-training validation score), or the small difference in parameter budget. We believe that our method can be improved in the very small parameter budget regime using iterative, rather than one-shot, pruning.
In all experiments we use the Adam optimizer
(Kingma and Ba, 2014) and a linear learning rate scheduler with 6% warm-up steps. We observe that training with a higher peak learning rate works
Method #params MNLI SST-2 MRPC CoLA QNLI QQP RTE STS-B Avg.
WARP† 11k 87.6 93.0 83.8 72.9 95.4 85.6 57.4 81.0 82.1
WARP† 25k 88.2 96.0 90.8 60.6 93.5 84.5 75.8 88.6 84.8
S-BitFit 10k 70.1 92.1 70.6 0.0 73.1 73.3 52.7 22.2 56.8
S-BitFit 25k 84.1 94.2 70.6 40.2 88.9 83.8 56.0 76.8 74.3 S-BitFit 50k 87.1 94.3 72.1 51.5 91.4 86.2 59.6 86.9 78.6
S-BitFit 100k 88.2 95.0 87.7 58.8 92.4 87.4 78.7 90.4 84.8 S-BitFit 200k 89.1 95.6 88.2 63.1 93.8 87.9 81.9 91.4 86.4
U-BitFit 10k 87.4 95.1 71.1 58.8 92.2 86.3 59.6 88.3 79.8 U-BitFit 25k 88.8 95.5 85.3 62.1 93.5 87.7 74.0 90.3 84.6
U-BitFit 50k 89.1 95.8 88.5 64.8 93.8 88.0 80.9 91.1 86.5 U-BitFit 100k 89.3 95.8 88.5 63.6 93.9 87.7 81.9 91.3 86.5
U-BitFit 200k 89.4 95.6 88.5 64.8 93.9 86.5 81.9 91.4 86.5
Table 2: GLUE development set score for structured (S-BitFit) and unstructured (U-BitFit) bias-tuning architectures learned by our method for different parameter budgets. The results for WARP†are reported from Hambardzumyan et al. (2021).
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
✓ A1. Did you describe the limitations of your work?
This is discussed in the section titled "Limitations" after section 5.
✓ A2. Did you discuss any potential risks of your work?
We provide an Ethics Statement after section 5.
✓ A3. Do the abstract and introduction summarize the paper's main claims?
The abstract is presented before section 1.
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✓ **Did You Use Or Create Scientific Artifacts?**
We used the GLUE datasets in our experiments discussed in section 4.
✓ B1. Did you cite the creators of artifacts you used?
We provide the citation for GLUE on line 227 in section 4.
✗ B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
We did not discuss license for GLUE due to space constraints.
✗ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
We did not discuss the intended use for GLUE as we properly use GLUE for its intended use and because GLUE is a widely known dataset.
✗ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
We did not discuss whether GLUE contains any non-anonymized or offensive data because GLUE is a widely known dataset.
✗ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
We did not provide documentation for GLUE because GLUE is a widely known dataset.
✗ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
We did not present train/test/dev split counts for GLUE because of the tight space constraint and because we used the default train/test/dev split for each GLUE task.
The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance.
## C ✓ **Did You Run Computational Experiments?** Section 4
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
We reported the number of parameters used, but not the computational budget or the computing infrastructure.
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
We discuss experimental setup in the appendix.
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
At the beginning of section 4, we specify that we report the median of 5 runs.
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
We specify the hugging face model that we use and specific modules within that model in section 4.
## D ✗ **Did You Use Human Annotators (E.G., Crowdworkers) Or Research With Human Participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
Not applicable. Left blank.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)?
Not applicable. Left blank.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
Not applicable. Left blank.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
Not applicable. Left blank.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
Not applicable. Left blank. |
dibia-etal-2023-aligning | Aligning Offline Metrics and Human Judgments of Value for Code Generation Models | https://aclanthology.org/2023.findings-acl.540 | Large language models have demonstrated great potential to assist programmers in generating code. For such human-AI pair programming scenarios, we empirically demonstrate that while generated code are most often evaluated in terms of their functional correctness (i.e., whether generations pass available unit tests), correctness does not fully capture (e.g., may underestimate) the productivity gains these models may provide. Through a user study with N=49 experienced programmers, we show that while correctness captures high-value generations, programmers still rate code that fails unit tests as valuable if it reduces the overall effort needed to complete a coding task. Finally, we propose a hybrid metric that combines functional correctness and syntactic similarity and show that it achieves a 14{\%} stronger correlation with value and can therefore better represent real-world gains when evaluating and comparing models. | # Aligning Offline Metrics And Human Judgments Of Value For Code Generation Models
Victor Dibia1, Adam Fourney1**, Gagan Bansal**1, Forough Poursabzi-Sangdeh1, Han Liu2**, Saleema Amershi**1 1Microsoft Research, Redmond, United States
{victordibia, adam.fourney, gaganbansal, fpoursabzi, samershi}@microsoft.com, [email protected] 2University of Chicago, Chicago, United States
## Abstract
Large language models have demonstrated great potential to assist programmers in generating code. For such human-AI pair programming scenarios, we empirically demonstrate that while generated code are most often evaluated in terms of their *functional correctness*
(i.e., whether generations pass available unit tests), correctness does not fully capture (e.g.,
may underestimate) the *productivity* gains these models may provide. Through a user study with N = 49 experienced programmers, we show that while correctness captures high-value generations, programmers still rate code that fails unit tests as valuable if it reduces the overall effort needed to complete a coding task. Finally, we propose a hybrid metric that combines functional correctness and syntactic similarity and show that it achieves a 14% stronger correlation with value and can therefore better represent real-world gains when evaluating and comparing models.
## 1 Introduction
Large language models trained on code (e.g.,
Codex (Chen et al., 2021), AlphaCode (Li et al., 2022), CodeGen (Nijkamp et al., 2022), InCoder
(Fried et al., 2022)) have shown impressive capabilities on code generation tasks. One important application for such models is *Human-AI pair programming*, where a model suggests in-line code completions (e.g., within an IDE) that programmers can choose to ignore, accept, or edit as needed.
Early studies suggest that this paradigm may dramatically boost productivity and transform the practice of software development (Ziegler et al., 2022; Kalliamvakou, 2022).
As is common with model development more generally, code-generation advances are largely driven by comparing model performance on offline metrics (i.e., metrics computed automatically over held out evaluation data) that can be easily tracked on leaderboards. *Functional correctness* metrics
![0_image_0.png](0_image_0.png)
such as *pass@k* (Chen et al., 2021) currently represent the state-of-best-practice (Chen et al., 2021; Fried et al., 2022; Austin et al., 2021; Chowdhery et al., 2022; Nijkamp et al., 2022; Hendrycks et al.,
2021; Kulal et al., 2019). These metrics evaluate generations by executing a set of unit tests and assessing whether the generations pass or fail. While functional correctness is clearly important, it does not fully capture the productivity gains programmers may value about code generation assistance.
For example, a generation that fails unit tests might yet provide critical hints to solve a task (see example in Fig 1), or serves as boilerplate that can be adapted with minimal effort. Likewise, functionally correct code might be difficult to read or maintain, or may contain other vulnerabilities.
With developer productivity in mind (Forsgren et al., 2021), we investigate syntactic similaritybased offline performance metrics (e.g., (Svyatkovskiy et al., 2020; Chowdhery et al., 2022; Papineni et al., 2002)) as proxies of programmer effort needed to modify or correct automatic code generations. Similarity-based metrics compute how similar a generation is to reference or ground truth code, typically available in the offline setting. We then conducted a user study with N=49 experienced programmers to assess how well self-reported utility
(Forsgren et al., 2021) correlates with similaritybased and functional correctness metrics. Our work answers the following key research questions:
1. Do programmers still value code generations that may be incorrect (fail unit tests)?
2. How well do existing offline performance metrics align with programmer-rated value, accuracy and effort?
3. Does a metric that captures both functional correctness and effort saved better align with programmers' perceived value?
In our studies, we showed participants code generated by AI models and asked them to provide ratings in terms of the accuracy of the code, overall value of the code and effort associated with fixing the code (if any). We find that while ratings on effort and accuracy both correlate with value, effort is significantly more correlated. In other words, code that is perceived as easy-to-fix is judged to be more valuable. Conversely, when considering offline metrics, we find that while functional correctness metrics are more correlated to value compared to similarity based metrics, similarity based metrics offer complementary information. Specifically, we find 42% of generations that failed unit tests were still rated as valuable - and similarity based metrics provide a better signal as to value in this regime.
We therefore propose a metric that combines functional correctness and similarity and show that it increases correlation with perceived value by 14%.
## 2 Related Work
Offline performance evaluation of AI models typically consists of running models as isolated components over benchmark datasets and then computing aggregate *metrics* (e.g., accuracy, AUC, and precision/recall) that can be easily compared and tracked on leaderboards. While these evaluation practices have led to rapid advancements in AI
by enabling efficient apples-to-apples model comparison, a growing body of work has raised concerns about the mismatch between popular metrics and what people need and value in the real world
(Thomas and Uminsky, 2022; Raji et al., 2022; Hellendoorn et al., 2019; Hand, 2006; Jacobs and Wallach, 2021; Chandar et al., 2020; Zhou et al.,
2022). Using metrics that fail to appropriately capture what people value can result in deploying models that are at best less effective than they could be, and at worst harmful to people and society (Thomas and Uminsky, 2022; Raji et al., 2022; Hand, 2006).
In this work, we investigate the extent to which common offline code generation metrics capture what professional programmers value about code generation models. In particular, we examine how well existing code generation metrics capture notions of developer effort and productivity Forsgren et al. (2021).
The current most popular family of code generation metrics is based on measuring functional correctness. Functional correctness metrics seek to evaluate generated code against known objective properties such as passing unit tests (Chen et al.,
2021; Austin et al., 2021; Li et al., 2022; Roziere et al., 2020). Following the release of Codex and the HumanEval dataset (Chen et al., 2021)—which is a dataset of 164 hand-written problems in python with associated unit tests—the functional correctness metric of *pass*@k (where k code samples are generated per problem and a problem is considered solved if any of the k generations passes the corresponding unit tests) has emerged as the dominant method for evaluating code generation models
(e.g., (Fried et al., 2022; Xu et al., 2022; Li et al.,
2022; Austin et al., 2021)). Advocates of functional correctness metrics argue for their resemblance to programming best practices (e.g, test-driven development) and fidelity to capturing functional behaviour (Chen et al., 2021). However, in this work we demonstrate that functional correctness does not fully capture what people value about code generation models.
Similarity-based metrics compare tokens from generated code to tokens of known solutions, with code that is more similar to given solution(s) being considered better. Multiple similarity-based metrics have been proposed for evaluating code generation models including exact match (Lu et al., 2021),
edit distance (Svyatkovskiy et al., 2020; Chowdhery et al., 2022), BLEU (Papineni et al., 2002),
CodeBLEU (Ren et al., 2020), and ROGUE (Lin, 2004). Analyses of similarity-based metrics and other measures of code quality have been mixed
(e.g., (Ren et al., 2020) vs Austin et al. (2021)).
However, in most of these cases, similarity was considered a proxy for functional correctness. In this work, we revisit similarity-based metrics as proxies for effort saved in coding tasks Svyatkovskiy et al.
(2020) and demonstrate how they can be used to better capture value.
In this work we focus on *pass*@k as a proxy for functional correctness and we experiment with two similarity-based metrics, namely, normalized edit similarity (Lu et al., 2021; Svyatkovskiy et al.,
2020; Chowdhery et al., 2022) (which measures how many single-character edits— including insertion, substitution, or deletion—are required to convert generated code to some reference code) and BLEU (which measures the token overlap between the generated and reference text) to investigate how these metrics approximate different facets of what programmers value in practice.
## 3 User Study
We designed a user study to evaluate how well functional correctness- and similarity-based offline metrics approximate value of code generations for programmers. The study showed experienced programmers various programming tasks, together with code generations and reference solutions. Programmers then rated the generations on perceived accuracy, effort, and value.
## 3.1 Dataset For Programming Tasks
We selected programming tasks from the HumanEval dataset (Chen et al., 2021), which consists of 164 hand-crafted programming tasks and solutions written in Python. Each task includes a task description (i.e., a function header followed by a comment describing the task with some sample test cases (115 - 1360 characters)), a canonical handwritten solution, and a set of associated unit tests.
HumanEval has been extensively used to evaluate code generation systems (e.g., (Chen et al., 2021; Fried et al., 2022; Xu et al., 2022; Chowdhery et al.,
2022; Nijkamp et al., 2022)). To the best of our knowledge, HumanEval is not part of any model's training data, and its simple standalone tasks makes it an ideal choice for user studies.
## 3.2 Offline Metrics For Code Generation
We experimented with three offline metrics, one of which served as a proxy for functional correctness and the other two served as a proxy for a programmer's effort.
PASS: As a proxy for functional correctness, we computed the *pass*@k metric (Chen et al., 2021).
pass@k takes k generations for a problem and considers the problem solved if any generation passes the accompanying unit tests (in our case the unit tests provided in the HumanEval dataset). While related work has presented *pass*@k results for values of k including 1, 10, and even up to 1M (Chen et al., 2021; Li et al., 2022), we focus our analysis on k = 1 which most closely resembles the realworld scenario where a programmer sees a single generation inline within a coding editor.
EDIT-SIM: As one proxy for effort, we computed normalized edit similarity (Svyatkovskiy et al., 2020) as follows:
$$\mathrm{{EDT-Sim}}=1-{\frac{l e v(g e n,r e f)}{m a x(l e n(g e n),l e n(r e f))}}$$
where gen is code generated by a model for a problem in the HumanEval dataset, ref is the handwritten reference solution to the problem and lev is the character Levenshtein edit distance.
BLEU: As another proxy for effort, we computed BLEU using the formulation introduced by Papineni et al. (2002) (generated code compared with a single reference), and based on the implementation in the Tensorflow library (Abadi et al.,
2015).
We focused on syntactic similarity-based metrics like EDIT-SIM Lu et al. (2021); Svyatkovskiy et al.
(2020); Chowdhery et al. (2022) and BLEU Barone and Sennrich (2017); Karaivanov et al. (2014);
Nguyen et al. (2013); Ahmad et al. (2021); Wang et al. (2021) because they have been commonlyused metrics for evaluating text-based generative models, especially, for code generation scenarios.
## 3.3 Code Generation Models
We selected 5 publicly available autoregressive large language models trained on code, varied mostly by the parameter size of each model. The first two models are variants of the CodeGen model introduced by Nijkamp et al. (2022) (CodeGen350 Multi, CodeGen2B Multi) - autoregressive transformers with the regular next-token prediction language modeling as the learning objective trained on a natural language corpus and programming language (C, C++, Go, Java, JavaScript, and Python) data curated from GitHub. Next, we use three publicly available variants of the Codex model (Chen et al., 2021), a GPT language model fine-tuned on publicly available code from GitHub (Cushman, Davinci1, Davinci2). Note that the goal of this work is to compare code-generation *metrics* and not to assess the performance of models. We used models of different sizes to help ensure our findings on how metrics behave translate across a range of model qualities. Following guidance from Chen et al. (2021) who demonstrate the importance of optimizing sampling temperature for particular values of k, we used a low temperature value of t = 0.2 for k = 1 so that each model generates the most likely code tokens.
## 3.4 Tasks
We used programming tasks from the HumanEval dataset, where for each task, participants were shown the task description (function header and docstring describing the task along with sample test cases), the corresponding unit tests, and two code snippets - the reference solution from the HumanEval dataset and a generation for that task from one of the models - shown in a random order. Each snippet was randomly assigned a name -
Code Snippet A or *Code Snippet B* for easy reference in the subsequent questions. All parts of the interface showing code were syntax highlighted to improve readability.
For each task, participants answered questions designed to collect their judgements along three dimensions of interest: overall value, accuracy, and effort which we hypothesized would impact value. Each question used 5-point Likert scales and were shown sequentially only after the previous question had been answered. The questions were as follows:
A**CCURACY**: The first question asked participants to judge whether both snippets were functionally equivalent. Since the reference solution is correct, functional equivalence can be used to infer perceived accuracy of a generation (complete equivalence indicates the participant believes the generation would produce the same outputs for all the same inputs as the reference solution which passes the provided unit tests). We used this equivalence question-based approach to assess perceived accuracy because our pilots suggested that judging equivalence is easier than solving the coding task from scratch, and also because it enabled us to design a simpler, consistent survey - the other two survey questions (as described next) also compared the generation to the reference.
At this point in the task, participants were not told which snippet corresponded to the generation and which was written by a human programmer to minimize the impact of any existing biases about the capabilities of AI models.
V**ALUE**: Once participants advanced to the second question, the interface disclosed which snippet (A or B) was AI generated and which was a reference solution. They were then asked how useful the generated snippet would be assuming they were a programmer attempting to solve the task themselves. We described usefulness in terms of whether participants believed the generation provided a useful starting point, ranging from Extremely useful (they "would definitely accept it and use it as a starting point") to Not at all useful
(they "would not even want to see the generation" let alone accept and use it as a starting point).
E**FFORT**: The final question asked participants how much effort they believed it would require them to modify the AI generated solution into a correct solution similar to the snippet written by a human programmer, if any.
## 3.5 Study Protocol And Participants
The study consisted of four sections: consent form, instructions, main study, and a brief post-study feedback section. The instructions section was a sample task designed to familiarize participants with the mechanics of the study interface (e.g., they will be shown problems and asked to provide ratings, they will not be allowed to go back and revise previous responses) and to anchor them to pair programming scenario.
The main study was made up of 12 tasks. We chose 12 because our pilot studies showed that participants could complete 12 tasks within an hour. For each task, participants were shown a generation from one randomly chosen model from our set of 5 models.
A key goal of our study was to assess how well our offline metrics of interest align with what programmers value. We were particularly interested in understanding the tradeoffs between functional correctness and similarity as they relate to value and so we wanted to probe cases where these metrics disagreed. Therefore, to select study tasks, we first considered taking a random sample from HumanEval. However, the number of generations falling into regions where these metrics agreed on the largest model (Davinci2) was over-represented compared to the disagreement region (70% agreement vs 30% disagreement). Therefore, we chose a stratified sampling method where we first assigned each HumanEval problem into one of three buckets:
PASS = 1 and EDIT-SIM is low, PASS = 0 and EDITSIM is high, PASS and EDIT-SIM agree .1 Then, 1According to Davinci2 and where similarity was thresholded along the median similarity value for that model.
we sampled equally across each bucket aiming to annotate 20 problems per bucket for this study.
Because we intended to recruit professional programmers, we aimed to obtain up to 2 annotations per problem-model pair. With 60 problems (20 per bucket), 5 models, and 2 annotations per task and a budget of 12 problems per participant, this required us to recruit 50 participants for this study. We assigned annotation tasks to participants by randomly sampling a problem from our sample of 60 and then randomly sampling a generation for that problem, without repeating a problem for any participant, until each problem-model pair was assigned 2 annotations.
We recruited professional programmers from a large technology company for this study and recruitment emails were sent out to a randomly sampled subset of software engineers. Participants were required to have at least 1-2 years of programming experience with Python and to program in Python at least a few times a year. 61% of respondents indicated they had worked on a python project in the last month and 59% had never used a pair programming AI assistant like GitHub Copilot.
The study was deployed as a web application.
Participants were given five days to complete the study, and could pause and resume using their personalized study link. At the end of the study, participants were given a $50 online gift card. As an additional incentive, we awarded the top 5 performers an additional $50 gift card. We determined top performers based on the rate at which participants correctly indicated a generation was equivalent to the reference code when it passed vs when it failed the given unit tests. This experiment was approved by our organization's internal IRB process.
## 4 Study Results
At the end of the study period, we obtained responses from 49 participants. We then applied the following criteria to evaluate the quality of responses: First, we computed the median response time per task for all participants and also computed a performance rating on the code equivalence task in the same way we determined top performers in our study. Data from three participants who fell within the bottom 10th percentile of the median task completion times and their performance was worse than the probability of random chance
(given the questions they responded to) was excluded from the data analysis. The final dataset
![4_image_0.png](4_image_0.png)
includes data from 46 participants with 552 annotations across 290 unique tasks and 1.96 annotation per task. Finally, across tasks where we obtained multiple annotations, we verified that there was agreement between annotators2and then computed the mean annotation per task for use in our analysis. In this section, we present the main findings based on this data.
## 4.1 Accuracy Is Valuable, But Effort Matters
Our first finding is that the VALUE of a generation is nearly perfectly correlated with the perceived E**FFORT** needed to correct a generation (Pearson r = 0.94; 95%-confidence interval [0.92 − 0.95]).
Recall that E**FFORT** is reverse-coded such that a score of 5 indicates "no effort" is needed. ACCU-**RACY** is also highly correlated (Pearson r = 0.87; 95%-confidence interval [0.84 − 0.90]), but significantly less so - we note that their confidence intervals do not overlap. **From this we conclude**
that ACCURACY isn't everything, and E**FFORT**
is at least as important a signal for capturing V**ALUE**. We will return to this theme throughout the paper. Correlations between these dimensions are presented in the top-left quadrant of Figure 2.
## 4.2 Offline Metrics Highly Correlate With Programmers' Judgements, But There Is Room For Improvement
Our second finding confirms **that the metrics used**
in practice (PASS, EDIT-SIM, and BLEU) are indeed positively correlated with V**ALUE**, but there are important differences (Fig. 2, bottom-left quadrant). As an example, PASS shows the strongest 2In 50% of cases, annotators are in perfect agreement; 75%
differ by at most one point in valence (on a rating scale of 1-5)
and the mean difference is 0.89.
![5_image_1.png](5_image_1.png)
association with A**CCURACY** of the three metrics
(r = 0.66; p < 0.001). This is unsurprising, given that PASS is a direct measure of functional correctness. More surprising to us, however, is that PASS
is also the most strongly correlated metric to both E**FFORT** and VALUE (r = 0.62; p < 0.001, and r = 0.62; p < 0.001 respectively). This was unexpected since EDIT-SIM is a direct measure of the number of changes needed to correct a suggestion, and yet shows weaker correlation to EF-**FORT** (r = 0.48; p < 0.001). With a correlation of r = 0.36; p < 0.001, BLEU **under-performs**
all other metrics. Finally, given that none of the metrics correlate better than r = 0.66, there is significant opportunity to develop improved metrics.
## 4.3 Code That Passes Unit Tests (Pass = 1**) Is** Extremely Likely To Be High-Value
Our third finding is that when PASS = 1 **(i.e.,**
when generations pass unit tests), we can be reasonably certain that they will be of high V**ALUE**
(Figure 3). In fact, only 2 of 77 (3%) generations that passed unit tests were found to have a VALUE
scores less than 3. Recall, a VALUE score of 3 indicates that the participant found a suggestion to be at least "somewhat useful."
However, PASS = 0 is less good at filtering low-value suggestions; Only 123 of the 213 (58%)
generations that failed unit tests scored lower than
![5_image_0.png](5_image_0.png)
3 on value. Stated differently, 90 generations
(42%) were found to be at least somewhat valuable.
This finding confirms existing qualitative work that while programmers value functionally correct code, they may still find considerable value in code that is not functionally correct (Weisz et al., 2021).
## 4.4 Improved Metrics Through Combination
Upon further inspection, we realized that EDITSIM was itself a useful lens through which to understand how VALUE is distributed when unit tests are failed. Figure 4 shows a partitioning of results such that left and right columns correspond to (PASS = 0) and (PASS = 1) respectively. Top and bottom rows correspond to cases where the EDIT-SIM is below and above the 50th-percentile respectively (referred to as EDIT-SIM =Low and EDIT-SIM =High). As before, we find that when
(PASS = 1), the VALUE tends to be high (blue outlined regions). However, we also find that **when a**
generation both fails the unit test and has low EDIT-SIM (i.e., PASS = 0; EDIT-SIM = low**), it**
tends to be judged to have low V**ALUE** (red outlined region). Conversely, in the final region (PASS
= 0; EDIT-SIM = *high*), VALUE is distributed more uniformly, and the signal is less clear. This strongly suggests that if human labeling is limited by budget, it may be worthwhile oversampling this region to maximally recover some of the missing VALUE signal.
This also suggests that there is an opportunity to combine metrics because PASS = 1 is good at spotting high-value generations, while PASS = 0; EDIT-SIM = *high* is good at spotting low-value generations. To investigate this further, we formally define a simple combined metric as follows:
## C**Ombined** = Min(1.0, Pass + Edit-Sim)
Figure 2, row 7, shows some merit to this approach:
The combined metric correlates better with human judgments of value (r = 0.70; p < 0.001)
than PASS (r = 0.61; p < 0.001) and EDIT-SIM
(r = 0.48; p < 0.001) for EDIT-SIM. This is an extremely promising result, but was also only our first attempt at combining metrics. Future work is needed to explore other potential combinations.
## 5 Discussion & Future Work 5.1 What Do Programmers Value?
Much of the current research evaluating code generation models aims to approximate overall value via some notion of correctness (Chen et al., 2021; Fried et al., 2022; Austin et al., 2021; Chowdhery et al., 2022; Nijkamp et al., 2022; Hendrycks et al., 2021; Kulal et al., 2019). Even research exploring similarity-based metrics have tended to validate these against some general notion of code quality (e.g., Mathur et al. (2020) consider "adequacy" while Ren et al. (2020) consider "good vs bad").
In this work, we aim to tease out distinct aspects of value to better understand how they contribute what programmers want from their AI-pair programmers. In this study, we examine the impact of correctness and effort. Our findings show that effort indeed matters to programmers. Accuracy also matters but, interestingly, our findings suggest that effort savings may be even more valuable to programmers than accuracy.
In general, we take the position that value is a multidimensional theoretical construct (Thomas and Uminsky, 2022). As such, while our findings showed effort as more valuable to programmers than accuracy, because both are still highly correlated with value, we recommend considering both when assessing the impact of human-AI pair programming. Moreover, there are likely many other properties of AI-pair programmers that developers find valuable (Forsgren et al., 2021) and future work warrants investigating how these may also be captured in offline evaluations.
## 5.2 How Can Developers Approximate Value?
Our results show that when developers have access to evaluation data containing high quality unit tests
(as in HumanEval), generations that pass unit tests are highly likely to be valuable to programmers.
This suggests that PASS could be used as a reasonable filter in high precision scenarios (e.g., if an AI-pair programmer was tuned to limit distractions by only showing generations when they most likely to be valuable).
That said, however, PASS alone may miss a significant fraction of generations that programmers might find valuable. Our findings show that another offline metric - EDIT-SIM can help overcome this issue when we combine it with PASS according to Equation 4.4. This new metric is similar in spirit to hinge-loss in support vector machines.3In that setting, misclassifications are penalized based on their distance to the hyperplane decision boundary. Conversely, correct classifications all receive a fixed loss of 0, following the intuition that they don't become *more correct* the further they move from the hyperplane. In our setting, we expect VALUE
to increase as generations become more similar to a reference solution, but once it reaches functional correctness it doesn't become *more correct* the closer it gets (syntactically) to the reference solution.
We emphasize, however, that metrics can have varying implications on model development decisions and therefore the choice of when or if to combine them is important. For example, when developers are seeking to make deployment decisions between models, selecting models that rank highest in terms of the overall value they may provide to programmers seems reasonable. In this case, the theoretical construct being approximated is perceived VALUE and our C**OMBINED** metric is better at estimating this than PASS or EDIT-SIM
alone. However, when developers are diagnosing issues during model development (e.g., via error analyses) *we recommend that* PASS and EDIT-SIM
be applied independently to get a clearer picture of model behavior (Thomas and Uminsky, 2022) and to ensure appropriate mitigation strategies are used for different issues. For example, PASS failing on certain types of problems (e.g., recursive problems) or code blocks (e.g., conditional statements, error handling) may suggest additional data is needed in fine tuning. Whereas, EDIT-SIM failures may warrant new user interface techniques to help programmers focus attention to parts of the code most likely needing edits.
## 5.3 Approximating Accuracy And Effort
Our results show that programmers value both accuracy and effort savings when it comes to their AI pair programmers. We demonstrate that PASS
is a reasonable proxy for accuracy. Surprisingly, however, we found that EDIT-SIM is only moderately correlated with effort and in fact is less correlated with effort than PASS. This is somewhat counter-intuitive since EDIT-SIM directly measures the number of characters that need to be changed to convert a generation to the reference solution
(Svyatkovskiy et al., 2020; Lu et al., 2021).
This, along with our finding that programmers value effort reduction from their AI pairprogrammers, suggests that an important area for future work is to experiment with alternative ways to operationalize effort for offline evaluation. This also, emphasizes the importance of validating that metrics faithfully capture the theoretical constructs they are trying to measure (Jacobs and Wallach, 2021).
## 5.4 When Should Developers Use Edit-Sim?
Our findings show that EDIT-SIM is moderately correlated with PASS. This is important because there are many situations where computing PASS
may be undesirable. For example, PASS requires executing arbitrary generated code which can be resource intensive and may pose security risks (Chen et al., 2021). PASS and other functional evaluation metrics also require the availability of comprehensive, high-quality unit tests as well as languagespecific test infrastructure, assumptions which may not hold in some evaluation scenarios (e.g., testing functions in the wild). Therefore, *while we recommend* PASS when it is appropriate because it is more strongly correlated with value than EDITSIM*, our findings suggest that* EDIT-SIM *may be a* reasonable alternative when it is desirable to avoid limitations of functional evaluation.
Of course, limitations of similarity metrics should also be weighed against their benefits. For example, similarity metrics can fail when tasks have multiple syntactically divergent solutions -
e.g. an algorithm may have an iterative vs recursive implementation with low token overlap, leading to noisy similarity metric. However, we intuit that this scenario is relatively infrequent given the structured nature of programming languages and existing research on developer behaviour e.g.,
Allamanis et al. (2018) who mention that developers prefer to write (Allamanis et al., *2014) and* read code (Hellendoorn et al., 2015) that is conventional, idiomatic, and familiar, because it helps in understanding and maintaining software systems.
A convergence towards idiomatic solutions make it more likely the solutions and patterns learned by large language models of code coincide with ground truth solutions, limiting the scenario where generated code is syntactically different from but functionally equivalent to ground truth.
## 6 Conclusion
We studied how well two types of offline metrics for evaluating code generation models (i.e., functional correctness such as *pass*@k based on unit tests and similarity-based metrics such as edit similarity) align with human judgements of value when used for human-AI pair programming. Our user study with 49 experienced programmers suggests that while programmers find functionally correct code generations valuable, the effort to edit and adapt generations also matters. Existing offline metrics show high correlation with human judgements of value, but there is room for improvement.
One reason is that while code that passes unit tests is very likely to be rated high-value, code that fails unit tests is often still considered valuable by programmers. Based on this observation, we propose a combined offline metric inspired by hinge-loss in support vector machines that allows for partial credit by combining strengths of functional correctness and similarity-based metrics. Our analysis shows that this combined metric aligns better with human judgements of value in code generations than functional correctness or similarity alone.
Overall our work highlights the importance of validating that offline metrics in AI capture what people value and that human-centered metrics, inspired by what people value, can provide better estimates of what people want from their AI-pair programmers.
## Limitations
In this work, we focused on problems posed in the hand-crafted HumanEval dataset (Chen et al.,
2021). A potential pitfall of a curated dataset such as HumanEval is that the results may not generalize to real-world scenarios where developers often deal with more complex problems and code bases
(e.g, code with multiple dependencies across multiple files). To address this limitation, we originally explored the use of datasets mined from GitHub.
However, our experiments indicated memorization issues (e.g., verbatim generation of solutions to niche problem), potentially due to the sample code already being included in the model training set(Lee et al., 2021). In practice, high quality code deduplication required to avoid this specific limitation is challenging. Work by Allamanis (2019)
find that the impact of duplicate code can be severe, sometimes inflating model performance scores by up to 100%. Furthermore, in our early pilot tests, functions extracted in the wild were found to contain insufficient context (e.g. absence of docstring)
for even expert human annotators and isolating functional tests is challenging without heavy curation. Further research is therefore needed to understand how our findings might generalize to a wider variety of deployment settings as well as research on designing diverse evaluation datasets. In addition, future work may also explore the impact of problem difficulty on the observed results in our study.
## Ethics Statement
While our study informs current practices in evaluating code generation models, we acknowledge that measures of value can differ across multiple demographics with impact on productivity. For our experiments (approved by an internal IRB board),
we generate code snippets based on a publicly available dataset, using publicly available models that are annotated by experienced developers. These choices make our work readily reproducible. We also developed a library that implements multiple metrics for bench marking code generation models which we will make available as an open source library (MIT license) at the time of publication.
## References
Martín Abadi, Ashish Agarwal, and Paul Barham et al.
2015. TensorFlow: Large-scale machine learning on heterogeneous systems. Software available from tensorflow.org.
Wasi Uddin Ahmad, Saikat Chakraborty, Baishakhi Ray, and Kai-Wei Chang. 2021. Unified pre-training for program understanding and generation. arXiv preprint arXiv:2103.06333.
Miltiadis Allamanis. 2019. The adverse effects of code duplication in machine learning models of code. In Proceedings of the 2019 ACM SIGPLAN International Symposium on New Ideas, New Paradigms, and Reflections on Programming and Software, pages 143–153.
Miltiadis Allamanis, Earl T Barr, Christian Bird, and Charles Sutton. 2014. Learning natural coding conventions. In Proceedings of the 22nd ACM SIGSOFT
International Symposium on Foundations of Software Engineering, pages 281–293.
Miltiadis Allamanis, Earl T Barr, Premkumar Devanbu, and Charles Sutton. 2018. A survey of machine learning for big code and naturalness. ACM Computing Surveys (CSUR), 51(4):1–37.
Jacob Austin, Augustus Odena, Maxwell Nye, Maarten Bosma, Henryk Michalewski, David Dohan, Ellen Jiang, Carrie Cai, Michael Terry, Quoc Le, et al. 2021.
Program synthesis with large language models. *arXiv* preprint arXiv:2108.07732, 2021.
Antonio Valerio Miceli Barone and Rico Sennrich. 2017.
A parallel corpus of python functions and documentation strings for automated code documentation and code generation. *arXiv preprint arXiv:1707.02275*.
Praveen Chandar, Fernando Diaz, and Brian St. Thomas.
2020. Beyond accuracy: Grounding evaluation metrics for human-machine learning systems. In *Advances in Neural Information Processing Systems*.
Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde de Oliveira Pinto, Jared Kaplan, Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, et al. 2021. Evaluating large language models trained on code. *arXiv preprint* arXiv:2107.03374.
Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, et al. 2022. Palm: Scaling language modeling with pathways. arXiv preprint arXiv:2204.02311.
Nicole Forsgren, Margaret-Anne Storey, Chandra Maddila, Tom Zimmermann, Brian Houck, and Jenna Butler. 2021. The space of developer productivity: There's more to it than you think. *ACM Queue*,
19(1):1–29.
Daniel Fried, Armen Aghajanyan, Jessy Lin, Sida Wang, Eric Wallace, Freda Shi, Ruiqi Zhong, Wen-tau Yih, Luke Zettlemoyer, and Mike Lewis. 2022. Incoder:
A generative model for code infilling and synthesis.
arXiv preprint arXiv:2204.05999.
David J. Hand. 2006. Classifier technology and the illusion of progress. *Statistical Science*, 21(1).
Vincent J Hellendoorn, Premkumar T Devanbu, and Alberto Bacchelli. 2015. Will they like this? evaluating code contributions with language models. In 2015 IEEE/ACM 12th Working Conference on Mining Software Repositories, pages 157–167. IEEE.
Vincent J. Hellendoorn, Sebastian Proksch, Harald C.
Gall, and Alberto Bacchelli. 2019. When code completion fails: A case study on real-world completions.
In 2019 IEEE/ACM 41st International Conference on Software Engineering (ICSE), pages 960–970.
Dan Hendrycks, Steven Basart, Saurav Kadavath, Mantas Mazeika, Akul Arora, Ethan Guo, Collin Burns, Samir Puranik, Horace He, Dawn Song, et al. 2021.
Measuring coding challenge competence with apps. arXiv preprint arXiv:2105.09938.
Abigail Z. Jacobs and Hanna Wallach. 2021. Measurement and fairness. In Proceedings of the 2021 ACM
Conference on Fairness, Accountability, and Transparency. ACM.
Eirini Kalliamvakou. 2022. Quantifying GitHub Copilot's impact on developer productivity and happiness. https://github.blog/2022-09-07-researchquantifying-github-copilots-impact-on-developerproductivity-and-happiness/.
Svetoslav Karaivanov, Veselin Raychev, and Martin Vechev. 2014. Phrase-based statistical translation of programming languages. In Proceedings of the 2014 ACM International Symposium on New Ideas, New Paradigms, and Reflections on Programming &
Software, pages 173–184.
Sumith Kulal, Panupong Pasupat, Kartik Chandra, Mina Lee, Oded Padon, Alex Aiken, and Percy S Liang.
2019. Spoc: Search-based pseudocode to code. *Advances in Neural Information Processing Systems*,
32.
Katherine Lee, Daphne Ippolito, Andrew Nystrom, Chiyuan Zhang, Douglas Eck, Chris Callison-Burch, and Nicholas Carlini. 2021. Deduplicating training data makes language models better. arXiv preprint arXiv:2107.06499.
Yujia Li, David Choi, Junyoung Chung, Nate Kushman, Julian Schrittwieser, Rémi Leblond, Tom Eccles, James Keeling, Felix Gimeno, Agustin Dal Lago, et al. 2022. Competition-level code generation with alphacode. *arXiv preprint arXiv:2203.07814*.
Chin-Yew Lin. 2004. Rouge: A package for automatic evaluation of summaries. In Text summarization branches out, pages 74–81.
Shuai Lu, Daya Guo, Shuo Ren, Junjie Huang, Alexey Svyatkovskiy, Ambrosio Blanco, Colin Clement, Dawn Drain, Daxin Jiang, Duyu Tang, et al. 2021. Codexglue: A machine learning benchmark dataset for code understanding and generation. arXiv preprint arXiv:2102.04664.
Nitika Mathur, Timothy Baldwin, and Trevor Cohn.
2020. Tangled up in BLEU: Reevaluating the evaluation of automatic machine translation evaluation metrics. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*,
pages 4984–4997, Online. Association for Computational Linguistics.
Anh Tuan Nguyen, Tung Thanh Nguyen, and Tien N
Nguyen. 2013. Lexical statistical machine translation for language migration. In Proceedings of the 2013 9th Joint Meeting on Foundations of Software Engineering, pages 651–654.
Erik Nijkamp, Bo Pang, Hiroaki Hayashi, Lifu Tu, Huan Wang, Yingbo Zhou, Silvio Savarese, and Caiming Xiong. 2022. A conversational paradigm for program synthesis. *arXiv preprint arXiv:2203.13474*.
Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th annual meeting of the Association for Computational Linguistics, pages 311–318.
Inioluwa Deborah Raji, I. Elizabeth Kumar, Aaron Horowitz, and Andrew Selbst. 2022. The fallacy of AI functionality. In *2022 ACM Conference on* Fairness, Accountability, and Transparency. ACM.
Shuo Ren, Daya Guo, Shuai Lu, Long Zhou, Shujie Liu, Duyu Tang, Neel Sundaresan, Ming Zhou, Ambrosio Blanco, and Shuai Ma. 2020. Codebleu: a method for automatic evaluation of code synthesis. *arXiv* preprint arXiv:2009.10297.
Baptiste Roziere, Marie-Anne Lachaux, Lowik Chanussot, and Guillaume Lample. 2020. Unsupervised translation of programming languages. *Advances in* Neural Information Processing Systems, 33.
Alexey Svyatkovskiy, Shao Kun Deng, Shengyu Fu, and Neel Sundaresan. 2020. Intellicode compose:
Code generation using transformer. In Proceedings of the 28th ACM Joint Meeting on European Software Engineering Conference and Symposium on the Foundations of Software Engineering, pages 1433–1443.
Rachel L. Thomas and David Uminsky. 2022. Reliance on metrics is a fundamental challenge for ai. *Patterns*,
3(5):100476.
Yue Wang, Weishi Wang, Shafiq Joty, and Steven CH
Hoi. 2021. Codet5: Identifier-aware unified pre-trained encoder-decoder models for code understanding and generation. *arXiv preprint* arXiv:2109.00859.
Justin D Weisz, Michael Muller, Stephanie Houde, John Richards, Steven I Ross, Fernando Martinez, Mayank Agarwal, and Kartik Talamadupula. 2021. Perfection not required? human-ai partnerships in code translation. In *26th International Conference on Intelligent* User Interfaces, pages 402–412.
Frank F Xu, Uri Alon, Graham Neubig, and Vincent J Hellendoorn. 2022. A systematic evaluation of large language models of code. *arXiv preprint* arXiv:2202.13169.
Kaitlyn Zhou, Su Lin Blodgett, Adam Trischler, Hal Daumé III, Kaheer Suleman, and Alexandra Olteanu.
2022. Deconstructing nlg evaluation: Evaluation practices, assumptions, and their implications. arXiv preprint arXiv:2205.06828.
Albert Ziegler, Eirini Kalliamvakou, X. Alice Li, Andrew Rice, Devon Rifkin, Shawn Simister, Ganesh Sittampalam, and Edward Aftandilian. 2022. Productivity assessment of neural code completion. In Proceedings of the 6th ACM SIGPLAN International Symposium on Machine Programming, MAPS 2022.
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
✓ A1. Did you describe the limitations of your work?
Section 7: Limitation
✓ A2. Did you discuss any potential risks of your work?
Section 8: Ethics statement.
✓ A3. Do the abstract and introduction summarize the paper's main claims?
Section 1: Introduction
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✓ **Did You Use Or Create Scientific Artifacts?**
Section 3.3. We use 5 code generation models for our experiments. Two models are open source models available on HuggingFace (CodeGen350 multi, CodeGen2B multi) and 3 models from OpenAI (Codex Cushman, Davinci001, Davinci002)
✓ B1. Did you cite the creators of artifacts you used?
Section 3.3
✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
In section 3.3. we point the reader to the source of models used in the experiment. In the ethics section we also mention we will be releasing a library (api and user interface) which we used generating code snippets used in our human evaluation study.
✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
Section 3.3. The models we use were created specifically for the task of code generation. In Section 2, we mention how the use of similarity based metrics for evaluation text generative models have been mixed, but contextualize our use of this metric as a surrogate for effort associated with fixing generated code.
✗ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
Our work explores a specific domain (code generation) that does not usually cover names or offensive content.
✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
In Section 3.3., we provide documentation on the models used. In Section 3.1, we provide details on the dataset used.
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance.
In Section 3.3., we provide documentation on the models used. In Section 3.1, we provide details on the dataset used. In Figure 2 we provide information on the correlation between metrics and report that they are statistically significant with p < 0.001. In section 4.1, our statistics on correlation between perceived value, effort and accuracy is reported with confidence intervals.
## C ✓ **Did You Run Computational Experiments?** Section 3.3
C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
No response.
✗ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
We used pretrained models without any modification. In section 3.3 we point the reader to the exact models used to enable reproducibility.
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
Correlations reported in this paper specify significance and confidence interval as needed. See Figure 2 and section 4.1
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
In section 3.2, we mention the details of our implementation of relative edit similarity and BLEU. In section 3.3 we also point the reader to details on the models used in our study.
## D ✓ **Did You Use Human Annotators (E.G., Crowdworkers) Or Research With Human Participants?** Section 3.5, 3.5
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
No response.
✓ D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)?
Section 3.5
✓ D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
Section 3.5
✓ D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
Section 3.5
✓ D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
Section 3.5 |
muradoglu-hulden-2023-transformer | Do transformer models do phonology like a linguist? | https://aclanthology.org/2023.findings-acl.541 | Neural sequence-to-sequence models have been very successful at tasks in phonology and morphology that seemingly require a capacity for intricate linguistic generalisations. In this paper, we perform a detailed breakdown of the power of such models to capture various phonological generalisations and to benefit from exposure to one phonological rule to infer the behaviour of another similar rule. We present two types of experiments, one of which establishes the efficacy of the transformer model on 29 different processes. The second experiment type follows a priming and held-out case split where our model is exposed to two (or more) phenomena; one which is used as a primer to make the model aware of a linguistic category (e.g. voiceless stops) and a second one which contains a rule with a withheld case that the model is expected to infer (e.g. word-final devoicing with a missing training example such as b→p) results show that the transformer model can successfully model all 29 phonological phenomena considered, regardless of perceived process difficulty. We also show that the model can generalise linguistic categories and structures, such as vowels and syllables, through priming processes. | # Do Transformer Models Do Phonology Like A Linguist?
Saliha Muradoglu˘ **Mans Hulden**
The Australian National University (ANU) University of Colorado ARC Centre of Excellence for the Dynamics of Language (CoEDL)
[email protected], [email protected]
## Abstract
Neural sequence-to-sequence models have been very successful at tasks in phonology and morphology that seemingly require a capacity for intricate linguistic generalisations.
In this paper, we perform a detailed breakdown of the power of such models to capture various phonological generalisations and to benefit from exposure to one phonological rule to infer the behaviour of another similar rule. We present two types of experiments, one of which establishes the efficacy of the transformer model on 29 different processes.
The second experiment type follows a priming and held-out case split where our model is exposed to two (or more) phenomena; one which is used as a primer to make the model aware of a linguistic category (e.g. voiceless stops) and a second one which contains a rule with a withheld case that the model is expected to infer (e.g. word-final devoicing with a missing training example such as b→p). Our results show that the transformer model can successfully model all 29 phonological phenomena considered, regardless of perceived process difficulty. We also show that the model can generalise linguistic categories and structures, such as vowels and syllables, through priming processes.
## 1 Introduction
In computational linguistics, neural networks have occupied much of recent work. One prime driver is adaptability to multiple facets of linguistic phenomena. As an example, sequence-to-sequence models have been shown to capture inflection patterns across numerous languages (Kodner et al., 2022).
While their performance represents significant advances, the abstractions generated during the modelling process warrant further investigation. We experiment with phonological processes on a constructed language to compare the generalisations learned by transformer models with widespread linguistic phenomena.
In particular, we address the following questions:
- Learning specific phonological processes (are some more difficult than others?)
- Categorisation (can the model generalise a category, vowels, consonants, specific consonant groups, e.g. plosives?)
- Is word structure (syllables) implicitly learned?
We establish that the transformer model successfully models all 29 phonological phenomena we consider, regardless of linguistic complexity. Our results show that the model can generalise to linguistic categories with some caveats. By examining the transformer model's generalisation of haplology, we show that the model appears to learn syllables; the model can recognise the difference between VC and CV and generate previously unseen CV sequences.
## 2 Related Work
Investigating the cognitive reality of linguistic categories defined within phonology has long been of interest to linguistics. Does the natural class of phonemes bear any significance to a cognitive reality? For example, a series of experiments (Finley and Badecker, 2009; Chambers et al., 2010; Skoruppa and Peperkamp, 2011) examine the natural class of vowels and whether phonological patterns can be extended to previously unseen vowels. The studies suggest that participants were mostly able to generalise. In a similar vein, Finley (2011) presents a study on consonant harmony. The results suggest that learners (human learners) can generalise to novel consonants when the phonological pattern is general. However, the learners failed to generalise when the rule triggering the consonant harmony pattern was highly specific.
We adapt this long-standing linguistic question to ask whether Transformer-based abstractions are 8529 linguistically informed. Our experiment setup swaps the human learner with the Transformer architecture. Previous studies investigating phonological phenomena with Transformers include Elsner
(2021), where Transformers can handle reduplication and gemination. To an extent,1the SIGMORPHON shared tasks (Kodner et al., 2022) also demonstrate the capacity of Transformers to represent phonological processes through capturing allomorphs conditioned by phonological environments.
There have been extensive studies on various phonological processes and RNNs. Haley and Wilson (2021) shows that encoder-decoder networks (specifically LSTM and GRU architectures)
can learn infixation and reduplication. Mirea and Bicknell (2019) explores whether phonological distinctive feature information is required for learning word-level phonotactic generalisations using LSTMs. The authors find that information about phonological features hinders model performance, and phonotactic patterns are learnable from the distributional characteristics of each segment alone.
Moreover, distributional information proves to be integral in recovering phonological categories
(Mayer, 2020).
Another way to investigate neural architecture abstractions is to probe the model internally. Silfverberg et al. (2021) examines whether RNN states encode phonological alternations through experiments on Finnish consonant gradation. The authors show that the models often encode consonant gradation in a select number of dimensions. Rodd (1997)
probes the hidden states of an RNN model which controls Turkish vowel harmony. Similarly, Silfverberg et al. (2018) establish a correlation between embedding representations and distinctive phonological features for Finnish, Spanish and Turkish.
This paper focuses on a model-external interrogation of Transformer generalisations by studying the predictions produced.
## 3 Language Design
The phonological phenomena in question are tested on a constructed language. The primary motivation for this is to allow for a controlled experiment and ensure that we can generate enough samples of the required phonological environments for rules to be triggered and thus observed. With this in 1This largely depends on the language considered and the phonological processes it exhibits.
| Feature | Inventory |
|-----------|-----------------------------------------|
| Vowel | {a,e,i,o,u} |
| Consonant | {p,t,k,b,d,g,Ù,Ã,f,s,S,v,m,n,N,l,r,w,j} |
| Onset | {C, Ø, CC} |
| Nucleus | {V,VV} |
| Coda | {C, Ø, CC} |
mind, we require the constructed language to be as representative as possible of natural language.
Therefore, key features were chosen based on the condition of being the most typologically common ones (Maddieson, 1984; Ladefoged and Maddieson, 1996; Maddieson, 2013). The main characteristics are listed in Table. 1.
Generating a lexicon The most complex syllable structure possible in the language is **CCVVCC**
and the simplest one is V. Since our language design aims to generate a synthetic lexicon, we also control for word length distribution. Previous works have shown that word length over word types exhibits a roughly Gaussian distribution with a mean in the range [7, 10], depending on the language (Smith, 2012). We have chosen a mean word length of 8.
An additional constraint when generating a lexicon is the sonority sequencing principle
(SSP) (Selkirk, 1984; Clements, 1990). Syllable structures tend to be highly influenced by the sonority scale, with the general rule that more sonorous elements are internal (i.e., close to the nucleus) and less sonorous elements are closer to the syllable edge. Therefore, we use a sonority metric to avoid generating implausible consonant clusters, with the onset and coda requiring opposite values on the metric, i.e. increasing sonority in the onset and decreasing in the coda.
## 4 Data2
Our data preparation follows three steps: lexicon generation, triplet (lemma, tag, surface form)
formation via the finite-state tool *foma* (Hulden, 2009) and, finally, sampling of these triplets ac-2All data and code is available at https://github.com/
smuradoglu/phon-proc cording to the experiment at hand and formatting for Fairseq.(Ott et al., 2019)
3 We train the model as a standard 'inflection' task
(Kodner et al., 2022), but with tags being identifiers of the processes that are to be triggered instead of morphosyntactic information. For example, the input sequence moupi\#GEMINATION would be paired with the output mouppi. More example triplets are shown in Table 2.
4
| Input | Tag | Output |
|---------|---------------|----------|
| ateiSa | #APOCOPE | ateiS |
| enpanka | #APHAERESIS | npanka |
| a:NÃ | #SHORTENING | aNÃ |
| vepisk | #LENGTHENING | vepi:k |
| moupi | #GEMINATION | mouppi |
| aimggi | #DEGEMINATION | aimgi |
| soute | #INTERVOCALIC | soude |
| refend | #DEVOICE | refent |
| ketedu | #METATHESIS | kedetu |
| totoN | #HAPLOLOGY | toN |
| pima | #COPY | pima |
Lexicon generation entails generating viable syllable structures and filling these abstract structures using vowel and consonant inventories. The syllables are concatenated n times, where n is an integer between 1 and 10. We sample from this uniform distribution to produce a Gaussian distribution for word length with a mean of 8 symbols.
We include a COPY tag, where the input is copied to the output, to negate any performance drop by the model when unseen lemmata are encountered
(Liu and Hulden, 2022). In other words, the model, at test time, will never encounter a completely unseen lemma on which to perform a phonological change, since it will always have witnessed at least an input-output pair of any lemma used that is simply copied to the output.
![2_image_0.png](2_image_0.png)
## 5 Modelling Common Phonological Processes With Varying Degrees Of Complexity
In this experiment, we establish that seq2seq models can successfully capture a range of phonological processes, including more complex rules such as metathesis. As seen in Figure 1, the transformer model performs reasonably well across all phonological phenomena, with little distinction between the complexity of the process considered.
## 6 Linguistic Category Generalisation
We examine whether the transformer model can generalise linguistic categories such as vowels or syllables from examples of alternations. During training, we expose the model to two phenomena at once (priming/held-out cases) of processes where the model could potentially infer relevant categories and extend this knowledge to withheld cases. The first set of experiments focuses on the generalisation of vowels, and the second centres on categorising consonants.
## 6.1 Vowel Experiments 6.1.1 Apocope/Aphaeresis
In this experiment, Aphaeresis–deleting wordinitial vowels—is the priming process and Apocope—deleting word-final vowels—is the heldout case. The training set consists of aphaeresis cases with all five vowels. In other words, lexicon beginning with **a,e,i,o,u** are included. Apocope examples exclude cases where u occurs word-finally.
The u-final words with the Apocope tag are present only at test time. Table. 3 summarizes the results.
From these results, it is clear that the model extends the Apocope rule to the unseen u-vowel. There are only 8 instances within the 10 errors where 'u' is not deleted. The remaining 2 errors are other modelling errors (such as repeating characters): outputting sou instead of the gold so with input sou.
## 6.1.2 Vowel Shortening/Lengthening
Following a similar setup to the Apocope/Aphaeresis experiment, the vowel shortening
(priming) and lengthening (withheld case u) case involves training a model with all vowel cases for shortening, and all vowels except u for the vowel lengthening process. The results show a 100%
accuracy for the previously unseen u-cases for vowel lengthening. The two errors observed are from other categories (i.e., vowel shortening and non-u lengthening).
## 6.2 Consonant Experiments 6.2.1 Gemination/Degemination
This experiment involves training a model for Degemination (priming) and Gemination (withheld case p) processes. The results show that the transformer model has successfully extended the consonant category to include the unseen p. Out of the 453 test cases, only 12 were incorrect p cases, with the remaining five non-target errors. Incorrectly predicted instances follow the pattern of outputting lup with input lup instead of the gold **lupp**.
## 6.2.2 Devoicing/Intervocalic Voicing
This experiment involves final stop Devoicing
(priming) and Intervocalic Voicing (with-held-case
| Process | Test Size | Accuracy |
|------------------------|-------------|------------|
| Aphaeresis | 995 | 0.998 |
| Apocope Overall | 1465 | 0.992 |
| Apocope 'u → 0' | 587 | 0.983 |
| Vowel Shortening | 995 | 0.999 |
| Lengthening Overall | 1071 | 0.999 |
| Lengthening 'u s→ u :' | 95 | 1.000 |
| Degemination | 995 | 0.992 |
| Gemination Overall | 1357 | 0.987 |
| Gemination 'p → p p' | 453 | 0.974 |
| Devoicing | 995 | 1.000 |
| Intervocalic Overall | 1196 | 0.952 |
| Intervocalic 'p→ b' | 250 | 0.776 |
Table 3: Linguistic Categories Experiment. AA, SL,
GD and DI overviews refer to Apocope / Aphaeresis, Shortening / Lengthening, Gemination / Degemination and Devoicing / Intervocalic voicing. The last line refers to the withheld case; e.g. Apocope of u.
p). The training set is comprised of all word-final devoicing cases (b>p,d>t,g>k) and all intervocalic cases except the p case (where p>b).
| Process | Test Size | Accuracy |
|----------------------|-------------|------------|
| W-Initial voicing | 995 | 1.000 |
| Intervocalic Overall | 1196 | 0.8746 |
| Intervocalic 'p→ b' | 250 | 0.4000 |
| W-Initial devoicing | 995 | 1.000 |
| Intervocalic Overall | 1196 | 0.9473 |
| Intervocalic 'p→ b' | 250 | 0.7480 |
Table 4: Word initial (de)voicing and intervocalic voicing Experiment. The last line refers to the withheld case; i.e. Intervocalic voicing of p.
The results show that p is transformed to a b 77.6% of the instances. Where the conversion does not take place, errors typically follow the pattern of, e.g. outputting **epei**Se instead of **ebei**Se with the input **epei**Se To investigate the comparatively low performance. We compare word-initial devoicing with word-initial voicing as a priming process. The results are summarised in Table. 4. The accuracy of the predictions for the unseen p was substantially lower in the case of word-initial voicing (40%) compared with the word-initial devoicing (74.8%). Interestingly, word-initial voicing involves the same process as intervocalic voicing
(p>b), with only different environments triggering the process.
## 7 Word-Internal Representations
To test whether seq2seq models can learn a representation of word-internal structures, such as syllables, we experiment with examples of haplology.
Haplology (tatasa > **tasa**) is the process in which a repeated sequence of sounds is simplified to a single occurrence. For example, if the word **haplology** were to undergo haplology, it would reduce the sequence lolo to lo, haplology > **haplogy**.
In this experiment, we include two additional processes so the model can witness the contrast between vowels and consonants separately: (1) wordfinal vowel deletion and (2) word-final consonant deletion.
| Process | Test Size | Accuracy |
|----------------------|-------------|------------|
| Overview | 3264 | 0.959 |
| → Consonant deletion | 992 | 0.999 |
| → Vowel deletion | 992 | 0.998 |
| → Haplology overview | 1280 | 0.898 |
| Haplology | 920 | 0.972 |
| Unseen CVCV | 269 | 0.944 |
| Double Haplology | 91 | 0.011 |
| VCVC test | 2658 | 0.782 |
To test the generalisation capacity of the model, at test time, we include the following withheld cases: unseen CVCV structures—i.e. cases where haplology should apply, but the specific CVCVsequence is never seen in the training data; words where haplology occurs more than once; and VCVC structures to see if the model (erroneously)
learns to delete any repeating sequence of symbols.
In our experiment, we withhold from the training set the following CVCV-sequences: **dede, fofo,**
kuku, wowo, baba, vivi, papa, titi, soso, momo, nene, rere, lili, SuSu, jiji, ÙuÙu, NaN**a, gugu**.
Note that haplology includes both cases where haplology applies and does not since the input word may or may not contain a CVCV-sequence where the two CVs are identical.
Table 7 summarises the results obtained. The model shows high accuracy for the supplementary word-final vowel and consonant deletion processes.
We separate the haplology cases further into specific test cases. Our results from the unseen CVCV
category show strong evidence for model generalisation of CV structures. We further tested the same model on a separate test set consisting of VCVC structures. We see that for approximately 78% of the set, it correctly recognises these cases as incorrect conditions for haplology. In the remaining instances, the model does show a rare over-generalisation to sometimes delete repeating sequences regardless of the characteristics of the sequence.
The largest source of error within the haplology cases is the scenario in which haplology can be applied twice within the same word. In these cases, typically, the first case of repeating CV is deleted, and the second instance remains untouched, as when outputting **fuejaja** with input **fufuejaja**, instead of the gold **fueja**.
## 8 Conclusion
The transformer model successfully models all 29 phonological phenomena with slight variation across phenomenon complexity. Our results show that the model can generalize linguistic categories and structures. Through haplology, we show that the model appears to learn to recognize and generalize syllabic structure and is capable of recognizing the difference between VC and CV and can also generalize the transformation triggered by haplology to unseen CV sequences.
## Limitations
One drawback of the experiments presented here is the reliance on a constructed language. While we have tried to design a language that is as representative of natural language as possible, there may be additional statistical effects that are not taken into account. For example, it is unlikely that one language would capture all 29 phenomena presented here and that the process would be triggered enough times to produce a large enough corpus.
How these findings extended to existing language corpora is an open question for future studies.
## References
Lyle Campbell. 2013. *Historical Linguistics*. Edinburgh University Press.
Kyle E Chambers, Kristine H Onishi, and Cynthia Fisher. 2010. A Vowel is a Vowel: Generalizing Newly Learned Phonotactic Constraints to New Contexts. *Journal of Experimental Psychology: Learning, Memory, and Cognition*, 36(3):821.
G. N. Clements. 1990. The Role of the Sonority Cycle in Core Syllabification, volume 1 of *Papers in Laboratory Phonology*, page 283–333. Cambridge University Press.
Micha Elsner. 2021. What Transfers in Morphological Inflection? Experiments with Analogical Models. In Proceedings of the 18th SIGMORPHON Workshop on Computational Research in Phonetics, Phonology, and Morphology, pages 154–166, Online. Association for Computational Linguistics.
Sara Finley. 2011. Generalization to Novel Consonants in Artificial Grammar Learning. In *Proceedings of* the Annual Meeting of the Cognitive Science Society, volume 33.
Sara Finley and William Badecker. 2009. Artificial Language Learning and Feature-based Generalization. *Journal of memory and language*, 61(3):423–
437.
Coleman Haley and Colin Wilson. 2021. Deep Neural Networks Easily Learn Unnatural Infixation and Reduplication Patterns. In *Proceedings of the Society for Computation in Linguistics 2021*, pages 427–
433, Online. Association for Computational Linguistics.
Mans Hulden. 2009. Foma: a Finite-State Compiler and Library. In *Proceedings of the Demonstrations* Session at EACL 2009, pages 29–32, Athens, Greece.
Association for Computational Linguistics.
Jordan Kodner, Salam Khalifa, Khuyagbaatar Batsuren, Hossep Dolatian, Ryan Cotterell, Faruk Akkus, Antonios Anastasopoulos, Taras Andrushko, Aryaman Arora, Nona Atanalov, Gábor Bella, Elena Budianskaya, Yustinus Ghanggo Ate, Omer Goldman, David Guriel, Simon Guriel, Silvia GurielAgiashvili, Witold Kieras, Andrew Krizhanovsky, ´
Natalia Krizhanovsky, Igor Marchenko, Magdalena Markowska, Polina Mashkovtseva, Maria Nepomniashchaya, Daria Rodionova, Karina Scheifer, Alexandra Sorova, Anastasia Yemelina, Jeremiah Young, and Ekaterina Vylomova. 2022.
SIGMORPHON–UniMorph 2022 Shared task 0:
Generalization and Typologically Diverse Morphological Inflection. In *Proceedings of the 19th* SIGMORPHON Workshop on Computational Research in Phonetics, Phonology, and Morphology, pages 176–203, Seattle, Washington. Association for Computational Linguistics.
Peter Ladefoged and Ian Maddieson. 1996. The Sounds of the World's Languages, volume 1012. Blackwell Oxford.
Ling Liu and Mans Hulden. 2022. Can a Transformer Pass the Wug Test? Tuning Copying Bias in Neural Morphological Inflection Models. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 739–749, Dublin, Ireland. Association for Computational Linguistics.
Maddieson. 1984. *Patterns of Sounds*. Cambridge Studies in Speech Science and Communication. Cambridge University Press.
Ian Maddieson. 2013. Syllable structure. In Matthew S. Dryer and Martin Haspelmath, editors, The World Atlas of Language Structures Online.
Max Planck Institute for Evolutionary Anthropology, Leipzig.
Connor Mayer. 2020. An Algorithm for Learning Phonological Classes from Distributional Similarity.
Phonology, 37(1):91–131.
Nicole Mirea and Klinton Bicknell. 2019. Using LSTMs to Assess the Obligatoriness of Phonological Distinctive Features for Phonotactic Learning. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1595–
1605, Florence, Italy. Association for Computational Linguistics.
Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, and Michael Auli. 2019. fairseq: A Fast, Extensible Toolkit for Sequence Modeling. In Proceedings of NAACL-HLT 2019: Demonstrations.
Jennifer Rodd. 1997. Recurrent Neural-Network Learning of Phonological Regularities in Turkish. In CoNLL97: Computational Natural Language Learning.
Elisabeth Selkirk. 1984. 0.(1982). The Syllable. The Structure of Phonological Representations, Part I,
Foris, Dordrecht, pages 337–382.
Miikka Silfverberg, Lingshuang Jack Mao, and Mans Hulden. 2018. Sound Analogies with Phoneme Embeddings. In *Proceedings of the Society for Computation in Linguistics (SCiL) 2018*, pages 136–144.
Miikka Silfverberg, Francis Tyers, Garrett Nicolai, and Mans Hulden. 2021. Do RNN States Encode Abstract Phonological Alternations? In *Proceedings of* the 2021 Conference of the North American Chapter of the Association for Computational Linguistics:
Human Language Technologies, pages 5501–5513, Online. Association for Computational Linguistics.
Katrin Skoruppa and Sharon Peperkamp. 2011. Adaptation to Novel Accents: Feature-based Learning of Context-sensitive Phonological Regularities. *Cognitive Science*, 35(2):348–366.
Reginald D Smith. 2012. Distinct Word Length Frequencies: Distributions and Symbol Entropies.
arXiv preprint arXiv:1207.2334.
## A Summary Of Phonological Processes
Affrication a process where either a stop, or fricative, becomes an affricate.
Anaptyxis (VCCV > VCVCV) a kind of epenthesis where an extra vowel is inserted between two consonants.
Aphaeresis (atata > tata) the deletion of word initial vowels.
Apocope (tata > tat) the loss of a sound, usually a vowel, at the end of a word.
Deaffrication an affricate becomes a fricative.
Degemination (CC > C) a sequence of two identical consonants is reduced to a single occurrence.
Devoicing the devoicing of stops word-finally.
Diphthongization an original single vowel changes into a sequence of two vowels.
Excrescence (amra > ambra; anra > andra; ansa
> antsa) the insertion of a consonant. In our case, the insertion of b, d, or t.
Gemination (C > CC) produces a sequence of two identical consonants from a single consonant.
Hiatus glide (puo > pujo) a semi-vowel/glide is inserted between falling vowel pair.
Hiatus stop (pia -> pika) the insertion of a stop which breaks up a falling vowel pair.
Intervocalic Voicing various sounds become voiced between vowels, in this case we focus on stops.
Lengthening (tast > ta:t) a vowel lengthens subsequent to the loss of a following consonant, also called *compensatory lengthening*.
Metathesis (asta > atsa; asata > atasa) a change in which sounds exchange positions with one another within a word.
Monophthongization a diphthong changes into a single vowel.
Nasal Assimilation (np > mp) the change of nasal sounds to agree with the place of articulation of following stops.
Nasalization (ana > ãna) vowels become nasalized before nasal consonants.
Palatalization (k -> Ù, or d -> j) involves the change of a velar/alveolar sound to palato-alveolar, this often takes place before or after i or e.
Paragoge (tat > tata) adds a vowel to the end of a word.
Prothesis (tata > atata) a kind of epenthesis in which a sound is inserted at the beginning of a word.
Rhotacism (ase > are) s becomes r; this takes place between vowels or glides.
Shortening (ta: -> ta) vowels shorten in a variety of contexts, e.g. word-finally.
Spirantization an affricate is weakened to a fricative, or a stop to a fricative.
Strengthening fortition of sounds; an affricate becomes a stop, or a fricative becomes an affricate.
Syncope (atata > atta) the loss of a vowel from the interior of a word (not initially or finally)
Vowel lowering results in high vowels becoming mid or low vowels, or mid vowels becoming low.
Vowel raising is where low vowels raise to mid
(or high) vowels, or mid vowels to high vowels).
## B Model Details
| Hyperparameter | Value |
|---------------------------------|---------|
| Encoder/Decoder layers | 4 |
| Encoder/Decoder attention heads | 4 |
| Optimization | Adam |
| Embedding size | 256 |
| Hidden layer size | 1024 |
| Learning rate | 0.001 |
| Batch Size | 400 |
| Label Smoothing | 0.1 |
| Gradient clip threshold | 1.0 |
| Warmup updates | 1000 |
| Max updates | 6000 |
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
A1. Did you describe the limitations of your work?
Left blank.
A2. Did you discuss any potential risks of your work?
Left blank.
A3. Do the abstract and introduction summarize the paper's main claims?
Left blank.
A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B **Did You Use Or Create Scientific Artifacts?**
Left blank.
B1. Did you cite the creators of artifacts you used?
Left blank.
B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
Left blank.
B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
Left blank.
B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
Left blank.
B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
Left blank.
B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
Left blank.
## C **Did You Run Computational Experiments?**
Left blank.
C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
Left blank.
The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance.
C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Left blank.
C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
Left blank.
C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
Left blank.
D **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
Left blank.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)?
Left blank.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
Left blank.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
Left blank.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
Left blank. |
norouzi-etal-2023-dims | {D}i{MS}: Distilling Multiple Steps of Iterative Non-Autoregressive Transformers for Machine Translation | https://aclanthology.org/2023.findings-acl.542 | The computational benefits of iterative non-autoregressive transformers decrease as the number of decoding steps increases. As a remedy, we introduce Distill Multiple Steps (DiMS), a simple yet effective distillation technique to decrease the number of required steps to reach a certain translation quality. The distilled model enjoys the computational benefits of early iterations while preserving the enhancements from several iterative steps. DiMS relies on two models namely student and teacher. The student is optimized to predict the output of the teacher after multiple decoding steps while the teacher follows the student via a slow-moving average. The moving average keeps the teacher{'}s knowledge updated and enhances the quality of the labels provided by the teacher. During inference, the student is used for translation and no additional computation is added. We verify the effectiveness of DiMS on various models obtaining 7.8 and 12.9 BLEU points improvements in single-step translation accuracy on distilled and raw versions of WMT{'}14 De-En.Full code for this work is available here: \url{https://github.com/layer6ai-labs/DiMS} | # Dims: Distilling Multiple Steps Of Iterative Non-Autoregressive Transformers For Machine Translation
Sajad Norouzi∗
Layer6 AI
[email protected] Rasa Hosseinzadeh∗
Layer6 AI
[email protected]
## Abstract
The computational benefits of iterative nonautoregressive transformers decrease as the number of decoding steps increases. As a remedy, we introduce Distill Multiple Steps
(**DiMS**), a simple yet effective distillation technique to decrease the number of required steps to reach a certain translation quality. The distilled model enjoys the computational benefits of early iterations while preserving the enhancements from several iterative steps. DiMS relies on two models namely student and teacher.
The student is optimized to predict the output of the teacher after multiple decoding steps while the teacher follows the student via a slow-moving average. The moving average keeps the teacher's knowledge updated and enhances the quality of the labels provided by the teacher. During inference, the student is used for translation and no additional computation is added. We verify the effectiveness of DiMS on various models obtaining 7.8 and 12.9 BLEU points improvements in single-step translation accuracy on distilled and raw versions of WMT'14 De-En. Full code for this work is available here: https://github.com/
layer6ai-labs/DiMS.
## 1 Introduction
Neural machine translation models typically follow an autoregressive decoding strategy, generating the target sentence one token at a time. This sequential nature makes the inference process slow and dependent on the output sequence length. To address this limitation Gu et al. (2018) introduces the NonAutoregressive Transformer (NAT). NAT generates the entire target sentence in parallel, reducing the latency by an order of magnitude. NAT can be considered as a member of a broader family of iterative non-autoregressive Transformers (iNAT) (Lee et al., 2020; Stern et al., 2019; Ghazvininejad et al., 2019) where the number of decoding steps is fixed
∗Equal contribution.
Felipe Pérez
![0_image_0.png](0_image_0.png)
Layer6 AI
[email protected] Maksims Volkovs Layer6 AI
[email protected] and independent of the sequence length. By tuning the number of decoding steps, one can control the trade-off between speed and quality. While iNATs can be considered as efficient alternatives to their autoregressive counterparts, Kasai et al. (2020b)
shows that autoregressive models can be sped up without loss in accuracy by combining shallow decoders with deep encoders. This diminishes the computational advantage of iNATs and challenges their motivation. The focus of recent work has thus been shifted to design single-step NAT models (Ghazvininejad et al., 2020a; Qian et al., 2021; Du et al., 2021).
In order to preserve the enhancements obtained by multiple decoding iterations of iNATs, we introduce Distill Multiple Steps (DiMS), a distillation algorithm applicable to a wide range of iterative models. Given a pre-trained iNAT, referred to as teacher, a student aims to replicate the behavior of multiple iterative steps of the teacher with one decoding pass. This process resembles the wellknown knowledge distillation framework(Hinton et al., 2015). However, instead of reducing the number of parameters, we aim to decrease the number
![1_image_0.png](1_image_0.png)
![1_image_1.png](1_image_1.png)
of decoding passes. The final model then enjoys the translation quality of multi-steps iNAT with the computational efficiency of single-step translation.
The proposed distillation can be repeated iteratively, where at the end of each round the newly optimized student becomes the next teacher. While effective, iterative distillation is slow as it requires multiple rounds of training until convergence. Alternatively, we propose updating the parameters of the teacher with an exponential moving average (EMA) of the student. This gradually transfers the new knowledge learned by the student to the teacher and can be viewed as a continuous variant of iterative distillation. Figure 1 depicts the DiMS
algorithm.
We demonstrate the effectiveness of our approach on several public datasets by showing that DiMS obtains substantial improvements on singlestep translation with gains of up to 7.8 BLEU
points on the distilled training dataset, while the gains on raw datasets are even greater. Notably, we are able to surpass many leading NAT models designed specifically for single-step translation.
We further show that EMA considerably speeds up training and converges to a comparable accuracy with iterative distillation in a fraction of epochs.
## 2 Background
In this section, we lay out a formal framework for iNATs. We use the setup of Conditional Masked Language Models (CMLM). CMLM
first introduced in Ghazvininejad et al. (2019)
and subsequently adopted in many iNAT models (Ghazvininejad et al., 2020b; Kasai et al., 2020a; Huang et al., 2021). The source sentence, target sentence, and target sequence length are denoted by x, y and N, respectively.
## 2.1 Training
Given a partially masked reference sentence y˜ and the corresponding source context x, the model is trained to reveal all the masked positions simultaneously (Ghazvininejad et al., 2019). From a probabilistic perspective, this imposes a conditional independence assumption on the predicted tokens.
Formally, the training loss is:
$$\mathbb{E}_{{\tilde{\mathbf{y}}}\sim\mathbf{M}(\mathbf{y})}\sum_{i\in\xi({\tilde{\mathbf{y}}})}-\log p_{\theta}(y_{i}|\mathbf{x},{\tilde{\mathbf{y}}}),$$
where M is a distribution over all partially masked target sentences and ξ is a function that returns the set of masked indices. The training objective above implicitly assumes access to the target sentence length. To resolve this issue, CMLM trains a parametric model, *length predictor*, to predict the output length.
## 2.2 Inference
The inference begins by creating a template y˜
(0)
with N˜ masked tokens, where N˜ is the output of the length predictor. At iteration t of the inference, the model predicts the translation r
(t) given y˜
(t−1)
and x as inputs. Depending on the number of decoding iterations S, typically a linear unmasking policy is used where at each step *N /S* ˜ tokens with the highest probability are revealed. This process is repeated S times, resulting in a fully revealed sentence. In other words, y˜
(t)
iis set to r
(t)
i when i ∈ arg-topkk= N˜
S
npθ r
(t)
j
x, y˜
(t−1)o, where pθ denotes the output probability of the model.
Otherwise y˜
(t)
istays equal y˜
(t−1)
i.
Note that multiple length candidates can be considered (e.g. N˜ ± 1) with the average token probability as a ranking criterion. This is similar to beam
## Algorithm 1 Dims
Require: Data set D, pre-trained model ϕ, Hidden state loss factor λ, teacher steps n, EMA momentum µ, learning rate η θt, θs ← ϕ ▷ Initialize teacher and student while not converged do
(x, y) ∼ D ▷ Sample data y˜ ∼ M(y) ▷ Sample masking pt ← Iθt
(x, y˜, n) ▷ Run the teacher for n iterative steps ps ← Iθs
(x, y˜, 1) ▷ Run the student for a single step LDiMS ←Pi KLpt,i|ps,i+ λ∥et,i − es,i∥
2 ▷ Compute the DiMS loss θs ← Optimizer(θs, ∇θsLDiMS, η) ▷ Gradient based optimization of the student θt ← (1 − µ)θs + µθt ▷ EMA Update of the teacher end while search in autoregressive models but applied to the output sequence length. It is referred to as length beam.
## 3 Distillation Of Iterative Non-Autoregressive Transformers
Increasing the number of decoding steps typically improves accuracy, but diminishes the computational advantage of iNATs. Our objective is to reduce the number of decoding steps without degrading the performance. More specifically, we want to condense the translation quality of multiple steps of a teacher into one decoding pass of a student. For instance, consider an iterative model
(teacher) that uses eight decoding steps. By replicating four steps of the teacher with one decoding pass, two steps of the student would be sufficient to reach a similar performance.
The standard way of knowledge distillation would have the teacher generate soft labels for all intermediate iterations, and optimize the student to track the teacher's output with fewer steps, but doing such generation on-the-fly greatly increases the training cost. This process can be moved to a pre-processing phase, at the cost of large memory requirement. We propose to use partially masked reference sentences as an approximation to the intermediate predictions of the teacher, which eliminates the need for several decoding passes or large memory capacity.
The distillation process starts by initializing the student and the teacher to the same pre-trained model with parameters ϕ i.e. θs = θt = ϕ where θs and θt denote the parameters of the student and teacher. Then, the teacher processes a partially masked sentence y˜ through n iterative steps with a linear unmasking policy. More precisely, i/n of the originally masked tokens are revealed up to step i and after the final pass, no masked token remains. This is similar to the inference procedure outlined in Section 2.2, but instead of starting from a fully masked sentence, it starts from a partially masked one. The student is optimized to match the teacher's soft labels and a temperature is used to control the smoothness of the labels. With enough capacity, the student is expected to imitate the behavior of n consecutive steps of the teacher with one decoding pass.
## 3.1 Training Loss
We denote the output distribution after n iterative steps on the partially masked sentence y˜ by Iθ (y˜, x, n) where θ represents the parameters of the model. The distillation loss can be described as:
Pi∈ξ(y˜) KLpt,i|ps,iwhere pt = Iθt
(y˜, x, n),
ps = Iθs
(y˜, x, 1) and i in subscript denotes the index in the sentence. Note that the teacher's soft labels do not come from the same decoding iteration i.e. whenever a token is revealed, the corresponding soft labels are fixed in pt. Thus, the student receives labels from various decoding steps of the teacher. Figure 2 depicts the process teacher follows to produce the labels for two iterative steps.
From the student's point of view, the primary difference between DiMS and CMLM training (Section 2.1) is the use of soft labels generated by the teacher instead of the ground truth tokens.
To facilitate the distillation, we combine the KLdivergence with the Euclidean distance of the last layers' hidden states of the teacher and the student.
This transfers the knowledge concealed within the hidden states that might not be discernible in soft labels. We refer to this as *hidden state loss*. Sim-
![3_image_0.png](3_image_0.png)
ilar to the KL-divergence, the hidden state loss is computed over the masked indices.
To summarize, DiMS training loss has two terms:
i) KL-divergence between distributions predicted by the teacher and the student. ii) The Euclidean distance between the last hidden states of two models. Denoting teacher's and student's last hidden state by et and es, DiMS loss can be written formally as:
$${\mathcal{L}}_{\mathrm{DiMS}}=\sum_{i}{\mathrm{KL}}\big(\mathbf{p}_{t,i}|\mathbf{p}_{s,i}\big)+\lambda\|\mathbf{e}_{t,i}-\mathbf{e}_{s,i}\|^{2},$$
where pt = Iθt
(y˜, x, n) and ps = Iθs
(y˜, x, 1).
The hyper-parameter λ controls the contribution of hidden state loss. When the distillation is completed, the student is used for inference.
## 3.2 Ema Update Of The Teacher
As the distillation progresses, the performance gap between multiple steps of the teacher and a singlepass of the student shrinks, making the teacher's labels less informative. Two approaches can be considered to sustain the usefulness of the teacher's labels: i) Increasing the number of teacher's iterative steps. ii) Restarting the distillation where the recently optimized student becomes the new teacher and repeating this process several times, i.e. θ
(n)
t ← θ
(n−1)
s . The former makes the training more expensive as the number of sequential steps grows, and the latter requires repeated distillation rounds leading to a longer training time.
Instead, we propose updating the teacher with the student's recently learned knowledge. As the student's single-step output approaches the teacher's multi-step, the student's multi-step performance would improve as well, and it is beneficial to use the improved student as the new teacher. However, replacing the teacher directly with the student would hurt the training stability, and can lead to a pathological solution of mapping everything to a constant vector. This degenerate solution shortcuts the LDiMS loss by setting it to a global minimum of zero. To alleviate this, we update the teacher with a slow-exponential-moving average of the student, which transfers the new knowledge learned by the student to the teacher in a controlled manner. The updated teacher now provides a better training target for the student, creating a positive feedback loop between the two models. The teacher also benefits from the ensembling effects of the EMA (Izmailov et al., 2018). Algorithm 1 outlines the steps for DiMS training with EMA.
## 4 Experiments 4.1 Experimental Setup
We use Fairseq (Ott et al., 2019) for all the experiments and follow the default data splits. All models are Transformers with encoder-decoder architecture, each having 6 layers and 512-dimensional hidden states. Adam optimizer with inverse squared root learning rate scheduler is used along with mixed precision. EMA and hidden state loss are leveraged with two iterative steps of the teacher unless otherwise stated. We use early stopping based on single-step BLEU score on the validation set. The final model is the average of 5 best checkpoints. Dropout is disabled for the teacher and the student since empirical improvements are observed. We conduct experiments on both the raw and distilled dataset that is obtained from an autoregressive model (Gu et al., 2018). Training is done with 4 Tesla V100 GPUs (32 GB) and we report all the hyper-parameters in Section C of the appendix. The extra computational cost of distillation is a small fraction of original training. We report a detailed comparison in Section E of the appendix.
## 4.2 Main Results
| Model | WMT'14 | WMT'16 | | |
|------------------------------------------|----------|----------|-------|-------|
| En-De | De-En | En-Ro | Ro-En | |
| CMLM (Ghazvininejad et al., 2019) | 18.1 | 21.8 | 27.3 | 28.2 |
| SMART (Ghazvininejad et al., 2020b) | 18.6 | 23.8 | - | - |
| CMLMC (Huang et al., 2021) | 19.6 | 23.6 | 28.2 | 29.0 |
| Aux. Reg. (Wang et al., 2019) | 20.7 | 24.8 | - | - |
| Bag-of-ngram (Shao et al., 2020) | 20.9 | 24.6 | 28.3 | 29.3 |
| Hint-based Loss (Shao et al., 2020) | 21.1 | 25.2 | - | - |
| Bigram CRF (Sun et al., 2019) | 23.4 | 27.2 | - | - |
| EM+ODD (Sun and Yang, 2020) | 24.5 | 27.9 | - | - |
| ENGINE (Tu et al., 2020) | - | 28.1 | - | 28.2 |
| GLAT (Song et al., 2021) | 25.2 | 29.8 | 31.2 | 32.04 |
| CMLMC + DiMS | 26.7 | 31.1 | 33.2 | 33.6 |
| Imputer (Saharia et al., 2020) | 25.8 | 28.4 | 32.3 | 31.7 |
| AXE (Ghazvininejad et al., 2020a) | 23.5 | 27.9 | 30.8 | 31.5 |
| OAXE (Du et al., 2021) | 26.1 | 30.2 | 32.4 | 33.3 |
| AlignNART (Song et al., 2021) | 26.4 | 30.4 | 32.5 | 33.1 |
| FullyNAT + CTC + GLAT(Gu and Kong, 2020) | 27.2 | 31.4 | 33.7 | 34.2 |
| DAT (Huang et al., 2022b) | 27.3 | 31.3 | - | - |
Our main experiments are conducted on WMT'14 En-De and WMT'16 En-Ro datasets with two models: i) CMLM, a pivotal work in iNAT literature showing the effectiveness of conditional masked language models. ii) CMLMC, a recent work improving CMLM by incorporating a correction mechanism. The corresponding official repositories are used to train the teachers. Both models exploit a length predictor that is conditioned on the encoder's hidden states. For CMLMC models we use encoder side masking and prediction (Guo et al., 2020) to further boost the performance of the teacher. To make the length predictor compatible with changes in the encoder, we keep the length predictor loss during distillation.
Figure 3 contrasts the single-step BLEU score of students with teachers evaluated for various number of decoding steps. DiMS considerably improves the translation quality of the single-step inference, reducing or eliminating the gap with multi-step inference. For example, on the WMT'14 De-En dataset, the single-step of CMLMC+DiMS
surpasses the teacher's 4-step performance. We compared our best single-step model with strong baselines in Table 1 showing the effectiveness of our approach. DiMS outperforms all cross-entropy based models and makes cross-entropy based models competitive with their alignment based counterparts.
## 4.3 Results On An Alignment Based Model
| XE-Based Alignment-Based |
|----------------------------|
To show the versatility of DiMS, we conduct experiment on alignment-based models leveraging Connectionist Temporal Classification (CTC) (Graves et al., 2006) objective. Imputer (Saharia et al.,
2020) is among a few models that are both alignment based and iterative. There is no official implementation of Imputer available, therefore we implement a version ourselves (denoted with †)
1.
Table 2 summarizes the results of DiMS applied to Imputer for both directions of the WMT'14 English-German dataset. While DiMS boosts single step translation of Imputer, it still falls behind more recent alignment based models mentioned in Table 1. However, we believe if one incorporates various tricks introduced for alignment based models recently and create a better iterative model, then DiMS can be an effective tool to further enhance the single step translation. Details of Imputer training and distillation are explained in Section F of the appendix.
Table 2: Single-step test set BLEU score for Imputer models trained on WMT'14 English-German.
## 4.4 Dims On Raw Dataset
The performance of the leading iNATs is at best similar to the autoregressive model used for sequence level knowledge distillation. This limits the final performance of iNATs and makes training without distillation desirable (Huang et al., 2021).
Table 3 shows that DiMS improves the raw performance by a large margin even more than the corresponding distilled variant. For instance, DiMS
gets more than 12 BLEU scores improvements on single-step evaluation of CMLMC.
For one decoding pass, when raw variants of CMLMC are distilled with DiMS the performance is superior to training on the distilled dataset
(without DiMS). This makes DiMS preferable to sequence-level knowledge distillation. Nevertheless, the best performance is obtained when the two distillation approaches are combined.
Table 3: Comparison of student and teacher on raw dataset.
## 4.5 Unsupervised Dims
In previous sections, we assume access to a parallel dataset and feed a partially masked reference sentence to both student and teacher. One can use the teacher to generate synthetic target sentences during the distillation. This relaxes the dependence on the references and enables using monolingual datasets for distillation. As usual, there is a tradeoff between computation and sample quality i.e.
using more decoding passes leads to better data while increasing the computational requirements.
We refer to this unsupervised distillation variant as U-DiMS. Note that unsupervised only refers to the distillation, and for training the teacher we still require access to a parallel dataset. The only dis-
Table 4: Single-step test set BLEU score for models trained with U-DiMS.
tinction between U-DiMS and DiMS is the usage of synthetic data generated by the teacher and the remaining parts are untouched. We run U-DiMS on WMT'14 De-En for CMLM and CMLMC using two iterative steps to generate the synthetic samples. Table 4 shows the effectiveness of U-DiMS,
obtaining a similar performance to DiMS.
Table 5: BLEU score on WMT'16 En-Ro validation set with beam length set to one as its done for early stopping. T stands for the number of teacher decoding steps and is set to two if not specified.
## 4.6 Ablation Studies
We conduct all the ablation studies on CMLM over WMT'16 En-Ro as it is smaller than WMT'14 and validation set is used for evaluation.
| Method | WMT'14 De-En |
|----------------|----------------|
| CMLM | 22.77 |
| CMLM + U-DiMS | 29.45 |
| CMLM + DiMS | 29.74 |
| CMLMC | 23.63 |
| CMLMC + U-DiMS | 30.52 |
| CMLMC + DiMS | 30.81 |
| Method | 1-Step BLEU |
|-------------------------|---------------|
| CMLM | 25.77 |
| CMLM + DiMS | 30.85 |
| CMLM + DiMS - Hidden. | 28.69 |
| CMLM + DiMS (T=4) | 31.04 |
| CMLM + DiMS (T=8) | 30.97 |
| CMLM + DiMS + EMA | 31.63 |
| CMLM + DiMS (T=4) + EMA | 31.52 |
| CMLM + DiMS (T=8) + EMA | 31.36 |
## 4.6.1 Hidden State Loss
| CMLMC | | |
|---------|---------|------|
| Teacher | Student | |
| En-De | 11.7 | 23.2 |
| De-En | 16.4 | 29.3 |
| En-Ro | 21.4 | 29.3 |
| Ro-En | 21.8 | 32.7 |
| Method | WMT'14 En-De | WMT'14 De-En |
|-----------------|----------------|----------------|
| Imputer† | 25.9 | 29.0 |
| Imputer† + DiMS | 26.4 | 29.8 |
To investigate the effects of hidden state loss, we conduct an ablation study in this section. The first block in Table 5 includes BLEU scores for the base DiMS model with and without this term. The single-step performance of the distilled model is improved over 2 BLEU points by leveraging this loss. This supports the fact that the hidden states contain extra information that is not available in soft labels. The exact value of λ is selected based on a grid search reported in Section D of the ap-
![6_image_0.png](6_image_0.png)
## 4.6.2 Ema
In order to establish the computational advantages of the slow-moving average, we compare it with running the base variant for 9 iterative rounds. Figure 4 demonstrates that the EMA variant is able to match the iterative distillation with far fewer updates (almost equal to one round of the distillation).
We observed that it is essential to move the teacher toward the student slowly. For example, when µ ≤ 0.9, the collapse to a degenerate solution (explained in Section 3.2) occurs before the end of the first epoch. We plot the validation curve for various values of µ in Section B of the appendix showing the importance of the slow-moving average.
## 4.6.3 Teacher Decoding Steps
One hyper-parameter in DiMS algorithm is the number of teacher's decoding steps. In order to investigate the effect of this hyper-parameter, we set it to 2, 4, and 8 while turning EMA on and off. The two bottom blocks of Table 5 include the results of this ablation. Although running the teacher for 4 decoding steps shows superior performance without EMA, as soon as we turn it on the gap disappears. This shows that EMA can gradually improve the teacher and remove the need for several iterative steps. Thus, we find no reason to set this hyper-parameter larger than 2 as it only increases distillation's computational cost.
Figure 5: Test set BLEU score on WMT'14 De-En
![6_image_1.png](6_image_1.png) based on the target sentence length for CMLM teacher and student.
4.7 Analysis We study the effect of target sentence lengths on DiMS performance. The test set is divided into five equally-sized buckets based on the target length.
The BLEU scores are reported for each bucket in Figure 5. The main benefit of the iterative model is manifested by large sentences. The reason might be the fact that longer sentences require a context and modeling it becomes challenging with the conditional independence assumption in NAT. It is clear in Figure 5, that the performance is improved in every bucket. This improvement is most visible in the bucket with the highest average sentence length.
This is because of the fact that the same bucket has the largest gap between the teacher's single and multi-step evaluation.
We combine the length predictor objective with ours to account for changes in the encoder's parameters. Interestingly enough, DiMS improves the performance of the length predictor as depicted in Figure 6. This shows that the encoder benefits from the distillation as well.
Table 6 shows a qualitative example from the WMT'14 De-En dataset. The improvements in samples are evident by comparing the predictions of the teacher and the student with the target sentence. We provide more qualitative examples in the appendix.
## 5 Related Works
Many techniques have been proposed for iterative non-autoregressive machine translation. Earlier attempts include denoising autoencoder (Lee et al.,
2018) and insertion-deletion (Stern et al., 2019; Gu et al., 2019). More recently, Ghazvininejad et al. (2019) introduced the Mask-Predict improving the performance of iNATs by employing a conditional masked language model. CMLMC (Huang et al., 2021) and SMART (Ghazvininejad et al.,
2020b) improve CMLM by incorporating a correction mechanism. DisCo (Kasai et al., 2020b) is another variant conditioning each token on an arbitrary subset of the other tokens. DiMS is entangled with the progress in this domain as it requires a pre-trained iterative teacher.
The position constraint in cross-entropy can make the NAT training challenging, therefore Ghazvininejad et al. (2020a) propose aligned crossentropy (AXE), an objective that considers the best monotonic alignment between the target and the model's predictions. Du et al. (2021) relaxes the monotonic assumption and introduces Order Agnostic Cross-Entropy (OAXE). CTC (Libovicky`
and Helcl, 2018) is a similar alignment-based objective that fixes the model output length and considers various alignments leading to the same target.
Imputer (Saharia et al., 2020) extends CTC to benefit from iterative refinements.
GLAT (Qian et al., 2021) shows that the optimization challenges of iNATs can be mitigated by introducing a curriculum learning focusing on sentences with only a few masked tokens in the early stages of the training and gradually increasing the masking ratio. ENGINE (Tu et al., 2020) assumes access to a pre-trained autoregressive model and optimizes a NAT model to maximize the likelihood under the probability distribution defined by the pre-trained model.
Salimans and Ho (2021) applies a distillation technique similar to DiMS on generative models to decrease the number of required steps for generating high-quality images. In contrast to DiMS, the distillation is applied progressively. DiMS eliminates the need for progressive distillation by updating the teacher with EMA. Lastly, the proposed EMA has some resemblance to self-supervised learning techniques (Grill et al., 2020; Caron et al.,
2021; He et al., 2020) where two models are updated, one through gradient-based optimization and the other one through EMA. Despite this similarity, the motivations are quite different. In selfsupervised learning, EMA is proposed as a technique to remove large negative sets whereas here EMA enhances the quality of the labels generated by the teacher.
![7_image_0.png](7_image_0.png)
It is not completely clear why knowledge distillation works in general (Zhou et al., 2019; Huang et al., 2022a). But when it comes to DiMS, we hypothesize that the labels generated by the teacher make the task simpler for the student. In other words, it is difficult for the model to close the gap between its single step prediction and ground truth while distillation with teacher-generated labels reduces this gap. The importance of the gap between labels and the model capacity has also been observed before (Mirzadeh et al., 2020).
## 7 Conclusion
We introduce DiMS, an effective distillation algorithm that enhances the single-step translation quality of a pre-trained iterative model. This is done by replicating the model's multi-step behavior through one decoding pass. The distillation can be repeated to achieve greater gains, but this increases the training time noticeably. We show that the same benefits are obtainable by setting the teacher as a moving average of the student while keeping the training time comparable to one round of the distillation.
Experiments over raw and distilled datasets on four translation tasks for supervised and unsupervised variants validate the effectiveness and versatility of DiMS.
Potential directions for future works include: i)
The same family of iterative models have been applied to automatic speech recognition, thus DiMS
is applicable to this domain. ii) One can combine a pyramid of techniques introduced for iNATs to obtain a strong iterative model and make it computationally efficient via DiMS. **iii)** Large monolingual
| Target | The antibodies hunt down any nicotine molecules in the bloodstream , neutralising them before they reached the brain , preventing a smoker from getting a nicotine hit . |
|----------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| Teacher | The antibodies hunt the nicotine molecules molecblood neutralize them before reach brain a smoker not experience high nicotine . |
| Student | The antibodies hunt the nicotine molecules in the blood and neutralize them before they reach the brain , so a smoker does not experience a nicotine high . |
Table 6: A qualitative example from WMT'14 De-En along with teacher and student's predictions on CMLMC.
sets can be used to distill models with U-DiMS.
## Limitations
While DiMS makes the cross-entropy based family competitive with alignment based variants, it still falls behind one some cases. Moreover, DiMS
can improve the performance of models trained on raw data, but the best performance is still achieved when DiMS is applied on distilled datasets. Therefore, DiMS still depends on an auto-regressive model for the best translation quality.
## Acknowledgments
We are grateful to Xiao Shi Huang for insightful comments and reviewing the initial draft of this paper. We also thank Panteha Naderian for helpful conversations.
## References
Mathilde Caron, Hugo Touvron, Ishan Misra, Hervé Jégou, Julien Mairal, Piotr Bojanowski, and Armand Joulin. 2021. Emerging properties in self-supervised vision transformers. In *CVPR*.
Cunxiao Du, Zhaopeng Tu, and Jing Jiang. 2021. Orderagnostic cross entropy for non-autoregressive machine translation. In *ICML*.
Marjan Ghazvininejad, Vladimir Karpukhin, Luke Zettlemoyer, and Omer Levy. 2020a. Aligned cross entropy for non-autoregressive machine translation.
In *ICML*.
Marjan Ghazvininejad, Omer Levy, Yinhan Liu, and Luke Zettlemoyer. 2019. Mask-predict: Parallel decoding of conditional masked language models. In EMNLP-IJCNLP.
Marjan Ghazvininejad, Omer Levy, and Luke Zettlemoyer. 2020b. Semi-autoregressive training improves mask-predict decoding. arXiv preprint:2001.08785.
Alex Graves, Santiago Fernández, Faustino Gomez, and Jürgen Schmidhuber. 2006. Connectionist temporal
classification: labelling unsegmented sequence data with recurrent neural networks. In *ICML*.
Jean-Bastien Grill, Florian Strub, Florent Altché, Corentin Tallec, Pierre Richemond, Elena Buchatskaya, Carl Doersch, Bernardo Avila Pires, Zhaohan Guo, Mohammad Gheshlaghi Azar, et al. 2020. Bootstrap your own latent-a new approach to self-supervised learning. *NeurIPS*.
Jiatao Gu, James Bradbury, Caiming Xiong, Victor OK
Li, and Richard Socher. 2018. Non-autoregressive neural machine translation. In *ICLR*.
Jiatao Gu and Xiang Kong. 2020. Fully nonautoregressive neural machine translation: Tricks of the trade. *arXiv preprint:2012.15833*.
Jiatao Gu, Changhan Wang, and Junbo Zhao. 2019. Levenshtein transformer. *NeurIPS*.
Junliang Guo, Linli Xu, and Enhong Chen. 2020.
Jointly masked sequence-to-sequence model for nonautoregressive neural machine translation. In ACL.
Kaiming He, Haoqi Fan, Yuxin Wu, Saining Xie, and Ross Girshick. 2020. Momentum contrast for unsupervised visual representation learning. In *CVPR*.
Geoffrey Hinton, Oriol Vinyals, Jeff Dean, et al. 2015.
Distilling the knowledge in a neural network. *arXiv* preprint: 1503.02531.
Fei Huang, Tianhua Tao, Hao Zhou, Lei Li, and Minlie Huang. 2022a. On the learning of non-autoregressive transformers. In *ICML*.
Fei Huang, Hao Zhou, Yang Liu, Hang Li, and Minlie Huang. 2022b. Directed acyclic transformer for nonautoregressive machine translation. arXiv preprint:
2205.07459.
Xiao Shi Huang, Felipe Pérez, and Maksims Volkovs.
2021. Improving non-autoregressive translation models without distillation. In *ICLR*.
Pavel Izmailov, Dmitrii Podoprikhin, Timur Garipov, Dmitry Vetrov, and Andrew Gordon Wilson. 2018.
Averaging weights leads to wider optima and better generalization. *arXiv preprint: 1803.05407*.
Jungo Kasai, James Cross, Marjan Ghazvininejad, and Jiatao Gu. 2020a. Non-autoregressive machine translation with disentangled context transformer. In ICML.
Jungo Kasai, Nikolaos Pappas, Hao Peng, James Cross, and Noah Smith. 2020b. Deep encoder, shallow decoder: Reevaluating non-autoregressive machine translation. In *ICLR*.
Jason Lee, Elman Mansimov, and Kyunghyun Cho.
2018. Deterministic non-autoregressive neural sequence modeling by iterative refinement. In *EMNLP*.
Jason Lee, Raphael Shu, and Kyunghyun Cho. 2020.
Iterative refinement in the continuous space for non-autoregressive neural machine translation. In EMNLP.
Jindˇrich Libovicky and Jind ` ˇrich Helcl. 2018. Endto-end non-autoregressive neural machine translation with connectionist temporal classification. In EMNLP.
Seyed Iman Mirzadeh, Mehrdad Farajtabar, Ang Li, Nir Levine, Akihiro Matsukawa, and Hassan Ghasemzadeh. 2020. Improved knowledge distillation via teacher assistant. In *AAAI*.
Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, and Michael Auli. 2019. fairseq: A fast, extensible toolkit for sequence modeling. *arXiv preprint: 1904.01038*.
Lihua Qian, Hao Zhou, Yu Bao, Mingxuan Wang, Lin Qiu, Weinan Zhang, Yong Yu, and Lei Li. 2021. Glancing transformer for non-autoregressive neural machine translation. In ACL.
Chitwan Saharia, William Chan, Saurabh Saxena, and Mohammad Norouzi. 2020. Non-autoregressive machine translation with latent alignments. In *EMNLP*.
Tim Salimans and Jonathan Ho. 2021. Progressive distillation for fast sampling of diffusion models. In ICLR.
Chenze Shao, Jinchao Zhang, Yang Feng, Fandong Meng, and Jie Zhou. 2020. Minimizing the bagof-ngrams difference for non-autoregressive neural machine translation. In *AAAI*.
Jongyoon Song, Sungwon Kim, and Sungroh Yoon.
2021. AligNART: Non-autoregressive neural machine translation by jointly learning to estimate alignment and translate. In *EMNLP*.
Mitchell Stern, William Chan, Jamie Kiros, and Jakob Uszkoreit. 2019. Insertion transformer: Flexible sequence generation via insertion operations. In *ICML*.
Zhiqing Sun, Zhuohan Li, Haoqing Wang, Di He, Zi Lin, and Zhihong Deng. 2019. Fast structured decoding for sequence models. *NeurIPS*.
Zhiqing Sun and Yiming Yang. 2020. An em approach to non-autoregressive conditional sequence generation. In *ICML*.
Lifu Tu, Richard Yuanzhe Pang, Sam Wiseman, and Kevin Gimpel. 2020. Engine: Energy-based inference networks for non-autoregressive machine translation. In ACL.
Yiren Wang, Fei Tian, Di He, Tao Qin, ChengXiang Zhai, and Tie-Yan Liu. 2019. Non-autoregressive machine translation with auxiliary regularization. In AAAI.
Chunting Zhou, Graham Neubig, and Jiatao Gu.
2019. Understanding knowledge distillation in nonautoregressive machine translation. *arXiv preprint:*
1911.02727.
## A Teacher Comparison
Table 7 compares teachers trained by us with the original work proposing the model.
## B Ema Momentum Effect
We showcase the importance of the slow moving
![10_image_1.png](10_image_1.png)
average in Figure 7. As we increase the momentum the training becomes more stable and leads to a better validation set BLEU score.
## C Hyper-Parameters For Distillation
| Hyper-parameter | CMLM/CMLMC |
|-----------------------------|--------------|
| Learning rate (η) | 1e-3 |
| Adam β | (0.9, 0.98) |
| Warm-up updates | 0 |
| Max-tokens/GPU | 8192 |
| EMA momentum (µ) | 0.9992 |
| Hidden state loss factor(λ) | 0.7 |
| Length loss factor | 0.1 |
| Mask policy | Uniform |
| Temperature | 0.5 |
![10_image_3.png](10_image_3.png)
## D Ablation On Hidden State Loss Coefficient
The importance of the hidden state loss is shown in Section 4.6.1 of the main body. We conduct an ablation study in this section to find the optimal value of λ that controls the contribution of the hidden state loss.
## E Computational Cost
During the distillation we have to run teacher for two steps which adds extra computation. More Figure 8: Best validation BLEU on WMT'16 En-Ro for
![10_image_0.png](10_image_0.png)
CMLM with various hidden state loss coefficient (λ)
specifically on a machine with 4 Nvidia-V100 32GB GPUs, De-En training takes approximately 11 minutes per epoch compared to 27 minutes for distillation and on En-Ro dataset training and distillation take 2 and 8 minutes per epoch, respectively. However, the number of epochs for distillation is significantly less than teacher training.
Precisely, teacher training takes 250 and 200 on De-En and En-Ro datasets respectively while distillations takes 10 epochs for De-En and 30 epochs for En-Ro. Figure 9 compares the overall time for training and distillation on De-En and En-Ro datasets and it shows that the distillation time is one order of magnitude smaller than training time.
![10_image_2.png](10_image_2.png)
Note that teacher is being run in the evaluation mode, thus the activations maps are not kept in the memory. Therefore, the teacher can be run with a larger batch-size which further reduces the computational costs. We leave this as future works as it adds implementation complexity.
## F Imputer Details
As mentioned in the main body, there is no official implementation of Imputer available online. Here, we explain the differences between our implementation and the original paper. Imputer proposes a
| Method | Iteration | WMT' 14 | WMT' 16 | | |
|----------------------------------|-------------|-----------|-----------|------|------|
| En-De | De-En | En-Ro | Ro-En | | |
| CMLM(Ghazvininejad et al., 2019) | 10 | 27.0 | 30.5 | 33.1 | 33.3 |
| CMLM | 10 | 26.9 | 31.2 | 33.1 | 33.6 |
| CMLMC(Huang et al., 2021) | 10 | 28.4 | 31.4 | 34.6 | 34.1 |
| CMLMC | 10 | 27.3 | 31.2 | 34.1 | 34.0 |
| Imputer(Saharia et al., 2020) | 8 | 28.2 | 31.8 | 34.4 | 34.1 |
| Imputer† | 8 | 28.5 | 31.3 | - | - |
Table 7: Comparison of our teachers with the numbers reported in the original papers.
pre-training phase where the model is optimized merely with the CTC objective. We find it unnecessary as the model reaches a better or competitive performance without it. Imputer leverages a unified decoder rather than an encoder-decoder architecture incorporated here. For Imputer training, computing the alignment with the highest probability is necessary. This increases the training cost and (Saharia et al., 2020) proposes either a pre-processing stage or using a stale copy of the active model to manage the extra computation. We compute the best alignment on the fly as it is still computationally feasible. Similar to Imputer inference, extra care is taken to make sure consecutive tokens are not unmasked in the same step. Instead of a Bernoulli masking policy during training, we used a block masking policy.
For the distillation, Imputer mainly benefits from two iterative steps and the gains are not as significant after that. Therefore, there is no incentive to use EMA.
## Cmlm Wmt'14 De-En Cmlmc Wmt'14 De-En
| Target | The rate of 3.1 per cent is indeed better than the previous year and is also better than in September , " however , we had hoped for more , " said Monika Felder - Bauer , acting branch manager of the Employment Agency in Sonthofen . |
|----------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| Teacher | Although the quota was better 3.1 better than last year and better than September , we would hoped more , " Monika - Bauer Deputy of the Labour Agency in Sonthofen . |
| Student | Although the quota at 3.1 % is better than last year and is also better than in September , " we would have hoped for more , " says Monika Felder - Bauer , deputy head of the Labour Agency in Sonthofen . |
| CMLM | WMT'16 Ro-En |
| Target | we must ask these people to learn the language , try to appropriate our values , to stop having one foot in europe and one in their home country , bringing the rest of the family including through marriages of convenience . |
| Teacher | let us ask these people to learn their language , try to take values , stop longer stand in europe and with one their country home origin , bringing the rest of family , including through convenience marriages . |
| Student | let us ask these people to learn the language , try to take over values , no longer stand in europe and with one in their home country , bringing the rest of the family , including through convenience marriages . |
| Target | Edward Snowden , the US intelligence whistleblower , has declared that he is willing to travel to Berlin to give evidence to the German parliament if the US National Security Agency and its director Keith Alexander fail to provide answers about its activities . |
|----------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| Teacher | Edward Snowden , the whistleblower of the US intelligence , has that he is to travel to Berlin and testify the German destag if the National Security Agency its director Keith Alexander not provide answers about their activities . |
| Student | Edward Snowden , the whistleblower of the US intelligence , has said that he is prepared to travel to Berlin and testify to the German destag if the American National Security Agency and its director Keith Alexander do not provide answers to their activities . |
## Cmlmc Wmt'16 Ro-En
| Target | during the routine control , the border policemen observed in the truck 's cab , a large travel bag , which contained personal things of many people , which is why they conducted a thorough check on the means of transport . |
|----------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| Teacher | at specific control , border police officers a large travel of travel the cabvan , where there were things for several people , which is why carried out thorough control over the vehicle of transport . |
| Student | at specific control , border police observed , in the cabin , a large travel getravel , where there were personal things for several people , which is why they carried out thorough control over the vehicle of transport . |
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
✓ A1. Did you describe the limitations of your work?
In the Limitation section after conclusion.
✗ A2. Did you discuss any potential risks of your work?
The proposed method is to speed up translation (inference time) which does not seem to have a risk.
Anything that the method makes possible was already possible with more computation at inference time.
✓ A3. Do the abstract and introduction summarize the paper's main claims?
Left blank.
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✓ **Did You Use Or Create Scientific Artifacts?** Section 4 (Experiments).
✓ B1. Did you cite the creators of artifacts you used?
Section 4 (experiments).
✗ B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
Fairseq is under MIT license which is one of the permissive licenses.
✗ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
The code will be released under MIT license.
✗ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
The used data were public datasets. (Namely WMT14 and WMT17)
B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
Not applicable. Left blank.
B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
Not applicable. Left blank.
## C ✓ **Did You Run Computational Experiments?** Section 4 (Experiments)
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
Appendix E contains computational cost. Since we did not introduce new model and distilled existing ones the details of those models can be found in the original papers.
The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance.
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Section 4 (experiments) and Appendix C.
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
It is transparent that a single run is reported.
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
Section 4 (experiments). We used Fairseq.
## D ✗ **Did You Use Human Annotators (E.G., Crowdworkers) Or Research With Human Participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
Not applicable. Left blank.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? Not applicable. Left blank.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
Not applicable. Left blank.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
Not applicable. Left blank.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
Not applicable. Left blank. |
jung-etal-2023-retrieval | Retrieval-augmented Video Encoding for Instructional Captioning | https://aclanthology.org/2023.findings-acl.543 | Instructional videos make learning knowledge more efficient, by providing a detailed multimodal context of each procedure in instruction.A unique challenge posed by instructional videos is key-object degeneracy, where any single modality fails to sufficiently capture the key objects referred to in the procedure. For machine systems, such degeneracy can disturb the performance of a downstream task such as dense video captioning, leading to the generation of incorrect captions omitting key objects. To repair degeneracy, we propose a retrieval-based framework to augment the model representations in the presence of such key-object degeneracy. We validate the effectiveness and generalizability of our proposed framework over baselines using modalities with key-object degeneracy. | # Retrieval-Augmented Video Encoding For Instructional Captioning
Yeonjoon Jung♠ Minsoo Kim♠ **Seungtaek Choi**♣
Jihyuk Kim♡ Minji Seo♠ **Seung-won Hwang**∗♠
♠Seoul National University ♣Riiid AI Research ♡Yonsei University
{y970120, minsoo9574, minjiseo, seungwonh}@snu.ac.kr
{seungtaek.choi}@riiid.co {jihyukkim}@yonsei.ac.kr
## Abstract
Instructional videos make learning knowledge more efficient, by providing a detailed multimodal context of each procedure in instruction. A unique challenge posed by instructional videos is *key-object degeneracy*, where any single modality fails to sufficiently capture the key objects referred to in the procedure. For machine systems, such degeneracy can disturb the performance of a downstream task such as dense video captioning, leading to the generation of incorrect captions omitting key objects.
To repair degeneracy, we propose a retrievalbased framework to augment the model representations in the presence of such key-object degeneracy. We validate the effectiveness and generalizability of our proposed framework over baselines using modalities with key-object degeneracy.
## 1 Introduction
Instructions, which provide detailed information about the procedures required to achieve the desired goal, are a central part of how humans acquire procedural knowledge. Instructions decompose a sequence of complex procedures into key objects and the associated actions expressed as verbs. As machine systems increasingly aim to provide realworld utility for humans, their ability to translate human goals into natural language instructions to follow becomes essential (Ahn et al., 2022). In this light, instructional captioning, summarizing *instructional videos* into a set of succinct instructions, is thus an important component of enabling the distillation of human-level procedural knowledge to machines.
For instructional captioning, we focus on the task of dense video captioning (DVC) (Krishna et al., 2017) which aims to produce a precise set of instructions from visual input (e.g. instructional videos). For example, to illustrate the procedure
∗Corresponding author.
s 2in Figure 1, the instructional video details the procedure, while simultaneously showing how this action is performed. DVC system can then summarize this video into a set of salient captions, forming a set of instructions that enhances the visual demonstration with informative text descriptions.
While the task of extracting a salient instruction from complex visual input can be effortless for humans, it presents a unique challenge for machine systems, which we denote as *key-object degeneracy*. That is, machine systems can often fail at the fundamental task of key-object recognition, which is core to instructions. This is due to the fact that frequently, key objects are not easily recognized from either images (Shi et al., 2019a; Zhou et al.,
2018a) or transcripts of the frames (Huang* et al.,
2018) during a demonstrative and conversational presentation. While humans can impute such missing information by flexibly aggregating across various available modalities, key-object degeneracy can cause critical failures in existing DVC systems.
Input Modality Recognizability
Image (X) 56.07
+Transcript (*X, T*) 63.16
+Instructional Script (*X, T, R*) 74.60
Table 1: Statistics of the key objects in recognizable
forms, recognizability.
To quantify the degeneracy in instructional videos, we first conduct a study measuring the number of recognizable key objects from the images X
and transcripts T in one of our target instructional video corpora, YouCook2 (Zhou et al., 2018a)
1. We define *recognizability* as the percentage of key objects which are recognizable in at least one modality, and present the statistics in Table 1.
From the result in Table 1, we can observe that many key objects are not recognizable from the image alone. Though we can observe that recognizability improves when the image is augmented 1We provide detail of computing degeneracy in Sec. 7.2
![1_image_0.png](1_image_0.png)
with the temporally paired transcript, this does not entirely resolve key-object degeneracy, as nearly 40% of key objects remain unrecognized. For instance, in Figure 1, the key object of procedure s 3 ,
chicken , is not recognizable from either the image or transcript of Frame 3.
Having different reasons for degeneracy, each modality has distinct methods to make key objects recognizable: 1) reducing occlusion of key objects in images or 2) reducing ambiguity by mentioning the key objects with nouns in text. Based on the preliminary study, we pursue the latter, and propose a disambiguation method based on retrieval from instructional scripts, such as recipes for cooking.
The sufficient condition of instructional scripts for our method is that they contain disambiguated key objects, and provide adequate coverage of valid (key-object, action) pairs. For the YouCook2 dataset, we quantitatively confirm the efficacy of instructional scripts in repairing degeneracy, in Table 1, where it is shown that the instructional scripts can successfully make the unrecognized key objects recognizable. For example, in Figure 1, the unrecognizable key object in the third and fourth frames, chicken , becomes recognizable after the procedural sentence r 3 ∈ R S (middle left of Figure 1) explicitly mentioning "chicken" is paired with the image and transcript.
While such well-aligned procedural sentences can reduce key-object degeneracy, in most cases, there exists no alignment supervision between the video frame and procedural sentences, as the two are generated independently. Our solution is to generate such alignment using a machine retriever. However, key-object degeneracy in the video frame negatively affects the existing retrieval systems as well, e.g. , image-text retrieval, from retrieving the aligned procedural sentence.
Inspired by the contextualized understanding of previous/following frames (Qi et al., 2022 ), our distinction is to guide the retriever to achieve keyobject-aware alignment with procedural sentences, by conducting retrieval based on aggregating interframe information in an object-centric manner. For this goal, we propose Key Object aware Frame Contrastive Learning (KOFCL) for improved differentiation of nearby frames of distinctive procedures, and more robust contextualization of the key object beyond a single procedure.
Our major contributions are threefold: 1) propose a temporal description retrieval task to find the procedural sentences procedurally aligned to each frame in instructional videos, 2) propose a key object-aware frame contrastive learning objective (KOFCL) to improve temporal description retrieval, and 3) show the improved temporal description retrieval repairs degeneracy and improves DVC significantly.
## 2 Preliminaries And Related Work
We first introduce our target domain, namely, instruction, and its representations and previous research on their characteristics (§2.1). Our goal is to improve the encoding of video frame G (§2.2).
Then we provide a concise overview of our downstream task, DVC (§2.3).
## 2.1 Target Domain: Instruction And Video, Script
Instruction Instruction refers to structured knowledge explaining how to perform a wide variety of real-world tasks. An instruction S can be represented as a list of N procedures, S = {s j}
N
j=1, where each procedure describes the action required for the task, as a tuple of verb a jand key object set Oˆj, s j = (a j, Oj). For example, the instruction for cooking *chicken parmesan* would be a list composed of tuples such as (coat, [chicken, mixture])
which is written in text or shown in the video for human consumption as depicted in Figure 1.
Instructional Video Instructional video, denoted as VS, is a video explaining instruction S. It consists of a list of frames, VS = {v j i|i ≤
|VS| and j ≤ N}. The procedure s jis represented in the key clip k j, the subset of video frames starting at b jand ending at e j. Then, the i-th frame, v j i
,
represents the corresponding procedure s j when it is included in the key clip k j or the null procedure s 0if it is not covered by any key clip. For example, Frame 1 in Figure 1 explains its procedure by showing and narrating its key objects in its image x j i and transcript t j i
.
It is widely known that degeneracy is prevalent in each modality of instructional videos (Zhou et al.,
2018a). Specifically, this indicates a large difference between the key object set Ojand the key objects recognizable in the frame v j i
, Oˆj i
. There have been previous works that discovered and addressed the degeneracy in a single modality of image (Shi et al., 2019b) or transcript (Huang* et al.,
2018). However, our approach aims to repair the degeneracy in both modalities, by leveraging the procedural sentences from instructional transcripts.
Instructional Script An instructional script RS = {r j}
N
j=1 consists of procedural sentences where each procedural sentence r jrepresents its corresponding procedure s jexplicitly as words describing the action a jand the key objects Oj.
Representing procedures in disambiguated form, previous works construct instruction S from its corresponding instructional script RS (Lau et al.,
2009; Maeta et al., 2015; Kiddon et al., 2015). We propose to adopt RS to disambiguate the unrecognizable key object for mitigating degeneracy.
## 2.2 Baseline: Representation G J
i A baseline to overcome degeneracy is to encode the temporally paired image and transcript (x j i
, tj i
) into joint multi-modal representation g j i
. For such purpose, we leverage pretrained LXMERT (Tan and Bansal, 2019)
2, as it is widely adopted to encode the paired image transcript of video frame (Kim et al., 2021; Zhang et al., 2021). Specifically, the transcript t j i and image x j i of the video frame v j i are fed together to pretrained LXMERT. We utilize the representation at the special [CLS] token as the frame representation g j i as follows:
$$g_{i}^{j}=L X M E R T(x_{i}^{j},t_{i}^{j}).\qquad\qquad(1)$$
$\mathbf{c}=\mathbf{f}\cdot\mathbf{j}$, .
We use the resulting representation G = {g j i|i ≤
|VS| and j ≤ N} as features of individual frames that will be fed to DVC systems.
## 2.3 Target Task: Dvc
Given an instructional video VS describing instruction S, DVC consists of two subtasks of key clip extraction and caption generation.
Key Clip Extraction Given a sequence of video frames, key clip extraction module predicts key clip ˆk = (ˆb, eˆ) by regressing its starting/ending time ˆb and eˆ (Zhou et al., 2018a; Wang et al., 2021). It also outputs the likelihood Pk( ˆk) estimating the predicted clip ˆk to be a key clip which is further used to select the key clips for caption generation.
Caption Generaton The caption generation task aims to generate caption cˆ describing the predicted key clip ˆk. The predicted key clip ˆk is fed to the 2We refer to a survey (Du et al., 2022) for overview of multi-modal representation techniques, as our focus is not on enhancing multi-modal representation.
captioning module which generates each word wˆi by estimating the probability distribution over vocabulary set W conditioned on key clip ˆk:
wˆi = *argmax*w∈W P(w|w≤i−1, ˆk). (2)
We adopt EMT and PDVC, DVC systems which are widely adopted or SOTA, as our DVC systems.
We refer (Zhou et al., 2018b; Wang et al., 2021)
for further details, as our focus is not on improving downstream task models, but on repairing the degeneracy of input instructional videos, which is applicable to any underlying models.
## 3 Our Approach
Building on preliminaries, we now describe our retrieval augmented encoding framework in detail.
First, we explain how instructional scripts can contribute to repairing the degeneracy (§3.1). Our framework combines a cross-modal TDR module
(§3.2), which can aggregate the key objects across frames (§3.3), to build robust multi-modal representations which repair key-object degeneracy.
## 3.1 Representation Augmentation With Procedural Sentence
Our hypothesis to mitigate degeneracy is that a procedural sentence r j i in RS represent a procedure s˜
j i similar to the procedure s j of each frame v j i
. Explaining a similar procedure, the key object set O˜j i of r j i has common key objects sufficient to repair degeneracy. Our first distinction is to augment the individual frame representation g j i with the representation d j i of such procedural sentence r j i
. Thus, when procedural sentence r j i is provided with video frame v j i
, more key objects become recognizable,
$$n(O_{i}^{j}\cap O^{j})\leq n((O_{i}^{j}\cup\tilde{O}_{i}^{j})\cap O^{j}),$$
j), (3)
and the degeneracy in video frames can be reduced.
## 3.2 Temporal Description Retrieval (Tdr) Cross-Modal Retrieval For Aligning Sentences
with Frames The preliminary study in Sec. 3.1 establishes the potential of procedural sentences to repair key-object degeneracy. However, it assumes the ideal scenario where the procedure described by the procedural sentence r j, matches that of the frame v j i
, which we call procedural alignment. However, such procedural alignment between procedural sentences and frames is not available in practice, as data of the two modalities are generated completely independently.
$$i{\leq}i{-}1,{\hat{k}}).$$
We, therefore, propose a cross-modal retrieval task, Temporal Description Retrieval (TDR), as a solution to *learn* such procedural alignments. We train a frame-sentence retriever, ϕ(v j i
, RS) to take the query frame v j i from video VS, and the instructional script RS as input, and predict, for every procedural sentence r i ∈ RS, their relevance.The goal of ϕ is to find the procedural sentence rˆi which best explains the procedure s j.
Here, it is important to note that the retrieval task itself is also susceptible to key-object degeneracy, making TDR more challenging. In the presence of key-object degeneracy, single-modality (image or text) encodings can exacerbate this problem, due to a potential information imbalance between the two modalities. Therefore, we formulate the crossmodal TDR as retrieving text encodings using a joint image-text query, using the LXMERT joint image-text representation, g j i
.
Finally, we augment the feature vector g j i of the frame with vector representation d j i of the retrieved procedural sentence rˆi as depicted in Figure 1.
Dense Retrieval for Efficiency There can be several options to implement the frame-sentence retriever ϕ(v j i
, RS). Existing architectures fall into two categories, cross retrievers and dense retrievers (Humeau et al., 2020). These differ in how the interaction between the query frame v j i and the procedural sentence rlis modeled.
As TDR conducts retrieval for each frame in VS, efficiency should be prioritized, and we mainly consider the dense retrieval architecture. First architecture, the cross retrieval requires the exhaustive computation of O(|VS*| × |*RS|) as the v j i and rl interact within a single neural network. However, the dense retrieval conducts the retrieval with little computation cost, at O(|VS| + |RS|), by reusing the encoding of the v j i and rl.
Specifically, the dense retriever consists of two distinct encoders ΩV and ΩR, which encode the query frame v j i and the procedural sentence rlindependently. Then, the interaction between v j i and rlis modeled as a simple dot product operation, resulting in retrieval as follows:
$${\hat{r}}_{i}=\operatorname{argmax}_{r_{l}}\Omega_{V}(v_{i}^{j})\cdot\Omega_{R}(r_{l}).$$
For training, we adopt the contrastive learning objective (Mnih and Kavukcuoglu, 2013), denoted by LTDR, that guides the retriever to assign larger relevance for the gold procedural sentence r
+ than that of negative procedural sentences r−:
$$\mathcal{L}_{\text{TDR}}=-\log\frac{\exp(\Omega_{V}(v_{i}^{j}).\Omega_{R}(r^{+}))}{\exp(\Omega_{V}(v_{i}^{j}).\Omega_{R}(r^{+}))+\sum\exp(\Omega_{V}(v_{i}^{j}).\Omega_{R}(r^{-}))},\tag{5}$$
We utilize the caption c jas the gold procedural sentence r
+, as there is no available gold procedural sentence, and this approach is reported to be effective in previous work (Gur et al., 2021). We also utilize in-batch negatives, treating all other gold procedural sentences representing different procedures from the identical instructional video, as negative procedural sentences.
## 3.3 Key Object-Aware Frame Contrastive Learning (Kofcl)
The key aspect separating instructional videos from standard image-text or textual retrieval is the additional temporal dimension. In order to repair keyobject degeneracy, it is critical to aggregate interframe information across this temporal dimension.
To illustrate, consider the key object of frames 3 and 4 in Figure 1, "chicken", which is not recognizable from either the transcript or the images of Frame 3 and 4, but is clearly recognizable in both image v 1 1 and transcript t 11 of Frame 1.
We adopt LSTM as a sequence encoder similar to existing video works (Zhou et al., 2018a)
and build LXMERT-I
2 which encodes precedent/following frames, g
≤j
≺i and g
≥j
≻i
, and outputs the resulting query frame encoding ←→g j i as follows:
$$\stackrel{\leftrightarrow}{g}_{i}^{j}=F C N(\stackrel{\leftarrow}{L}\overrightarrow{S T M}(g_{i}^{j},g_{\preceq i}^{\leq j},g_{\succ i}^{\geq j})).\quad\quad(6)$$
However, the locality of the frame-level procedure annotations biases such model to simply encode *temporally local* inter-frame information (Wang et al., 2020), not the key objects. Specifically, the procedures are represented as temporally local frames and such local frames of identical procedures can contribute to repair degeneracy.
However, as all local frames are not of identical procedures, e.g. boundaries of the key clips, encoding such frames cannot repair degeneracy and rather confuse the models to consider as the preceding/following procedures. For Frame 3 in Figure 1, temporally local inter-frame information of Frame 2 and 3 is redundant with the given frame, adding little new information. Even worse, confusing that Frame 2 and 3 describe the identical procedure, the model misaligns Frame 3 to the procedural sentence r 2 of the different procedure. On the other hand, identifying the key object which appears in Frame 1, and binding this information into the encoding for Frame 3, would successfully repair the key-object degeneracy of Frame 3.
A recent approach, frame contrastive learning
(FCL) (Dave et al., 2022), partially addresses the temporal locality bias. It regards the arbitrary frame pair (v j i
, vm n) as positive when they represent identical procedure and negative otherwise as follows:
$$\mathbf{1}(v_{i}^{j},v_{n}^{m})={\begin{cases}1,&{\mathrm{if}}j=m\\ 0.&{\mathrm{otherwise}}\end{cases}}$$
$$\mathbf{\Pi}$$
(7)
What makes FCL address the temporal locality bias is that it supervises the difference in the procedures between the local frames so that local frames of different procedures, such as Frame 2 for given Frame 3 in Figure 1, can be less aggregated.
Then, the frame encoder is supervised to map the frames of identical procedures close together in the representation space, while pushing away those of different procedures by FCL loss, Laux(v j i
, vm n),
defined as follows:
$$\begin{array}{c}{{y_{i n}=\sigma(\stackrel{\longleftrightarrow}{g}_{\ i}^{j}\cdot W_{a u x}\cdot\stackrel{\longleftrightarrow}{g}_{\ n}^{m})}}\\ {{\mathcal{L}_{a u x}(v_{i}^{j},v_{l}^{k})=B C E(1(v_{i}^{j},v_{n}^{m}),y_{i n}),}}\end{array}$$
(8) (9) $\frac{1}{2}$
n) (8)
n), yin), (9)
where σ is sigmoid function and Waux is parameter of bilinear layer. Finally, the retriever is optimized to simultaneously minimize LTDR and Laux:
$${\mathcal{L}}={\mathcal{L}}_{\mathrm{TDR}}+\lambda_{\mathrm{aux}}{\mathcal{L}}_{\mathrm{aux}},$$
$$(10)$$
where λaux is a hyper-parameter weighing contribution of Laux during training.
However, FCL is limited to contextualizing local frames of identical procedure as the inter-frame information. To extend such contextualization beyond the single procedure, we propose key objectaware frame contrastive learning (KOFCL), which encourages contextualizing the frames of different procedures when they share common key objects, based on a globally shared notion of key objects.
The clear advantage of such contextualization is that it enables retrieving the correctly aligned procedural sentence, even when key objects are hardly recognizable in the query frame, by leveraging keyobject information. For example, the missing key object "chicken" of Frames 3 and 4 in Figure 1 can be found in Frame 1 of procedure s 1, where Frames 1, 3, and 4 will be encouraged to share similar representations through KOFCL. More concretely, we label the frame pair v j i and v m nas positive when they have common key objects. To measure how many key objects a frame pair shares, we computed the intersection of union (IoU) between the key object set of frame pair3as follows:
$$\mathrm{IoU}_{o b j}(v_{i}^{j},v_{n}^{m})=\frac{n(O^{j}\cap O^{m})}{n(O^{j}\cup O^{m})}.\qquad(11)$$
$\hat{\mathbf{i}}$ .
$\cos\lambda$ ...
Using IoUobj (v j i
, vm n), we labeled the frame pair, v j i and v m m, when they share key objects over predefined threshold µ as follows:
$$\mathbb{1}_{obj}(v_{i}^{j},v_{n}^{m})=\begin{cases}1,&\text{if IoU}_{obj}(v_{i}^{j},v_{n}^{m})>\mu\\ 0,&\text{otherwise}\end{cases}\tag{12}$$
Converting the FCL label in Eq.(7) into our proposed label in Eq.(12), KOFCL supervises to map frame pair, v j i and v m n
, close when they not only describe the identical procedure but also share key objects. Thus, the retriever can build a more robust understanding of the key objects in the query frame v j i with key object aware inter-frame information.
## 4 Experimental Setup 4.1 Dataset
We used two distinct instructional video datasets, YouCook2 (Zhou et al., 2018a), a dataset of instructional cooking videos and IVD (Alayrac et al.,
2017), a dataset of instructional videos with 5 distinct goals such as CPR, jump the car. As each video provides its goals, we collected the instructional scripts by querying its goal to the web archive4for YouCook2 following previous work (Kiddon et al., 2015) and the Google search engine for IVD dataset. Our instructional script collection contains an average of 15.33 scripts with 10.15 sentences for each goal in YouCook2 and 1 instructional script with an average of 7.4 sentences for each goal in IVD dataset. We used transcripts generated by YouTube ASR engine following previous works (Xu et al., 2020; Shi et al., 2019a, 2020).5 4.2 Evaluation Settings TDR We evaluated TDR in two distinctive settings to utilize both gold captions and our collected instructional scripts. First, we report the recall metric (R@K) of the gold captions, where all the 3Human-annotated key object is limited to subset of videos.
Therefore, we applied pos-tagging on the groud-truth caption and filtered out the nouns and proper nouns.
4www.allrecipes.com 5We provide further details of our datasets in Appendix 7.4 captions in the same video are considered candidates for retrieval. Second, we evaluated TDR
performance on our collected instructional scripts using NDCG*ROUEGE*−L metric (Messina et al.,
2021a,b). It replaces the relevance annotation between the query frame and procedural sentences with lexical similarity score, ROUGE-L, between gold captions and procedural sentences. We report each metric on top-1/3/5 retrieval result. Especially, for recall metrics, we mainly considered the top-1 retrieval result as our priority is to address key object degeneracy. Specifically, retrieving sentences of different procedures containing the same key objects may result in a slightly lower R@3,5.
DVC For the caption generation of DVC, following convention (Krishna et al., 2017; Zhou et al.,
2018b), we report lexical similarity of generated captions with gold captions, using BLEU@4 (Papineni et al., 2002), METEOR (Banerjee and Lavie, 2005), CIDEr (Vedantam et al., 2015), and RougeL (Lin, 2004), abbreviated as B-4, M, C, and R. For the key clip extraction, we report the average recall of the predicted key clips denoted as AR following convention (Escorcia et al., 2016; Zhou et al.,
2018b). For every metric, we provide the average and standard deviation of 5 repetitive experiments.
5 Results We now present our experimental results, aiming to address each of the following research questions:
RQ1: Is our cross-modal retrieval using joint image-text query more effective than standard retrieval approaches for TDR?
RQ2: Does KOFCL address key-object degeneracy in TDR, and help the retriever to build a robust understanding of key objects?
RQ3: Does retrieval-augmentation using procedural sentences improve DVC by repairing key-object degeneracy?
## 5.1 Rq1: Effectiveness Of Joint Image-Text Query Formulation For Tdr
Query Encoder Input R@1 R@3 R@5
BM25 t
j
i 35.02 59.34 74.88
BERT t
j
i 41.45 72.4 86.95
TERAN x
j
i 39.73 72.39 86.75
NAAF x
j
i 39.37 72.89 88.17
LXMERT x
j i
, tj
i 47.30 78.50 91.14
LXMERT (NAIVE DISAMB.) x
j
i
, τ
j
i 44.75 77.31 90.42
LXMERT-I
2+KOFCL (Ours) x
j
i
, tj
i 56.83 84.49 94.45
Table 2: Recall (R@1,3,5) for Youcook2 Retrieval with
different query frame modality
| Dataset | YouCook2 | IVD | | | | | | | |
|---------------|------------|-------|-------|-------|-------|-------|-------|-------|-------|
| Metric | NDCG | R@K | R@K | | | | | | |
| Query Encoder | K=1 | K=3 | K=5 | K=1 | K=3 | K=5 | K=1 | K=3 | K=5 |
| LXMERT | 39.56 | 41.93 | 43.50 | 47.30 | 78.50 | 91.14 | 30.83 | 62.10 | 78.54 |
| 2 | 41.90 | 44.21 | 45.99 | 55.24 | 85.86 | 95.09 | 40.35 | 77.77 | 89.83 |
| LXMERT-I +FCL | 42.01 | 44.25 | 45.82 | 55.88 | 85.55 | 94.89 | 40.51 | 74.46 | 87.15 |
| +KOFCL (OURS) | 42.73 | 44.92 | 46.50 | 56.83 | 84.49 | 94.45 | 43.42 | 76.58 | 87.86 |
![6_image_0.png](6_image_0.png)
To verify the effectiveness of our joint imagetranscript query formulation for TDR, we compare our approach with baselines consisting of existing textual and image-text retrieval systems as follows:
- BM25 (Robertson, 2009) and BERT (Devlin et al., 2019) are widely used approaches in text retrieval. We adopt them as a baseline using the transcript as a query.
- TERAN (Messina et al., 2021a) and NAAF (Zhang et al., 2022) are the state-ofthe-art image-text retrievers. We adopt them as baselines using the image x j i as a query.
Table 2 shows TDR result of the baselines and our joint image-text query formulation LXMERT
for the YouCook2 dataset. We can observe that baselines using single modality queries, i.e. BM25 or TERAN, are insufficient for finding the aligned procedural sentence, with R@1 score lower than 40%. LXMERT shows higher TDR results with large margins over baselines in every metric, confirming the effectiveness of our proposed joint image-transcript query. For comparison, we also include the TDR result of our full model, which further improves significantly over LXMERT.
Additionally, we compare a straightforward method to repair degeneracy, by disambiguating pronouns in transcripts. Following previous work (Huang* et al., 2018), we use a co-reference module (Gardner et al., 2017) to convert transcripts into their disambiguated versions, τ j i
. Interestingly, we observe a degradation of TDR in every metric.
We hypothesize that the co-reference resolution introduces noise from several sources, including the module's inaccuracy itself, but also incorrect pronoun resolution using key objects belonging to other, adjacent procedures.
## 5.2 Rq2: Kofcl Contextualize Key Objects And Improves Tdr.
Next, we evaluate the effectiveness of inter-frame information, in conjunction with KOFCL, in improving the performance of TDR. In Table 3, we report the respective results of TDR on the YouCook2 and IVD datasets, with varying inter-frame information supervision approaches.
First, on both datasets, we observe a large improvement of LXMERT-I
2 over LXMERT, reflecting the importance of inter-frame information for TDR. Next, we focus on the effect of jointly supervising LXMERT-I
2 with FCL or KOFCL.
When LXMERT-I
2supervised by FCL, the increase in R@1 is negligible. In contrast, when supervised with our proposed KOFCL, we can observe a meaningful improvement in R@1, on both datasets. These results indicate that KOFCL improves TDR by capturing key-object aware interframe information in a generalizable manner.
| Query Encoder | R@1 |
|-----------------|-------|
| LXMERT-I 2 | 55.04 |
| +FCL | 55.16 |
![6_image_1.png](6_image_1.png)
Table 4: Recall@1 score on the isolated set.
In order to further verify that KOFCL contextualizes key objects and repairs key-object degeneracy, we collect an isolated subset of YouCook2, where nearby frames are prone to confuse frame-sentence retrievers with a temporal locality bias. Specifically, we collect the query frames v j i whose corresponding procedure s j has distinct6 key objects from neighboring procedures s j−1and s j+1.
We report the R@1 score on this isolated set in Table 4. Whereas FCL fails to improve over LXMERT-I
2, R@1 improves meaningfully when the frame-sentence retriever is supervised with our proposed KOFCL. These results indicate that KOFCL contributes to the contextualization of key objects, and alleviates the temporal locality bias.
## 5.3 Rq3: Retrieved Procedural Sentences Repair Degeneracy And Improve Dvc
Next, we evaluate the impact of repairing degeneracy on improving downstream task of dense video 6We considered procedure s jto have distinct key objects with neighboring procedures when their IoUobj defined in Eq.(12) is lower than 0.05
| DVC Model | EMT | PDVC | | | | | | | | |
|--------------------|------------|-----------|------------|-----------|-----------|-----------|-----------|-----------|-----------|-----------|
| Representation | Captioning | KCE | Captioning | KCE | | | | | | |
| M | C | R | B-4 | AR | M | C | R | B-4 | AR | |
| i | 7.140.20 | 18.201.09 | 20.130.52 | 0.800.09 | 65.912.95 | 6.210.42 | 28.762.61 | 14.460.75 | 1.180.16 | 17.171.09 |
| i | 8.030.25 | 21.680.61 | 21.950.80 | 1.000.08 | 66.552.99 | 6.800.44 | 31.222.10 | 15.580.94 | 1.290.14 | 19.061.27 |
| 2 + KOFCL 8.370.25 | 24.370.67 | 22.950.44 | 1.400.17 | 68.931.72 | 7.170.15 | 33.860.78 | 16.550.45 | 1.320.13 | 20.160.83 | |
Table 5: BLEU-4, METEOR, CIDEr, Rouge-L for captioning, Average Recall (AR) for Key Clip Extraction (KCE).
captioning, which is the main objective of this work.
We evaluate our proposed approach, which uses a trained retriever to retrieve procedural sentences from instructional scripts to augment frame representations, with a baseline without any consideration of key-object degeneracy, as well as an advanced baseline, which augments frame representations using the disambiguated version of the transcript τ j i
, instead of procedural sentences.
We first report the DVC performance on YouCook2 in Table 5. The advanced baseline, which augments the baseline representation g j i with d j i using τ j i
, improves performance on both captioning and key clip extraction, showing that DVC can be improved by augmenting frame representations with disambiguated key-object information. Notably, our proposed framework, which augments using procedural sentences retrieved using the LXMERT-I
2 + KOFCL retriever, significantly outperforms both baselines, on all metrics measured, for both tasks. These results indicate that by repairing key-object degeneracy, our retrieved procedural sentences are a better source to augment frame representations for DVC. Moreover, our augmented representations improve results on both EMT and PDVC downstream models, which confirms that our method can be easily applied to improve standard DVC systems, without dramatic modification of the downstream task models.
Table 6: BLEU-4, METEOR, CIDEr, Rouge-L for captioning, Average Recall (AR) for Key Clip Extraction
(KCE).
| Representation | Captioning | KCE | | |
|-------------------------------------------------------------------------------------|-------------------------------------------------|-------|-----|----|
| M | C | R | B-4 | AR |
| g i | 7.140.20 18.201.09 20.130.52 0.800.09 65.912.95 | | | |
| j j j | j | | | |
| g i ; d i w/ τ i w/ LXMERT | 7.690.21 20.400.69 21.910.49 1.120.15 66.851.08 | | | |
| i | 8.030.25 21.680.61 21.950.80 1.000.08 66.552.99 | | | |
| g j ; d j i j j i w/ LXMERT-I g i ; d 2 | 7.970.33 21.801.21 22.340.50 1.200.15 67.670.25 | | | |
| g j ; d j i w/ LXMERT-I 2 + KOFCL 8.370.25 24.370.67 22.950.44 1.400.17 68.931.72 i | | | | |
Table 7: Dense video captioning results on IVD dataset.
METEOR, CIDEr, Rouge-L for captioning, Average Recall (AR) for Key Clip Extraction (KCE).
| Representation | Captioning | KCE | | |
|-----------------------------------------------|--------------------------------|-----------|-----------|-----------|
| M | C | R | AR | |
| j g i | 9.20.73 | 61.693.73 | 14.880.61 | 36.072.08 |
| g j ; d j i w/ LXMERT-I 2 | 16.011.26 102.656.84 24.521.13 | 27.830.97 | | |
| i g j ; d j i w/ LXMERT-I 2 + KOFCL 19.760.85 | 123.694.88 | 29.790.96 | 37.971.61 | |
| i | | | | |
Next, we conduct an ablation study of the contribution of each of our framework components. In Tables 6 and 7, we report the results of DVC on YouCook2 and IVD respectively, using the EMT
model with various frame-sentence retrievers. The results confirm that the improvement in the retrieval outcomes translates to better downstream performance on DVC, with LXMERT-I
2and KOFCL
meaningfully improving DVC performance on both datasets. Also, our proposed retrieval augmentation method showed more improvement in the IVD dataset than YouCook2. The key difference between the Youcook2 and IVD datasets is that the IVD dataset is composed of more distinctive instructions, such as "jump the car", "re-pot the plant" and "make coffee", than YouCook2, which contains only cooking instructions. For such distinctive instructions, knowing the key objects can act as clarifying information about the instruction and thus can help generate more accurate captions.
Table 8: CIDEr scores results on definite/degenerative sets.
Finally, to verify that the improvement in DVC
performance is attributable to the repair key-object degeneracy, we divided the test set into definite and degenerative sets and compared the results of baseline representation g j i and our augmented representation g j i
; d j i w/ LXMERT-I
2 + KOFCL.
Specifically, the caption c jis considered degenerative when the video frames corresponding to the ground-truth key clip k j have lower than 60% recognizability of image and transcript, and definite when the recognizability is higher than 80%. In Table 8, in contrast to representation g j i
, whose CIDEr score decreases on the degenerative set, our augmented representation g j i
; d j i w/ LXMERT-I
2 +
KOFCL increases the score on the degenerative set, showing that our augmented representation using retrieved procedural sentences is effective in re-
| Representation | Definite | Degenerative |
|-----------------------------------|------------|----------------|
| j g i | 16.38 | 13.61 |
| g j ; d j i w/ LXMERT-I 2 + KOFCL | 15.33 | 17.15 |
| i | | |
solving the key-object degeneracy in instructional videos.
## 6 Conclusion
We proposed retrieval-augmented encoding, to complement video frames, by repairing degeneracy and considering correlations between steps.
Our evaluation results validated that our proposed framework improves existing DVC systems significantly.
## Limitations
Our method overcomes degeneracy in instructional videos under the assumption of the existence of textual instructional scripts describing the exact instructions of instructional videos. Thus, our method is applicable to instructional videos having such recipe documents. However, we note that similar documents exist for various types of instructions other than cooking, such as topics in other datasets (Alayrac et al., 2017), *e.g.*, how to jump start a car, or change a tire.
## Acknowledgements
This research was supported by MSIT (Ministry of Science and ICT), Korea, under the ITRC (Information Technology Research Center) support program
(IITP-2023-2020-0-01789) and grants [NO.20210-0268, AI Hub, SNU], [No.2022-0-00077, AI
Technology Development for Commonsense Extraction, Reasoning, and Inference from Heterogeneous Data], and [NO.2021-0-01343, AI Graduate School] supervised by the IITP (Institute for Information & Communications Technology Planning
& Evaluation).
## References
Michael Ahn, Anthony Brohan, Noah Brown, Yevgen Chebotar, Omar Cortes, Byron David, Chelsea Finn, Chuyuan Fu, Keerthana Gopalakrishnan, Karol Hausman, Alex Herzog, Daniel Ho, Jasmine Hsu, Julian Ibarz, Brian Ichter, Alex Irpan, Eric Jang, Rosario Jauregui Ruano, Kyle Jeffrey, Sally Jesmonth, Nikhil J Joshi, Ryan Julian, Dmitry Kalashnikov, Yuheng Kuang, Kuang-Huei Lee, Sergey Levine, Yao Lu, Linda Luu, Carolina Parada, Peter Pastor, Jornell Quiambao, Kanishka Rao, Jarek Rettinghouse, Diego Reyes, Pierre Sermanet, Nicolas Sievers, Clayton Tan, Alexander Toshev, Vincent Vanhoucke, Fei Xia, Ted Xiao, Peng Xu, Sichun Xu, Mengyuan Yan, and Andy Zeng. 2022. Do as i can, not as i say: Grounding language in robotic affordances.
Jean-Baptiste Alayrac, Josef Sivic, Ivan Laptev, and Simon Lacoste-Julien. 2017. Joint discovery of object states and manipulation actions. In *International* Conference on Computer Vision (ICCV).
Peter Anderson, Xiaodong He, Chris Buehler, Damien Teney, Mark Johnson, Stephen Gould, and Lei Zhang.
2018. Bottom-up and top-down attention for image captioning and visual question answering.
Satanjeev Banerjee and Alon Lavie. 2005. Meteor: An automatic metric for mt evaluation with improved correlation with human judgments. In *Proceedings of* the acl workshop on intrinsic and extrinsic evaluation measures for machine translation and/or summarization, pages 65–72.
Ishan Dave, Rohit Gupta, Mamshad Nayeem Rizve, and Mubarak Shah. 2022. Tclr: Temporal contrastive learning for video representation. Computer Vision and Image Understanding, 219:103406.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics.
Yifan Du, Zikang Liu, Junyi Li, and Wayne Xin Zhao.
2022. A survey of vision-language pre-trained models.
Victor Escorcia, Fabian Caba Heilbron, Juan Carlos Niebles, and Bernard Ghanem. 2016. Daps: Deep action proposals for action understanding. In *Computer Vision - ECCV 2016*, pages 768–784, Cham.
Springer International Publishing.
Matt Gardner, Joel Grus, Mark Neumann, Oyvind Tafjord, Pradeep Dasigi, Nelson F. Liu, Matthew Peters, Michael Schmitz, and Luke S. Zettlemoyer. 2017. Allennlp: A deep semantic natural language processing platform.
Shir Gur, Natalia Neverova, Chris Stauffer, Ser-Nam Lim, Douwe Kiela, and Austin Reiter. 2021. Crossmodal retrieval augmentation for multi-modal classification.
De-An Huang*, Shyamal Buch*, Lucio Dery, Animesh Garg, Li Fei-Fei, and Juan Carlos Niebles. 2018.
Finding "it": Weakly-supervised, reference-aware visual grounding in instructional videos. In IEEE
Conference on Computer Vision and Pattern Recognition (CVPR).
Samuel Humeau, Kurt Shuster, Marie-Anne Lachaux, and Jason Weston. 2020. Poly-encoders: Architectures and pre-training strategies for fast and accurate multi-sentence scoring. In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net.
Chloé Kiddon, Ganesa Thandavam Ponnuraj, Luke Zettlemoyer, and Yejin Choi. 2015. Mise en place: Unsupervised interpretation of instructional recipes.
In *EMNLP*.
Kyungho Kim, Kyungjae Lee, and Seung won Hwang.
2021. Instructional video summarization using attentive knowledge grounding. In Machine Learning and Knowledge Discovery in Databases. Applied Data Science and Demo Track - European Conference, ECML PKDD 2020, Proceedings, pages 565–569, Germany.
Ranjay Krishna, Kenji Hata, Frederic Ren, Li Fei-Fei, and Juan Carlos Niebles. 2017. Dense-captioning events in videos. In International Conference on Computer Vision (ICCV).
Tessa Lau, Clemens Drews, and Jeffrey Nichols. 2009.
Interpreting written how-to instructions. In Proceedings of the 21st International Joint Conference on Artificial Intelligence, IJCAI'09, page 1433–1438, San Francisco, CA, USA. Morgan Kaufmann Publishers Inc.
Chin-Yew Lin. 2004. Rouge: A package for automatic evaluation of summaries. In *Text summarization* branches out, pages 74–81.
Hirokuni Maeta, Tetsuro Sasada, and Shinsuke Mori.
2015. A framework for procedural text understanding. In *Proceedings of the 14th International Conference on Parsing Technologies*, pages 50–60, Bilbao, Spain. Association for Computational Linguistics.
Nicola Messina, Giuseppe Amato, Andrea Esuli, Fabrizio Falchi, Claudio Gennaro, and Stéphane Marchand-Maillet. 2021a. Fine-grained visual textual alignment for cross-modal retrieval using transformer encoders. ACM Trans. Multimedia Comput.
Commun. Appl., 17(4).
Nicola Messina, Fabrizio Falchi, Andrea Esuli, and Giuseppe Amato. 2021b. Transformer reasoning network for image- text matching and retrieval. In *2020* 25th International Conference on Pattern Recognition (ICPR), pages 5222–5229.
Andriy Mnih and Koray Kavukcuoglu. 2013. Learning word embeddings efficiently with noise-contrastive estimation. In *Advances in neural information processing systems*, pages 2265–2273.
Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In *Proceedings of the* 40th annual meeting of the Association for Computational Linguistics, pages 311–318.
Jiyang Qi, Yan Gao, Yao Hu, Xinggang Wang, Xiaoyu Liu, Xiang Bai, Serge Belongie, Alan Yuille, Philip Torr, and Song Bai. 2022. Occluded video instance segmentation: A benchmark. *International Journal* of Computer Vision.
S. Robertson. 2009. The Probabilistic Relevance Framework: BM25 and Beyond. Foundations and Trends®
in Information Retrieval, 3(4):333–389.
Zhiqiang Shen, Jianguo Li, Zhou Su, Minjun Li, Yurong Chen, Yu-Gang Jiang, and Xiangyang Xue. 2017.
Weakly supervised dense video captioning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
Botian Shi, Lei Ji, Yaobo Liang, Nan Duan, Peng Chen, Zhendong Niu, and Ming Zhou. 2019a. Dense procedure captioning in narrated instructional videos. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 6382–
6391, Florence, Italy. Association for Computational Linguistics.
Botian Shi, Lei Ji, Zhendong Niu, Nan Duan, Ming Zhou, and Xilin Chen. 2020. *Learning Semantic Concepts and Temporal Alignment for Narrated Video* Procedural Captioning, page 4355–4363. Association for Computing Machinery, New York, NY, USA.
Jing Shi, Jia Xu, Boqing Gong, and Chenliang Xu.
2019b. Not all frames are equal: Weakly-supervised video grounding with contextual similarity and visual clustering losses. In 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR),
pages 10436–10444.
Hao Tan and Mohit Bansal. 2019. Lxmert: Learning cross-modality encoder representations from transformers.
Ramakrishna Vedantam, C Lawrence Zitnick, and Devi Parikh. 2015. Cider: Consensus-based image description evaluation. In *Proceedings of the IEEE*
conference on computer vision and pattern recognition, pages 4566–4575.
Teng Wang, Ruimao Zhang, Zhichao Lu, Feng Zheng, Ran Cheng, and Ping Luo. 2021. End-to-end dense video captioning with parallel decoding.
Zhenzhi Wang, Ziteng Gao, Limin Wang, Zhifeng Li, and Gangshan Wu. 2020. Boundary-aware cascade networks for temporal action segmentation. In ECCV
(25), volume 12370 of Lecture Notes in Computer Science, pages 34–51. Springer.
Frank F. Xu, Lei Ji, Botian Shi, Junyi Du, Graham Neubig, Yonatan Bisk, and Nan Duan. 2020. A benchmark for structured procedural knowledge extraction from cooking videos.
Kun Zhang, Zhendong Mao, Quan Wang, and Yongdong Zhang. 2022. Negative-aware attention framework for image-text matching. In *2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition*
(CVPR), pages 15640–15649.
Yanhao Zhang, Qiang Wang, Pan Pan, Yun Zheng, Cheng Da, Siyang Sun, and Yinghui Xu. 2021. Fashion focus: Multi-modal retrieval system for video commodity localization in e-commerce. *Proceedings* of the AAAI Conference on Artificial Intelligence, 35(18):16127–16128.
Luowei Zhou, Chenliang Xu, and Jason J Corso. 2018a.
Towards automatic learning of procedures from web instructional videos. In *AAAI Conference on Artificial Intelligence*, pages 7590–7598.
Luowei Zhou, Yingbo Zhou, Jason J. Corso, Richard Socher, and Caiming Xiong. 2018b. End-to-end dense video captioning with masked transformer.
CoRR, abs/1804.00819.
## 7 Appendix 7.1 Implementation Details 7.1.1 Temporal Description Retrieval
For temporal description retrieval, we followed the convention of (Krishna et al., 2017; Zhou et al., 2018b; Shi et al., 2019a) and obtained the image frames from the video by down-sampling for every 4.5s. The obtained image frames are then fed to pre-trained object detector (Anderson et al., 2018)
to yield the sequence of object region features. For image encoder Ωv and the text encoder Ωr, we used the image encoder of pretrained LXMERT
and BERT-base-uncased (Devlin et al., 2019), respectively. For training temporal description retrieval, we used one video as a batch, resulting in all the sampled frames and recipe sentences in a batch coming from the same video. We adopt an Adam optimizer with a learning rate of 0.0001. We set the weighing contribution λaux in Eq. 10 to be 0.05 and the threshold µ for KOFCL to be 0.1, based on validation set result.
## 7.2 Computation Of Recognizability
To compute the joint recognizability of the image and transcript, instructional script, we first computed the recognizability in each modality. In the image, we considered the key objects to be recognizable when they are labeled to be inside the image without occlusion in human annotation (Shen et al.,
2017). In the textual modality, transcript and instructional script, the key objects are considered to be recognizable when they are lexically referred in transcripts or instructional scripts. Then, we considered the key objects to be recognizable when they are in the union of the recognizable key object set of each modality.
## 7.3 Ablation On Sequence Encoder
Here, we show the result of TDR with distinct sequence encoders. In Table 9, LSTM showed the
| Sequence Encoder | R@1 |
|--------------------|-------|
| CNN | 50.65 |
| TRANSFORMER | 43.69 |
| LSTM (OURS) | 55.04 |
Table 9: Recall@1 score with different sequence encoder.
highest R@1 score. While we adopted LSTM as our sequence encoder, our KOFCL is orthogonal to any sequence encoder and can be adapted to any existing sequence encoder.
## 7.3.1 Dense Video Captioning
EMT For the key clip extraction task, we follow the convention of (Zhou et al., 2018b) to use 16 different kernel sizes for the temporal convolution layer, *i.e.*, from 3 to 123 with the interval step of 8, which can cover the different lengths. We use a transformer encoder and decoder with 768 inner hidden sizes, 8 heads, and 2 layers which we fed context-aware recipe sentences and video frame features after concatenation. We adopt an AdamW
optimizer with learning rate of 0.00001 to train the model. The batch size of training is 12 and we use one RTX2080Ti GPU to train our model.
PDVC We use single transformer models with 768 inner hidden sizes, 12 heads, and 2 layers which we fed context-aware recipe sentences and video frame features after concatenation. We adopt an AdamW optimizer with learning rate of 0.00005 to train the model. The batch size of training is 1 and we use one RTX2080Ti GPU to train our model.
## 7.4 Dataset
We conducted experiments on the two distinct instructional video datasets, YouCook2 (Zhou et al.,
2018a), a dataset of instructional cooking videos and IVD dataset (Alayrac et al., 2017), a dataset of instructional videos with 5 distinct topics.
Though YouCook2 originally provides 2000 videos, as some videos are unavailable on YouTube, we collect the currently available videos, obtaining 1,356 videos. For the dataset split, we follow the original split ratio from (Zhou et al., 2018a)
to YouCook2: 910 for training, 312 for validation, and 135 for testing for YouCook2. For the IVD
dataset, we used 104 for training, 17 for validation, and 32 for testing.
This split is used for both TDR and DVC. Each video is labeled with starting and ending times of key clips, and their textual descriptions. For transcripts, we use YouTube's ASR engine. We collected the instructional documents from the web archive7for YouCook2 following previous work (Kiddon et al., 2015) and top-1 retrieved result from the google search engine for IVD dataset.
Our instructional document collection contains an average of 15.33 documents with 10.15 sentences for YouCook2 dataset and 1 instructional document with 20 sentences for IVD dataset.
## 7.5 Qualitative Results
Here, we provide the generated result of EMT without/with our retrieved recipes in Figure 2. In all examples, there exist the key objects hardly recognizable from the images which EMT fail to mention in the generated caption. However, our retrieved recipes provide the disambiguated reference of such key objects and enable EMT to generate more accurate caption containing them.
![12_image_0.png](12_image_0.png)
| like a 1 glass of hot | I'm gonna break my | I'm gonna mix it so | basically like sighs | |
|------------------------------------------------------|----------------------|-----------------------|------------------------|---------------------|
| Transcript | water and right' | magic screen | once we are | some marinated this |
| Retrieved | Transfer chicken to | In a saucepan, | Pour blended sauce | Transfer chicken |
| Instructional | aplate | over chicken in the | and any pan juices | |
| cmbine chicken , | | | | |
| Script | carrots, peas, and | skillet | into the sauce | |
| celery | | | | |
| Baseline (wo/Recipe) : add the sauce to the pan | | | | |
| Ours (w/Recipe): add the chicken to the pan and stir | | | | |
Ours (w/Recipe): add the chicken to the pan and stir
![12_image_1.png](12_image_1.png)
| Transcript | You're gonna burn it |
|-------------------|------------------------|
| season with salt, | |
| Retrieved | black pepper , and |
| Instructional | oregano |
| Script | |
| Right I've already | | |
|------------------------|-------------------------|-------------------------|
| Burnt beef tastes like | So you're gonna just | |
| treat it like a steak | said that | |
| season with salt , | Whisk the balsamic | Whisk the balsamic |
| black pepper , and | vinegar, oil, salt, and | vinegar, oil, salt, and |
| oregano | pepper in a small | pepper in a small |
| bowl | bowl | |
Baseline (wo/Recipe) : add the <unk> <unk> to the bowl and mix Ours (w/Recipe) : Add salt and pepper to the pan
![12_image_2.png](12_image_2.png)
| fill the boston the | i need to fill a pot | sing the control of the | |
|-----------------------|----------------------------------------------|---------------------------|-----|
| Transcript | cost | the new plant in | |
| Retrieved | Layer soil in the new Layer soil in the new | Layer soil in the new | |
| Instructional | pot | pot | pot |
| Script | Baseline (wo/Recipe) : even surface | | |
Ours (w/Recipe): put soil Figure 2: Example of the retrieved procedural sentence and generated captions without/with retrieved procedural sentence. Top 2 figures are from YouCook2 dataset and bottom figure is from IVD dataset.
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
A1. Did you describe the limitations of your work?
Left blank.
A2. Did you discuss any potential risks of your work?
Left blank.
A3. Do the abstract and introduction summarize the paper's main claims?
Left blank.
A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B **Did You Use Or Create Scientific Artifacts?**
Left blank.
B1. Did you cite the creators of artifacts you used?
Left blank.
B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
Left blank.
B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
Left blank.
B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
Left blank.
B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
Left blank.
B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
Left blank.
## C **Did You Run Computational Experiments?**
Left blank.
C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
Left blank.
The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance.
C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Left blank.
C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
Left blank.
C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
Left blank.
D **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
Left blank.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)?
Left blank.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
Left blank.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
Left blank.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
Left blank. |
somayajula-etal-2023-bi | Bi-level Finetuning with Task-dependent Similarity Structure for Low-resource Training | https://aclanthology.org/2023.findings-acl.544 | Training a large language model in low-resource settings is challenging since they are susceptible to overfitting with limited generalization abilities. Previous work addresses this issue by approaches such as tunable parameters reduction or data augmentation. However, they either limit the trained models{'} expressiveness or rely on task-independent knowledge. In this paper, we propose the Bi-level Finetuning with Task-dependent Similarity Structure framework where all parameters, including the embeddings for unseen tokens, are finetuned with task-dependent information from the training data only. In this framework, a task-dependent similarity structure is learned in a data-driven fashion, which in turn is used to compose soft embeddings from conventional embeddings to be used in training to update all parameters. In order to learn the similarity structure and model parameters, we propose a bi-level optimization algorithm with two stages{---}search and finetune{---}to ensure successful learning. Results of experiments on several classification datasets in low-resource scenarios demonstrate that models trained with our method outperform strong baselines. Ablation experiments further support the effectiveness of different components in our framework. Code is available at \url{https://github.com/Sai-Ashish/BFTSS}. | # Bi-Level Finetuning With Task-Dependent Similarity Structure For Low-Resource Training
Sai Ashish Somayajula♠,∗ Lifeng Jin♣,∗ Linfeng Song♣ Haitao Mi♣ **Dong Yu**♣
♠UC San Diego, USA
[email protected]
♣Tencent AI Lab, USA
{lifengjin, lfsong, haitaomi, dyu}@tencent.com
## Abstract
Training a large language model in lowresource settings is challenging since they are susceptible to overfitting with limited generalization abilities. Previous work addresses this issue by approaches such as tunable parameters reduction or data augmentation. However, they either limit the trained models' expressiveness or rely on task-independent knowledge. In this paper, we propose the Bilevel Finetuning with Task-dependent Similarity Structure framework where all parameters, including the embeddings for unseen tokens, are finetuned with task-dependent information from the training data only. In this framework, a task-dependent similarity structure is learned in a data-driven fashion, which in turn is used to compose soft embeddings from conventional embeddings to be used in training to update all parameters. In order to learn the similarity structure and model parameters, we propose a bi-level optimization algorithm with two stages—search and finetuneto ensure successful learning. Results of experiments on several classification datasets in low-resource scenarios demonstrate that models trained with our method outperform strong baselines. Ablation experiments further support the effectiveness of different components in our framework. Code is available at https://github.com/Sai-Ashish/BFTSS.
## 1 Introduction
Finetuning pretrained large models in low-resource scenarios 1face many challenges (Kann et al., 2020; Hedderich et al., 2021; ¸Sahin, 2022). One of the challenges is the overfitting of the model when finetuned on the small training data. Different approaches have been proposed to tackle this problem and achieved great results. Some approaches restrict the number of parameters to be updated in
*Equal Contribution 1Our work is conducted in low resource scenarios that comprise a few hundred instances of data.
the finetuning process to avoid overfitting to small amounts of data (Xu et al., 2021). However, parameter restriction while model finetuning may impact the model's expressiveness.
Other methods such as data augmentation (Wei and Zou, 2019; Hoang et al., 2018) aim at increasing the training data size through synthesizing new training data examples to boost generalization ability. These methods rely on either external lexical resources and heuristics, which are limited in domain and language; or pretrained language models, the semantic similarity space of which is not taskdependent. For example, Apple may be replaced by Microsoft in synonym replacement based on some lexical resource, but the "replace-ability" really depends on whether the task is "separate tech companies from gas companies" or "find companies with their headquarters in Washington state", the information of which pretrained language models or static lexical resources are not able to provide.
Our motivation is to combine strengths of both approaches. For generalization ability to unseen data, all parameters, especially embeddings of words not in the training data, should participate in the finetuning. At the same time, no external knowledge source should be required for such finetuning to ensure the method being scalable to different tasks. The ideal training framework for these goals should allow training signals to flow from tokens in training data to unseen tokens in a task-dependent way. Such a framework will ensure that the generalization ability of the trained model is strengthened through finetuning without the risk of overfitting quickly to a small amount of training data.
Our approach proposed in this paper, Bi-level Finetuning with Task-dependent Similarity Structure (BFTSS), aims to meet these goals. First, we propose a low-resource finetuning method where all parameters of the model, including the embeddings of unseen tokens, can be tuned directly through soft embeddings. The soft embeddings 8569 are constructed through the use of a similarity matrix with pairwise similarity scores between words, termed a similarity structure2in this paper. Second, we propose a bi-level optimization algorithm to learn a task-dependent similarity structure with lowresource task data, where no extra data, knowledge source or task-dependent prior knowledge is required. Since the similarity structure is usually very large, two different methods are proposed to reduce its size and make the learning tractable. Finally, we conduct extensive experiments on different datasets and with different model sizes. Comparison to baseline models and ablated models shows the effectiveness of our approach, where the performance of the models trained in the proposed method surpasses all baselines by large margins.
## 2 Related Work
Low-resource training has been a challenging but important task in natural language processing (Hedderich et al., 2021). Approaches have been proposed to tackle the issues encountered in lowresource training. Robust finetuning methods are applied in such training scenarios, such as approaches restricting tunable parameters (Houlsby et al., 2019; Lee et al., 2019; Chen et al., 2020; Xu et al., 2021) and noise-robust training methods (Jia et al., 2019; Onoe and Durrett, 2019; Jin et al., 2021). They alleviate the overfitting problem but introduce no new information beyond the training data into the model. Data augmentation on the token level (Wei and Zou, 2019; Raiman and Miller, 2017; Vania et al., 2019) as well as on the sentence level using syntax (¸Sahin and Steedman, 2018; ¸Sahin, 2022), back-translation (Hoang et al.,
2018; Xie et al., 2020) or generation models (Ding et al., 2020; Lowell et al., 2021; Liu et al., 2022; Zhou et al., 2022; Wang et al., 2022; Somayajula et al., 2022) are approaches which aims at introducing extra information into model training. Other similar methods rely on pseudo-labeling extra data
(Mintz et al., 2009; Le and Titov, 2019; Lison et al.,
2020) using task insights and heuristics. Most of them are designed for a specific task, or require external knowledge sources.
Bi-level optimization (BLO) has wide applications in machine learning. The neural architecture search proposed by Liu et al. (2018) uses BLO. It is also used in data selection (Shu et al., 2019; Wang et al., 2020; Ren et al., 2020) and meta-learning
(Finn et al., 2017). Feurer et al. (2015) proposed a BLO-based optimization framework for hyperparameter tuning. Baydin et al. (2017) proposed BLO-based learning rate adaptation. Noisy label correction using BLO is proposed by Baydin et al.
(2017). Among the papers mentioned above, the lower parameters are the model weights, and the upper parameters are the meta-variables, such as hyperparameters, architecture, training example weights, etc., to be optimized using BLO.
## 3 Bi-Level Finetuning With Task-Dependent Similarity Structure
The Bi-level Finetuning with Task-dependent Similarity Structure framework, as shown in Figure 1, centers around how to learn and utilize a taskdependent similarity structure S. The structure is first initialized and learned along with model parameters with bi-level optimization on task data, where soft embeddings of words are derived from the structure to propagate training signals into unseen words. After this first phase of training, named Search phase, the Finetune phase follows in which only model parameters are updated with the similarity structure fixed.
## 3.1 Motivation And Overview Of Bftss
In BFTSS, a task-dependent similarity structure is first learned with a small training data and then used to improve the performance of the language models. The motivation for the task-dependent similarity structure comes from the observation that only a few words appear in the training data in a low-resource scenario with their word embeddings updated in training. However, we want to pass more information about the unseen words to the model to train it. One way to do that is to identify a word's similar words in the vocabulary and estimate gradients of them from the seen word's gradient. This similarity structure is encoded as a similarity matrix S, with each row corresponding to a word3in the vocabulary with size V . The row entries represent the task-dependent semantic proximity of a word with all other words in the vocabulary.
Previous methods for data augmentation implicitly 3The smallest string units for a pretrained language model may be words, subwords or other symbols. We use *word* to refer to all items in the tokenization vocabulary.
![2_image_0.png](2_image_0.png)
assume that the similarity structure for a task is identical to a general similarity structure, which can be derived from pretrained models or gathered from external lexical semantic resources. However, similarity structures are task-specific with varying degrees of closeness to the general similarity structure, as shown in the Apple-Microsoft example. In our framework, they are trained with task data to ensure they are task-specific.
With a task-specific similarity structure, we are able to train all parameters through soft embeddings. Soft embeddings are defined as a linear combination of the embedding vectors weighted by the entries of the S matrix. Intuitively, it means that when a model with parameters W sees a word in the text input, it also sees all the related words weighted by the entries of the corresponding row in S. Thus the optimal model weights learned this way would be dependent on S, i.e., W∗(S).
For training, the task-dependent similarity matrix S should not be learned by reducing the training loss on the model parameters W∗(S), because this is similar to adding more parameters to an already huge language model. Therefore, the taskdependent similarity matrix S is learned using a bilevel optimization approach. Bi-level optimization is used because of the inter-dependency between the optimal model weights W∗and the optimal similarity matrix S∗. The learned optimal model weights W∗(S) depend on S and S is learned in a way that further improves the performance of W∗(S). This shows that both parameters influence and benefit from each other. With bi-level optimization, we are able to first estimate the W parameters by one-step gradient descent with S fixed on one portion of the training data, and learn the S parameter by using the learned W parameter on a different portion. This bi-level optimization phase of W and S is called the Search Phase.
Finally, with the learned S and W from the Search Phase, normal finetuning is conducted in the Finetuning Phase for further tuning the W parameters on the entire training data. The learned S
parameters is fixed throughout the phase.
## 3.2 Similarity Structure Initialization
The similarity structure is encoded in this work as a similarity matrix S ∈ R
V ×V, where V is the vocabulary size. Each row of the S matrix represents a word in the vocabulary. The row entries represent a word's semantic proximity with all the other words in the vocabulary. One way to initialize this S matrix is to add the inner product of the pretrained language model's embedding matrix E ∈ R
H×V
with itself (where H is the hidden dimension of the embedding layer of the language model) to the identity matrix:
$$S=\alpha I+(1-\alpha)f(\{\hat{E}^{T}\hat{E}\}^{d}),$$
where Eˆ is the normalized E matrix where each column vector is a unit norm vector, d is the inverse temperature, I ∈ R
V ×Vis the identity matrix, α is trade-off parameter, and f is the normalizing function that normalizes each row to sum to 1. The identity matrix is added to make the initialization a spiky distribution with the highest weight for the diagonal elements. The pretrained model's embedding layer decides the weights of the off-diagonal elements. These language models are pretrained on a huge corpus of texts and the cosine distance between the embedding vectors depict the semantic proximity between the words. The inner-product matrix is raised to the power of inverse temperature and normalized to make it a smooth and light-tailed distribution. α controls how strong the pretrained embeddings influence the similarity values. By setting α to 1, we have Vanilla finetuning where the similarity terms are not considered. In this paper, we set both terms to have equal weights and omit it in the following sections.
## 3.3 Soft Embeddings
Soft embeddings are defined as the linear combination of all the related embedding vectors whose weights are determined by the similarity structure S. Formally, we define the soft embedding vector of a word as follows,
$${\hat{e}}_{i}^{(t)}=\sum_{j=0}^{K}s_{i,j}^{(t)}E_{j}^{(t)}=e_{i}^{(t)}S^{(t)}\{E^{(t)}\}^{T}$$
where K is the number of related words used to calculate soft embeddings, E(t) ∈ R
H×Vand S
(t) ∈ R
V ×Vare the embedding matrix and similarity matrix at t th iteration. E
(t)
jis the embedding vector of j th word. s
(t)
i,j is the {*i, j*}
th element of the S
(t) matrix which describes how similar the j-th word is to the i-th word. e
(t)
iis the one-hot representation of the i-th word and eˆ
(t)
iis the soft embedding of it.
When the model weights are updated with backpropagation, the embeddings of all the similar words (determined by the entries of the similarity matrix S) are updated, propagating task knowledge into all parts of the model:
$$\nabla e_{i}=\sum_{j=0}^{K}s_{i j}^{(t)}\nabla E_{j}^{(t)}.$$
## 3.4 Bi-Level Learning Of A Task-Dependent Similarity Structure
Because the similarity structure needs to be trained with task data, a bi-level optimization-based approach is proposed in this work for learning such a task-dependent similarity structure. There are two stages in the bi-level learning process. In the first stage, the model weights W is updated to minimize the loss on one dataset, searching for the optimal model weights W∗(S) on that dataset. In the second stage, the task-dependent similarity matrix S is updated searching for S∗that attains the minimum loss on a different dataset.4
## 3.4.1 Training W
In the first stage, model parameters W are trained on BFTSS training set DB-train with the similarity matrix S fixed:
$$W^{*}(S)=\operatorname*{min}_{W}L(W,S,{\mathcal{D}}^{\mathrm{B-train}}),$$
$$(1)$$
B-train), (1)
where W is model parameters, S is the similarity matrix, and L is the task loss. The optimal model weights are learned on DB-train given a similarity matrix S. Hence we learn W∗(S), which is dependent on S since W∗ depends on the loss function L(·) which is a function of S. S is not updated in this stage because this would overfit the BFTSS
training set; instead it will be updated in the second stage.
## 3.4.2 Training S
In the second stage, the optimal similarity matrix S is learned on BFTSS validation set DB-val given the optimal model weights W∗(S) learned in the first stage on DB-train. The model trained in the first stage W∗(S) is evaluated on DB-val and S is updated by minimizing the validation loss. The following optimization problem is solved at this stage:
$$\operatorname*{min}_{S}\quad L(W^{*}(S),S,{\mathcal{D}}^{\mathrm{B-val}}).\qquad\quad(2)$$
By performing both the stages iteratively with different parameters being fixed at each stage, we do not overfit on the any of the two dataset DB-train and DB-val.
## 3.4.3 A Bi-Level Optimization Framework
Combining both the stages, we have the following bi-level optimization framework:
$$\begin{array}{r l}{\operatorname*{min}_{S}}&{{}L(W^{*}(S),S,{\mathcal{D}}^{\mathrm{B-val}})}\\ {s.t.}&{{}W^{*}(S)=\operatorname*{min}_{W}L(W,S,{\mathcal{D}}^{\mathrm{B-train}})}\end{array}\tag{3}$$
The algorithm consists of two learning stages.
From the bottom, the optimization problem corresponds to the learning stage 3.4.1 and then 3.4.2.
These two stages are conducted end-to-end. The solution obtained in the first stage W∗(S) is a function of S. We solve for S by minimizing the validation loss in the second stage. The S learned in the second stage changes the training loss in the first stage, which changes the solution W∗(S).
## 3.5 The Full Optimization Algorithm
The full optimization algorithm, shown in Algo.1 includes two distinct phases. 1) Search phase: In this phase, optimal similarity matrix S∗is estimated by an iterative algorithm to solve the Bi-level optimization problem in Equation 3. The algorithm learns S0 ≈ S∗. 2) Finetune Phase: In this phase, we finetune the language model on the whole Dtrain for optimal model weights W∗ with a fixed S0.
For optimization in the search phase, we develop a gradient-based optimization algorithm to solve the problem defined in Equation 3 (Liu et al., 2018).
W is approximated using the one-step gradient descent.
W∗(S) ≈ W0 = W − ηw∇W *L(W, S,* D
B-train) (4)
W0is plugged into the second-level objective function. The gradient with respect to the S matrix is calculated to update S:
$$S^{*}\approx S^{\prime}=S-\eta_{s}\nabla_{S}L(W^{\prime},S,{\mathcal{D}}^{\mathsf{B-val}}).$$
B-val). (5)
Gradient of the loss function with respect to S is calculated using the chain rule. W0is an implicit function of S.
$$\begin{array}{l}{{\nabla_{S}L(W^{\prime},S,\mathcal{D}^{\mathrm{B-val}})=}}\\ {{\nabla_{S}L(W-\eta_{w}\nabla_{W}L(W,S,\mathcal{D}^{\mathrm{B-train}}),S,\mathcal{D}^{\mathrm{B-val}})}}\\ {{=\nabla_{S}L(W^{\prime},S,\mathcal{D}^{\mathrm{B-val}})-\eta_{w}\times}}\\ {{\nabla_{S,W}^{2}L(W,S,\mathcal{D}^{\mathrm{B-train}})\nabla_{W^{\prime}}L(W^{\prime},S,\mathcal{D}^{\mathrm{B-val}})}}\end{array}$$
Solving for S0involves an expensive matrix-vector product, whose computational complexity can be reduced by a finite difference approximation:
$$\begin{array}{c}{{\nabla_{S,W}^{2}L(W,S,\mathcal{D}^{\mathrm{B-train}})\nabla_{W^{\prime}}L(W^{\prime},S,\mathcal{D}^{\mathrm{B-val}})=}}\\ {{\nabla_{S}L(W^{+},S,\mathcal{D}^{\mathrm{B-train}})-\nabla_{S}L(W^{-},S,\mathcal{D}^{\mathrm{B-train}})}}\\ {{2\epsilon}}\end{array},$$
$$(6)$$
where
$$\begin{array}{c}{{W^{\pm}=W\pm\epsilon\nabla_{W^{\prime}}L(W^{\prime},S,\mathcal{D}^{\mathrm{B-val}}),}}\\ {{\epsilon=\frac{0.01}{\|\nabla_{W^{\prime}}L(W^{\prime},S,\mathcal{D}^{\mathrm{B-val}})\|_{2}}.}}\end{array}$$
This procedure is carried out iteratively until convergence, at which time the Finetune phase starts.
With the trained S0from the concluded first phase, the whole model is further finetuned for optimal weights W(S0) on the entire training data in Dtrain with S0 fixed. This allows the model parameters to be tuned on the unseen DB-val as well as DB-train.
Algorithm 1 Optimization algorithm Split training dataset Dtrain into two halves,
{DB-train, DB-val}.
\# Search phase while not converged do Update model weights W using Eq.(4) on DB-train Update similarity matrix S using Eq.(5) on DB-val end while \# Finetune phase With the learned S0, learn for optimal W on Dtrain until convergence.
## 3.6 Dimensionality Reduction Of S
The dimension of S is V × V which is generally difficult to optimize when V is very large. In this section, we discuss ways to reduce the dimensionality of S, making it substantially more convenient to optimize. We propose two different ways to reduce the dimension of S from V × V to V × K where K V .
BFTSS Top-K : After the S matrix initialization as stated in Section 3.2, we choose the K words with the highest similarity scores and their corresponding indices from each row in the S matrix.
The entries corresponding to the top-k words in each row of the S matrix are updated, thus reducing the dimension from V ×V to V ×Ktop-K where Ktop-K V .
BFTSS U-V : From Section 3.2, we initialize S
as follows,
$$S=I+f(\left\{{\hat{E}}^{T}{\hat{E}}\right\}^{d})=I+{\hat{S}}$$
where Sˆ = f({EˆT Eˆ}
d). S is a full rank matrix because of the added identity matrix, but Sˆ may not be a full-rank matrix.5 Sˆ can be decomposed into a product of two lower-rank matrices. Thus to efficiently reduce the dimension of the S matrix, we apply rank reduction on Sˆ. We use PARAFAC (Bro, 1997) to decompose Sˆ into two factors U and V ∈ R
V ×KU-V of rank KU-V V such that,
$$=I+f(\left\{{\hat{E}}^{T}{\hat{E}}\right\}^{d})=I+{\hat{S}}\approx I+U\times V^{T}$$
5There is an interdependency between similar words making their corresponding rows dependent.
Reconstruction of S matrix is not needed to perform soft embedding operations. The following operation is performed instead,
$$\hat{e}_{i}^{(t)}=e_{i}^{(t)}\times S^{(t)}\times\left\{E^{(t)}\right\}^{T}\tag{7}$$ $$=\left\{E_{i}^{(t)}\right\}^{T}+h(((e_{i}^{(t)}\times U)\times V^{T})\times\left\{E^{(t)}\right\}^{T}),$$
where h is the Top-K operation which selects similar words dynamically contrary to the static selection in BFTSS Top-K. Multiplication is performed in a specific order to avoid reconstruction of S matrix everytime for soft embeddings.
(e
(t)
i × U) ∈ RKU-V is a KU-V-dimensional vector.
V ∈ RV ×KU-V . Thus, it boils down to product of a KU-V-dimensional vector with KU-V × V dimensional matrix (V
T). It is then multiplied with the embedding matrix to get a H dimensional soft embedding vector. Thus the computational complexity is of the same order as BFTSS Top-K approach.
## 4 Experiments 4.1 Datasets
We perform experiments on several datasets from GLUE (Warstadt et al., 2018; Wang et al., 2019).
The GLUE datasets span a wide range of tasks such as linguistic acceptability (CoLA), semantic textual similarity (STS-B), paraphrase (QQP), natural language inference (RTE, QNLI, MNLI), and sentiment classification (SST-2). To simulate a low-resource finetuning scenario, 100, 300, and 1k examples are sampled from the original training dataset for training. The models are evaluated on the original development set following Xu et al.
(2021).
## 4.2 Baselines
Many different baselines are compared with the proposed method in this paper. Vanilla finetuning is the classic finetuning method where the whole training dataset is used for training. RecAdam
(Chen et al., 2020) is an advanced version of weight decay with time-varying coefficients for the crossentropy loss term and the regularization loss term.
Child-D and Child-F (Xu et al., 2021) are methods where a mask is applied to the gradients to restrict the number of parameters being updated to avoid overfitting. Top-K-layer finetuning (Houlsby et al., 2019) only updates the top K layers, and Mixout (Lee et al., 2019) randomly replaced the updated parameters with the pretrained parameters.
Finally EDA (Wei and Zou, 2019) is a popular data augmentation method with a dependency on the knowledge resource WordNet (Fellbaum, 1998).
All baselines are evaluated with the pretrained BERT-base and BERT-large models. They are finetuned on the subsampled training datasets. The averaged results on the original development set over ten random seeds are reported following Xu et al. (2021). For BFTSS Top-K and BFTSS U-V,
top 50 words and U-V dimension of 100 worked the best among other choices. More information about the hyperparameter tuning process can be found in the appendix.
## 4.3 Main Results
Table 1 shows the results of the average scores6for the proposed and baseline methods. Models trained with our methods are most accurate compared to the baselines over all the sampled data sizes for both BERT-base and BERT-large models, often by large margins. This improvement indicates that our approach to using bi-level optimization to learn a task-dependent similarity structure for finetuning without any external knowledge is very effective in boosting the model's performance over baselines.
Among baseline methods, Mixout and Top-K-layer Tuning perform better than other baselines. However, there is still a substantial performance gap between these methods and our proposed methods.
For example, BFTSS Top-K method achieves an average gain of 10.58%, 4.73%, and 1.50% over mixout in 100, 300, and 1K training examples scenario, respectively, on the BERT-base model . Our BFTSS U-V method achieves an average gain of 10.75%, 4.77%, and 1.39% over mixout in 100, 300, and 1K training examples scenario, respectively, on the BERT-base model . The trend is similar for BERT-large models and also when compared to Top-K-layer Tuning.
Because Mixout proposes to replace randomly sampled model parameters with pretrained model parameters while finetuning (Lee et al., 2019), and Top-K-layer Tuning only tunes the Top-K layers while freezing the remaining bottom weights, they both can be considered as putting restrictions on model capacity to avoid overfitting. Different from these methods, the proposed methods utilize information about unseen words in the form of a task-dependent similarity structure to serve as an informative prior for the model in finetuning. The model, especially the embeddings of the unseen 6Task performance values are in the appendix.
Method 100 300 1000 Vanilla 33.11 46.17 65.28 RecAdam 36.65 44.46 68.28 Child-D 38.38 52.65 66.88 Child-F 38.09 50.89 66.52 Top-K-layer 39.91 58.01 68.47
Mixout 43.97 58.28 68.80 EDA 52.95 56.95 62.92
BFTSS Top-K *54.55 63.01* **70.30**
BFTSS U-V 54.72 63.05 *70.19*
(a) Test Results (%) on all datasets with a BERT-base model.
Method 100 300 1000
Vanilla 38.70 56.80 69.31 RecAdam 36.53 56.92 70.16
Child-D 48.05 64.14 71.37 Child-F 47.51 63.05 70.18
Top-K-layer 51.86 64.94 72.05 Mixout 52.98 64.22 72.32
EDA 52.75 60.14 65.04
BFTSS Top-K *58.00* 66.53 *72.86*
BFTSS U-V 58.10 *66.50* **73.11**
words, receives informative updates from limited training data. Results here show that by providing more information about unseen words in the vocabulary instead of restricting the tunable parameters, models can be trained to have better generalization without overfitting.
We also compare to the popular data augmentation method, EDA (Wei and Zou, 2019). Different from the baselines above, an external lexical resource, WordNet, is used in EDA for synonym replacement. Our method outperforms EDA in all datasplits despite having no access to any additional data. On the BERT-base model , our BFTSS
Top-K method outperforms EDA by an average performance gain of 1.6%, 6.06%, and 7.38% in 100, 300, and 1000 training examples scenarios.
Similarly, on the BERT-base model, our BFTSS
U-V method outperforms EDA by an average performance gain of 1.77%, 6.1%, and 7.27% in 100,
![6_image_0.png](6_image_0.png)
300, and 1000 training examples scenarios. The trend is similar for the BERT-large models.
EDA can be seen as creating symbolic data augmentations by replacing words according to a general similarity structure coupled with other operations such as random swap, insertion and deletion.
With increasing training examples, the accuracy improvement of our method over EDA increases.
This result indicates that initially, the general similarity structure is helpful due to low amount of training data compared to no augmentation at all.
However, as the training data increases, the general similarity structure along with other heuristics brings more noise than information, resulting in smaller gains and even performance loss compared to Vanilla finetuning. The task-specific similarity structure from our method can benefit the models in all cases, because it is close to the general similarity structure when the training data is small, and moves to a similarity structure tailored for the task when training data increases.
Finally, the two dimensionality reduction methods, Top-K and U-V, perform quite similarly under different conditions, which indicates that both dimensionality reduction methods provides similar benefits to model training.
## 4.4 Ablation Experiments
4.4.1 Vanilla S-W
The effectiveness of the bi-level optimization training is examined in the Vanilla S-W experiments, where Vanilla S-W method represents training the similarity structure S and the model W on the whole training dataset, DB-train, without the BFTSS
framework. We use the U-V procedure for the dimensionality reduction of S. Fig. 2 compares our method with Vanilla S-W on BERT-base and
| Method(Top-K) | BERT-base | BERT-large |
|-----------------|-------------|--------------|
| Vanilla | 33.11 | 38.70 |
| Random S | 40.04 | 43.78 |
| Initial S | 42.33 | 51.01 |
| BFTSS Random | 50.68 | 51.13 |
| BFTSS | 54.55 | 58.00 |
(a) Impact of initialization of S and using BFTSS Top-K
at 100 data split settings. We report the average scores on all datasets
| Method (U-V) | BERT-base | BERT-large |
|----------------|-------------|--------------|
| Vanilla | 33.11 | 38.70 |
| Random S | 29.30 | 28.45 |
| Initial S | 42.51 | 52.46 |
| BFTSS Random | 37.01 | 33.56 |
| BFTSS | 54.72 | 58.10 |
(b) Impact of initialization of S and using BFTSS U-V at 100 data split settings. We report the average scores on all datasets.
Table 2: Results of experiments showing the impact of initialization of S. The Top-K and U-V indicate if Top-K or U-V, respectively, was used for dimensionality reduction of S.
BERT-large in 100 training examples scenario. Results show our method outperforms Vanilla S-W
by a large margin. In Vanilla S-W, both the S and W parameters are learned on the training dataset without the bi-level optimization framework. Compared to Vanilla finetuning where S is not used, performance from Vanilla S-W models are much higher, indicating that the initial similarity structure is already helpful for updating all parts of the model. However, the bi-level learning of S and W
is able to provide further benefit for model learning.
4.4.2 Initialization of S
Initial values in the S matrix are important to the success of the BFTSS. Experiments are conducted using the following initialization methods:
- Random S: Use a randomly initialized S for Finetune phase of the optimization algorithm without learning for a task-dependent similarity matrix.
- Initial S: Use the initial S for Finetune phase of the optimization algorithm without learning for a task-dependent similarity matrix.
- BFTSS Random: Perform the optimization algorithm on a randomly initialized S.
Table 2 compares our method with the three initialization methods of S described above. The first two initialization methods do not learn S with task data.
Results show that our method outperforms both initialization methods emphasizing the need to learn for a task-dependent S. Interestingly, Random S
Top-K outperforms Vanilla, indicating that when S
is initialized randomly, static similar word selection strategy (BFTSS Top-K) regularizes the model's performance where as dynamic similar word selection strategy (BFTSS U-V) is injecting noise into the model that is making the performance worse.
Furthermore, Initial S performs better than Vanilla and Random S for both the rank reduction methods, indicating the initialization's importance.
We further inspect the role of learning of S in our algorithm. By comparing BFTSS , BFTSS Random, and Random S, we can observe that BFTSS
Random performs better than Random S. This observation indicates that a learned S is more helpful than a random initialization of S. Furthermore, BFTSS performs better than BFTSS Random, indicating the initialization of S from pretrained language models provides a more informative prior to the similarity structure than a random initialization.
The proposed initialization of S is derived from the pretrained model's embedding layer. Such a model is pretrained on a huge corpus of texts in an unsupervised fashion. The model would have learned a latent representation of the words in the vocabulary that captures semantic proximity between them. Thus an Initialized S is better than a Random S. However, since a pretrained model is pretrained on a huge corpus of texts, there is no guarantee that the proposed initialization of S is task-dependent. Hence, we need to further learn for a task-dependent S.
## 5 Conclusion
To mitigate the impact of overfitting leading to low generalization ability of large language models, especially in low-resource scenarios, we propose Bilevel Finetuning with Task-dependent Similarity Structure-BFTSS, a bi-level optimization framework to learn a task-specfic similarity structure and further enable the generalization ability of updating
'unseen' or 'similar' words of language models on a given task. We also introduce two variants, BFTSS
Top-K and BFTSS U-V, to reduce the dimensionality and make computation efficient. Extensive experimental results on various datasets show the effectiveness of our approaches in low-resource scenarios when comparing to different baseline methods.
## 6 Limitation
The way the method applies to larger datasets needs further exploration. As the number of training examples increases, the accuracy gain over vanilla finetuning reduces, indicating that our method best works in low-resource scenarios. Another limitation is that we performed experiments only in one language. It will be interesting to apply our method to tasks in other languages and understand the impact of task-dependent similarity structure on the model's performance in those scenarios.
BFTSS Top-K and BFTSS U-V methods perform similarly. Scenarios where BFTSS Top-K and BFTSS U-V differ in performance, should be further explored. We plan to address them in our future works.
## References
Atilim Gunes Baydin, Robert Cornish, David Martinez Rubio, Mark Schmidt, and Frank Wood. 2017. Online learning rate adaptation with hypergradient descent. *arXiv preprint arXiv:1703.04782*.
Rasmus Bro. 1997. Parafac. tutorial and applications.
Chemometrics and intelligent laboratory systems, 38(2):149–171.
Sanyuan Chen, Yutai Hou, Yiming Cui, Wanxiang Che, Ting Liu, and Xiangzhan Yu. 2020. Recall and learn:
Fine-tuning deep pretrained language models with less forgetting. *arXiv preprint arXiv:2004.12651*.
Sang Keun Choe, Willie Neiswanger, Pengtao Xie, and Eric Xing. 2022. Betty: An automatic differentiation library for multilevel optimization. *arXiv* preprint arXiv:2207.02849.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. *arXiv preprint arXiv:1810.04805*.
Bosheng Ding, Linlin Liu, Lidong Bing, Canasai Kruengkrai, Thien Hai Nguyen, Shafiq Joty, Luo Si, and Chunyan Miao. 2020. DAGA: Data augmentation with a generation approach for low-resource tagging tasks. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 6045–6057, Online. Association for Computational Linguistics.
Christiane Fellbaum. 1998. WordNet: An Electronic Lexical Database. Bradford Books.
Matthias Feurer, Jost Springenberg, and Frank Hutter. 2015. Initializing bayesian hyperparameter optimization via meta-learning. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 29.
Chelsea Finn, Pieter Abbeel, and Sergey Levine. 2017.
Model-agnostic meta-learning for fast adaptation of deep networks. In *International conference on machine learning*, pages 1126–1135. PMLR.
Michael A. Hedderich, Lukas Lange, Heike Adel, Jannik Strötgen, and Dietrich Klakow. 2021. A survey on recent approaches for natural language processing in low-resource scenarios. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 2545–2568, Online. Association for Computational Linguistics.
Vu Cong Duy Hoang, Philipp Koehn, Gholamreza Haffari, and Trevor Cohn. 2018. Iterative backtranslation for neural machine translation. In *Proceedings of the 2nd Workshop on Neural Machine* Translation and Generation, pages 18–24, Melbourne, Australia. Association for Computational Linguistics.
Neil Houlsby, Andrei Giurgiu, Stanislaw Jastrzebski, Bruna Morrone, Quentin De Laroussilhe, Andrea Gesmundo, Mona Attariyan, and Sylvain Gelly.
2019. Parameter-efficient transfer learning for nlp.
In *International Conference on Machine Learning*,
pages 2790–2799. PMLR.
Wei Jia, Dai Dai, Xinyan Xiao, and Hua Wu. 2019.
ARNOR: Attention regularization based noise reduction for distant supervision relation classification. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1399–
1408, Florence, Italy. Association for Computational Linguistics.
Lifeng Jin, Linfeng Song, Kun Xu, and Dong Yu. 2021.
Instance-adaptive training with noise-robust losses against noisy labels. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 5647–5663, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Katharina Kann, Ophélie Lacroix, and Anders Søgaard. 2020. Weakly supervised pos taggers perform poorly on truly low-resource languages. volume 34, pages 8066–8073.
Phong Le and Ivan Titov. 2019. Boosting entity linking performance by leveraging unlabeled documents. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1935–
1945, Florence, Italy. Association for Computational Linguistics.
Cheolhyoung Lee, Kyunghyun Cho, and Wanmo Kang.
2019. Mixout: Effective regularization to finetune large-scale pretrained language models. *arXiv* preprint arXiv:1909.11299.
Pierre Lison, Jeremy Barnes, Aliaksandr Hubin, and Samia Touileb. 2020. Named entity recognition without labelled data: A weak supervision approach.
In *Proceedings of the 58th Annual Meeting of the* Association for Computational Linguistics, pages 1518–1533, Online. Association for Computational Linguistics.
Hanxiao Liu, Karen Simonyan, and Yiming Yang.
2018. Darts: Differentiable architecture search.
arXiv preprint arXiv:1806.09055.
Yongtai Liu, Joshua Maynez, Gonçalo Simões, and Shashi Narayan. 2022. Data augmentation for lowresource dialogue summarization. In *Findings of the* Association for Computational Linguistics: NAACL
2022, pages 703–710, Seattle, United States. Association for Computational Linguistics.
Ilya Loshchilov and Frank Hutter. 2019. Decoupled weight decay regularization (2017). arXiv preprint arXiv:1711.05101.
David Lowell, Brian Howard, Zachary C. Lipton, and Byron Wallace. 2021. Unsupervised data augmentation with naive augmentation and without unlabeled data. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 4992–5001, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Mike Mintz, Steven Bills, Rion Snow, and Daniel Jurafsky. 2009. Distant supervision for relation extraction without labeled data. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP,
pages 1003–1011, Suntec, Singapore. Association for Computational Linguistics.
Marius Mosbach, Maksym Andriushchenko, and Dietrich Klakow. 2020. On the stability of fine-tuning bert: Misconceptions, explanations, and strong baselines. *arXiv preprint arXiv:2006.04884*.
Yasumasa Onoe and Greg Durrett. 2019. Learning to denoise distantly-labeled data for entity typing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 2407–2417, Minneapolis, Minnesota. Association for Computational Linguistics.
Jonathan Raiman and John Miller. 2017. Globally normalized reader. In *Proceedings of the 2017 Conference on Empirical Methods in Natural Language* Processing, pages 1059–1069, Copenhagen, Denmark. Association for Computational Linguistics.
Zhongzheng Ren, Raymond Yeh, and Alexander Schwing. 2020. Not all unlabeled data are equal: Learning to weight data in semi-supervised learning.
Advances in Neural Information Processing Systems, 33:21786–21797.
Gözde Gül ¸Sahin. 2022. To augment or not to augment? a comparative study on text augmentation techniques for low-resource NLP. *Computational* Linguistics, 48(1):5–42.
Gözde Gül ¸Sahin and Mark Steedman. 2018. Data augmentation via dependency tree morphing for lowresource languages. In *Proceedings of the 2018* Conference on Empirical Methods in Natural Language Processing, pages 5004–5009, Brussels, Belgium. Association for Computational Linguistics.
Jun Shu, Qi Xie, Lixuan Yi, Qian Zhao, Sanping Zhou, Zongben Xu, and Deyu Meng. 2019. Metaweight-net: Learning an explicit mapping for sample weighting. *Advances in neural information processing systems*, 32.
Sai Ashish Somayajula, Linfeng Song, and Pengtao Xie. 2022. A multi-level optimization framework for end-to-end text augmentation. Transactions of the Association for Computational Linguistics, 10:343–358.
Clara Vania, Yova Kementchedjhieva, Anders Søgaard, and Adam Lopez. 2019. A systematic comparison of methods for low-resource dependency parsing on genuinely low-resource languages. In *Proceedings* of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 1105–1116, Hong Kong, China. Association for Computational Linguistics.
Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R. Bowman. 2019.
GLUE: A multi-task benchmark and analysis platform for natural language understanding. In the Proceedings of ICLR.
Yufei Wang, Can Xu, Qingfeng Sun, Huang Hu, Chongyang Tao, Xiubo Geng, and Daxin Jiang.
2022. PromDA: Prompt-based data augmentation for low-resource NLU tasks. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 4242–4255, Dublin, Ireland. Association for Computational Linguistics.
Yulin Wang, Jiayi Guo, Shiji Song, and Gao Huang.
2020. Meta-semi: A meta-learning approach for semi-supervised learning. arXiv preprint arXiv:2007.02394.
Alex Warstadt, Amanpreet Singh, and Samuel R Bowman. 2018. Neural network acceptability judgments.
arXiv preprint arXiv:1805.12471.
Jason Wei and Kai Zou. 2019. EDA: Easy data augmentation techniques for boosting performance on text classification tasks. In *Proceedings of the* 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing
(EMNLP-IJCNLP), pages 6382–6388, Hong Kong, China. Association for Computational Linguistics.
Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing:
System Demonstrations, pages 38–45, Online. Association for Computational Linguistics.
Qizhe Xie, Zihang Dai, Eduard Hovy, Thang Luong, and Quoc Le. 2020. Unsupervised data augmentation for consistency training. In *Advances in Neural* Information Processing Systems, volume 33, pages 6256–6268. Curran Associates, Inc.
Runxin Xu, Fuli Luo, Zhiyuan Zhang, Chuanqi Tan, Baobao Chang, Songfang Huang, and Fei Huang.
2021. Raise a child in large language model: Towards effective and generalizable fine-tuning. *arXiv* preprint arXiv:2109.05687.
Ran Zhou, Xin Li, Ruidan He, Lidong Bing, Erik Cambria, Luo Si, and Chunyan Miao. 2022. MELM:
Data augmentation with masked entity language modeling for low-resource NER. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers),
pages 2251–2262, Dublin, Ireland. Association for Computational Linguistics.
## A Baselines
We include the following baselines in our experiments:
- Vanilla: Vanilla finetuning of the language model on the training dataset.
- RecAdam: Chen et al. (2020) proposed RecAdam optimizer to mitigate the effect of catastrophic forgetting while finetuning a large language model. RecAdam is an advanced version of weight decay with timevarying coefficients for the cross-entropy loss term and the regularization loss term.
- Child-D: Xu et al. (2021) proposed to update the weights of a sub-network within a large language model to tackle the fine-tuning instability and catastrophic forgetting issue. ChildD estimates a static mask with probability pD
from the fisher information matrix at the beginning of the training. pD = {0.1, 0.2, 0.3}.
- Child-F: Also proposed in (Xu et al., 2021).
Unlike Child-D, Child-F utilizes a dynamic mask during gradient computation. At every iteration, a mask is sampled from a Bernoulli distribution parameterized by pF = {0.2, 0.3, 0.4}.
- Top-K-layer Finetuning: Only the top-K layers are finetuned with the remaining bottom layers frozen (Houlsby et al., 2019). K = {0, 3, 6, 12}.
- Mixout: Lee et al. (2019) proposed to replace the language model parameters with their corresponding pretrained weights while finetuning with probability p = {0.1, 0.2, . . . ,
0.8}. This way, the authors propose reducing the model's deviation from the pretrained weights, thereby tackling the catastrophic forgetting issue.
## B Hyperparameter Settings
For all methods, we finetune the pretrained BERTbase7and BERT-large8 models provided by Huggingface (Wolf et al., 2020). We follow the same settings for the models as Devlin et al. (2018).
All models are trained using a batch size of 7https://huggingface.co/bert-base-cased/tree/main 8https://huggingface.co/bert-large-cased/tree/main 16, a warm-up ratio of 10%, and AdamW optimizer (Loshchilov and Hutter, 2019) with β1 = 0.9, β2 = 0.999, = 1e-6.
For one dataset, the averaged task performance is calculated using model performance evaluated from ten runs with different random seeds. The average scores, reported in the result tables in the main paper, are the averaged value of all task performance numbers. The task performance numbers are reported in the tables below in the appendix.
For our method the training can be decomposed into two phases, 1) Search phase and 2) Finetune phase. We report the hyperparameter settings for each phase and grid searched for the best hyperparameter setting for a task following Xu et al.
(2021).
- Search phase: We grid search for Ktop-K parameter in {50, 100} and KU-V decomposition dimension in {100, 300}. We split the training dataset Dtrain into two halves, {DB-train, DB-val} to be used for stage 3.4.1 and then 3.4.2 of search phase respectively.
We use the Adam optimizer for S with β1 =
0.9, β2 = 0.999, = 1e-8. We grid search for the learning rate of S in {5e-06, 4e-05}.
We follow the same settings for W as Devlin et al. (2018). We grid search for the learning rate of W in {4e-05, 2e-05, 5e-06} and number of epochs in {3, 6, 9}. Since it is an optimization problem, we search for the hyperparameters from the provided choices that lead to a smooth reduction in the loss curves.
Grid search is performed in a similar fashion as Xu et al. (2021).
- Finetune phase: We use the default hyperparameter settings for learning W∗following Devlin et al. (2018). We train for optimal W
on full training dataset Dtrain in this phase. We use the S∗ obtained in the search phase.
We follow the settings reported in the paper and use the code provided for all the baselines. For a fair comparison (Mosbach et al., 2020), we run the baselines for the same number of training steps as our entire algorithm (search phase and finetuning phase). We grid-searched over all choices of hyperparameters reported in their respective papers and report results on the best set of hyperparameters found. We used Betty library (Choe et al., 2022) for the MLO implementation. We use the V100 GPU
| Dataset | Dev | Metrics |
|-----------|-------|---------------|
| CoLA | 1.0k | Matthews Corr |
| STS-B | 1.5k | Spearman Corr |
| SST-2 | 872 | Accuracy |
| QQP | 40k | F1 |
| QNLI | 5.5k | Accuracy |
| MNLI | 9.8k | Accuracy |
| RTE | 277 | Accuracy |
## C Tables
Table 3 describes the datasets used for the training and evaluation of the models. It reports the number of examples in the dev set and the evaluation metrices for each task. Tables 4-9, shows the comparison of our method with all the baselines mentioned above for each dataset and split settings. Tables 1013 shows the results of our ablation studies with different initializations of S matrix to understand the impact of initialization. Last column (Avg) is the average performance metric of the model for a given approach across all the tasks. Table 14 shows the comparison of our method with Vanilla S-W on BERT-base and BERT-large for each task to understand the impact of BFTSS.
| 1.32, | | |
|-------------|--------------|----------------------|
| Vanilla | 2.9 (4.64) | 22.49, 23.77 (50.44) |
| 0.98, | | |
| RecAdam | 3.19 (8.02) | 33.44, 44.13 (63.09) |
| 1.62, | | |
| Child-D | 4.04 (10.16) | 41.08, 3.91 (47.82) |
| 2.12, | | |
| Child-F | 2.77 (6.02) | 37.4, 5.45 (44.04) |
| 2.63, | | |
| Top-K-layer | 3.71 (7.38) | 45.57, 12.88 (61.98) |
| 6.82, | | |
| Mixout | 6.98 (17.63) | 58.02, 8.65 (72.54) |
| 5.72, | | |
| EDA | 7.34 (14.98) | 71.3, 4.39 (76.2) |
| 16.27, | | |
| BFTSSTop-K | 8.19 (26.51) | 78.29, 1.37 (80.47) |
| 15.84, 8.86 | | |
| BFTSSU-V | (26.58) | 78.56, 1.32 (80.93) |
| 55.44, 4.98 (64.33) | 13.0, 18.13 (46.46) | 52.47, 11.88 (69.49) |
|-----------------------|-----------------------|------------------------|
| 53.37, 5.46 (63.07) | 29.49, 29.59 (59.48) | 54.93, 9.12 (64.51) |
| 63.02, 5.98 (71.1) | 16.72, 24.87 (60.34) | 62.93, 9.01 (70.16) |
| 62.31, 5.98 (72.36) | 21.36, 28.21 (64.09) | 60.52, 11.41 (69.63) |
| 65.61, 7.24 (75.92) | 17.2, 27.71 (59.5) | 63.08, 8.59 (69.23) |
| 64.32, 8.12 (77.52) | 28.02, 25.66 (63.41) | 65.9, 5.38 (70.91) |
| 78.26, 5.46 (83.14) | 60.06, 2.31 (63.73) | 68.94, 3.94 (71.99) |
| 74.71, 6.52 (83.03) | 57.75, 3.6 (63.5) | 71.7, 2.13 (74.81) |
| 75.54, 4.51 (81.65) | 58.37, 3.8 (63.94) | 71.84, 2.04 (73.93) |
Method CoLA STSB SST-2 QQP QNLI MNLI MNLI-m RTE Avg
| 33.71, 3.35 (39.35) | 33.93, 3.6 (39.49) | 52.56, 3.32 | 33.11 |
|-----------------------|----------------------|---------------|---------|
| (55.96) | | | |
| 35.08, 3.83 (38.7) | 34.95, 3.8 (39.23) | 50.97, 3.91 | 36.65 |
| (55.96) | | | |
| 34.89, 3.5 (40.15) | 35.61, 4.22 (41.99) | 51.16, 3.49 | 38.38 |
| (55.6) | | | |
| 34.73, 3.39 (40.03) | 35.27, 4.02 (41.56) | 51.05, 3.05 | 38.09 |
| (54.51) | | | |
| 36.03, 2.85 (39.83) | 36.47, 3.35 (40.76) | 52.67, 3.0 | 39.91 |
| (56.32) | | | |
| 37.31, 3.23 (41.13) | 38.36, 4.15 (42.96) | 52.96, 2.98 | 43.97 |
| (57.4) | | | |
| 42.28, 3.42 (47.79) | 42.4, 4.04 (48.17) | 54.66, 3.2 | 52.95 |
| (57.76) | | | |
| 39.98, 2.26 (43.1) | 41.66, 2.61 (45.3) | 56.03, 1.9 | 54.55 |
| (58.48) | | | |
| 39.98, 2.17 (42.52) | 41.72, 2.48 (44.84) | 55.96, 2.6 | 54.72 |
| (60.29) | | | |
Table 4: Test Results (%) on BERT-base model in 100 data split settings. For each task, average, standard deviation and maximum of the evaluation metric over ten random seeds have been reported in the tables. The format used is average, standard deviation (maximum).
| Method | CoLA | STSB | SST-2 | QQP | QNLI | MNLI | MNLI-m | RTE | Avg |
|-------------|---------------|---------------------|---------------------|----------------------|---------------------|---------------------|---------------------|-------------|-------|
| 3.57, | | | | | | | | | |
| Vanilla | 5.81 (15.07) | 67.85, 9.14 (74.6) | 73.77, 9.14 (81.88) | 28.37, 23.51 (61.32) | 66.35, 8.74 (74.19) | 37.33, 4.55 (41.77) | 38.11, 5.39 (43.49) | 54.04, 4.19 | 46.17 |
| (58.84) | | | | | | | | | |
| 6.25, | | | | | | | | | |
| RecAdam | 6.97 (17.02) | 68.17, 3.77 (73.69) | 68.85, 9.13 (78.67) | 11.34, 23.67 (58.51) | 69.22, 4.14 (72.58) | 38.32, 3.46 (41.4) | 39.16, 3.88 (43.05) | 54.4, 2.52 | 44.46 |
| (58.48) | | | | | | | | | |
| 4.61, | | | | | | | | | |
| Child-D | 6.19 (17.58) | 78.6, 4.51 (84.31) | 79.77, 4.29 (84.86) | 49.2, 21.38 (65.58) | 71.97, 4.91 (76.95) | 39.74, 3.28 (44.45) | 40.98, 3.76 (47.06) | 56.32, 2.56 | 52.65 |
| (59.57) | | | | | | | | | |
| 3.84, | | | | | | | | | |
| Child-F | 5.04 (14.6) | 79.18, 1.94 (83.25) | 79.6, 4.61 (82.22) | 39.87, 29.02 (66.05) | 70.34, 7.04 (76.19) | 38.42, 3.47 (43.4) | 39.62, 3.82 (45.61) | 56.25, 2.71 | 50.89 |
| (61.37) | | | | | | | | | |
| 22.81, | | | | | | | | | |
| Top-K-layer | 11.79 (34.59) | 80.74, 2.31 (84.25) | 84.31, 1.14 (85.89) | 62.52, 2.63 (67.2) | 73.47, 1.84 (75.36) | 40.91, 3.23 (45.67) | 42.69, 4.05 (48.54) | 56.61, 1.78 | 58.01 |
| (58.84) | | | | | | | | | |
| 21.87, | | | | | | | | | |
| Mixout | 10.88 (37.67) | 81.83, 1.56 (83.81) | 83.39, 2.09 (85.78) | 57.68, 7.06 (68.63) | 74.94, 2.53 (78.24) | 43.02, 2.36 (46.82) | 45.48, 2.42 (49.42) | 58.05, 2.5 | 58.28 |
| (62.82) | | | | | | | | | |
| 9.45, | | | | | | | | | |
| EDA | 2.88 (15.29) | 77.69, 3.69 (82.02) | 84.47, 1.64 (86.7) | 62.29, 1.25 (63.87) | 72.17, 1.39 (74.21) | 46.23, 3.95 (52.07) | 47.69, 4.07 (51.6) | 55.56, 1.79 | 56.95 |
| (59.57) | | | | | | | | | |
| 33.18, | | | | | | | | | |
| BFTSSTop-K | 4.71 (38.29) | 83.77, 0.81 (85.37) | 84.68, 1.79 (86.47) | 66.89, 1.19 (68.14) | 76.59, 1.67 (78.82) | 48.4, 2.9 (52.37) | 51.18, 3.35 (56.05) | 59.35, 1.63 | 63.01 |
| (61.37) | | | | | | | | | |
| 32.86, | | | | | | | | | |
| BFTSSU-V | 5.13 (42.06) | 83.8, 0.71 (84.65) | 84.87, 1.94 (87.27) | 66.25, 1.69 (69.32) | 76.97, 0.99 (78.36) | 48.97, 2.37 (52.24) | 51.64, 2.91 (55.52) | 59.03, 2.26 | 63.05 |
| (62.45) | | | | | | | | | |
Table 5: Test Results (%) on BERT-base model in 300 data split settings. For each task, average, standard deviation and maximum of the evaluation metric over ten random seeds have been reported in the tables. The format used is average, standard deviation (maximum).
Vanilla
40.90,
6.09
(45.63)
RecAdam
43.81,
3.78
(49.95)
Child-D
42.18,
3.46
(44.96)
Child-F
41.02,
8.80
(45.0)
Top-K-layer
43.93,
2.0
(47.6)
Mixout
45.87,
1.75
(48.49)
EDA
18.75,
5.36
(27.09)
BFTSSTop-K
44.33,
4.03
(49.66)
BFTSSU-V
43.77,
2.35
(46.33)
| 40.90, 6.09 (45.63) | 84.05, 1.54 (85.99) |
|-----------------------|-----------------------|
| 43.81, 3.78 (49.95) | 86.41, 0.72 (86.87) |
| 42.18, 3.46 (44.96) | 84.95, 1.43 (86.39) |
| 41.02, 8.80 (45.0) | 84.55, 1.43 (86.31) |
| 43.93, 2.0 (47.6) | 85.44, 1.0 (86.69) |
| 45.87, 1.75 (48.49) | 86.32, 1.17 (87.61) |
| 18.75, 5.36 (27.09) | 80.95, 1.62 (83.1) |
| 44.33, 4.03 (49.66) | 87.05, 0.37 (87.35) |
| 43.77, 2.35 (46.33) | 87.18, 0.32 (87.57) |
| 86.37, 0.7 (87.61) | 69.21, 1.31 (70.97) | 79.64, 0.95 (80.98) |
|----------------------|-----------------------|-----------------------|
| 87.47, 1.16 (89.56) | 70.59, 0.74 (72.22) | 80.72, 0.98 (82.06) |
| 86.56, 0.96 (87.61) | 69.27, 1.82 (71.87) | 79.77, 1.22 (81.29) |
| 86.92, 0.96 (87.84) | 69.49, 1.82 (71.52) | 79.55, 1.22 (81.05) |
| 86.94, 1.24 (89.11) | 70.69, 1.08 (72.09) | 80.33, 0.67 (81.44) |
| 86.88, 0.87 (88.42) | 68.16, 3.15 (71.61) | 80.22, 0.89 (81.27) |
| 87.27, 0.69 (88.3) | 66.14, 1.05 (68.07) | 76.95, 1.2 (78.69) |
| 87.9, 1.15 (89.68) | 71.33, 0.51 (72.05) | 81.08, 0.93 (81.97) |
| 87.66, 0.99 (89.11) | 71.15, 0.57 (72.08) | 81.3, 0.41 (82.17) |
Method CoLA STSB SST-2 QQP QNLI MNLI MNLI-m RTE Avg
| 49.78, 3.45 (54.57) | 52.63, 4.11 (57.94) | 59.71, 2.55 | 65.28 |
|-----------------------|-----------------------|---------------|---------|
| (63.18) | | | |
| 55.77, 3.47 (59.7) | 58.63, 3.12 (62.36) | 62.82, 2.94 | 68.28 |
| (66.43) | | | |
| 53.64, 1.72 (57.52) | 56.58, 1.87 (61.1) | 62.09, 2.48 | 66.88 |
| (66.79) | | | |
| 53.79, 1.72 (57.17) | 56.64, 1.87 (59.12) | 60.22, 2.48 | 66.52 |
| (64.98) | | | |
| 57.5, 3.17 (60.44) | 60.55, 3.05 (63.11) | 62.35, 2.61 | 68.47 |
| (65.7) | | | |
| 58.31, 1.88 (61.88) | 60.88, 1.72 (64.27) | 63.72, 2.0 | 68.80 |
| (67.15) | | | |
| 56.96, 2.14 (60.09) | 58.64, 2.03 (61.44) | 57.69, 1.52 | 62.92 |
| (60.29) | | | |
| 61.69, 1.79 (63.64) | 63.99, 1.71 (66.02) | 65.05, 2.23 | 70.30 |
| (69.31) | | | |
| 61.46, 1.76 (63.84) | 63.97, 1.6 (66.47) | 65.05, 2.27 | 70.19 |
| (67.87) | | | |
Table 6: Test Results (%) on BERT-base model in 1000 data split settings. For each task, average, standard deviation and maximum of the evaluation metric over ten random seeds have been reported in the tables. The format used is average, standard deviation (maximum).
Method CoLA STSB SST-2 QQP QNLI MNLI MNLI-m RTE Avg
| 8.72, | | |
|-------------|---------------|----------------------|
| Vanilla | 14.33 (37.75) | 51.93, 13.06 (68.51) |
| 1.48, | | |
| RecAdam | 2.62 (4.64) | 44.85, 13.61 (57.33) |
| 14.62, | | |
| Child-D | 11.18 (36.65) | 70.62, 14.35 (80.45) |
| 14.42, | | |
| Child-F | 10.21 (27.59) | 64.23, 16.7 (78.93) |
| 24.67, | | |
| Top-K-layer | 15.52 (40.53) | 71.88, 6.17 (77.96) |
| 23.37, | | |
| Mixout | 13.66 (43.49) | 74.03, 9.0 (82.85) |
| 7.15, | | |
| EDA | 11.22 (32.74) | 71.64, 8.0 (81.33) |
| 28.21, | | |
| BFTSSTop-K | 8.83 (38.83) | 76.19, 3.48 (81.34) |
| 30.48, | | |
| BFTSSU-V | 8.22 (44.52) | 76.62, 2.16 (79.55) |
Table 7: Test Results (%) on BERT-large model in 100 data split settings. For each task, average, standard deviation and maximum of the evaluation metric over ten random seeds have been reported in the tables. The format used is average, standard deviation (maximum).
| 58.42, 9.64 (78.33) | 9.72, 18.35 (46.86) | 61.83, 8.11 (70.05) | 33.94, 1.96 (36.33) | 33.82, 1.63 (35.81) | 51.19, 3.63 | 38.70 |
|-----------------------|-----------------------|-----------------------|-----------------------|-----------------------|---------------|---------|
| (56.32) | | | | | | |
| 53.56, 2.15 (57.34) | 11.98, 23.16 (55.86) | 57.02, 4.67 (64.23) | 35.92, 2.84 (38.09) | 35.71, 3.04 (38.91) | 51.7, 2.33 | 36.53 |
| (53.79) | | | | | | |
| 69.69, 8.1 (81.19) | 29.64, 28.75 (66.6) | 68.09, 5.95 (73.99) | 37.91, 3.86 (42.93) | 38.78, 3.89 (43.86) | 55.09, 4.04 | 48.05 |
| (61.73) | | | | | | |
| 68.58, 8.2 (81.08) | 34.19, 25.27 (62.46) | 67.8, 5.29 (74.08) | 37.69, 2.52 (40.57) | 38.41, 3.05 (42.39) | 54.8, 4.15 | 47.51 |
| (61.73) | | | | | | |
| 77.13, 7.59 (88.65) | 43.79, 25.06 (65.52) | 65.09, 6.21 (72.74) | 38.17, 3.0 (43.61) | 39.28, 4.0 (47.08) | 54.84, 4.92 | 51.86 |
| (62.45) | | | | | | |
| 80.22, 8.16 (87.16) | 45.21, 15.06 (62.38) | 68.56, 5.83 (77.7) | 38.2, 3.77 (44.78) | 38.82, 3.81 (45.16) | 55.42, 4.64 | 52.98 |
| (63.9) | | | | | | |
| 82.29, 5.72 (88.76) | 53.92, 19.03 (62.52) | 70.43, 3.14 (74.04) | 41.01, 4.37 (47.35) | 41.61, 5.03 (48.6) | 53.9, 3.02 | 52.75 |
| (59.21) | | | | | | |
| 85.55, 1.81 (87.96) | 60.0, 5.9 (68.37) | 73.11, 3.91 (77.96) | 41.64, 2.01 (44.83) | 42.68, 2.42 (45.71) | 56.57, 3.5 | 58.00 |
| (63.18) | | | | | | |
| 87.31, 1.7 (89.68) | 58.35, 4.78 (63.24) | 71.98, 1.85 (74.35) | 40.72, 2.14 (43.76) | 41.88, 4.04 (50.16) | 57.44, 3.93 | 58.10 |
| (67.51) | | | | | | |
| 41.57, | | |
|-------------|--------------|----------------------|
| Vanilla | 4.97 (48.7) | 75.01, 14.67 (84.06) |
| 35.42, | | |
| RecAdam | 9.07 (46.58) | 81.05, 1.08 (82.3) |
| 42.32, | | |
| Child-D | 3.87 (47.92) | 83.19, 2.8 (86.73) |
| 38.7, | | |
| Child-F | 7.13 (46.09) | 82.55, 2.53 (85.33) |
| 44.64, | | |
| Top-K-layer | 4.39 (54.52) | 83.48, 2.12 (85.7) |
| 43.46, | | |
| Mixout | 5.15 (49.96) | 84.75, 2.2 (86.98) |
| 13.95, | | |
| EDA | 3.94 (18.07) | 81.24, 1.78 (84.65) |
| 44.57, | | |
| BFTSSTop-K | 3.28 (49.89) | 84.71, 0.9 (86.18) |
| 44.4, | | |
| BFTSSU-V | 3.83 (51.31) | 85.2, 1.07 (86.84) |
| 83.88, 6.89 (90.37) | 40.81, 27.44 (67.27) | 74.98, 1.67 (77.54) |
|-----------------------|------------------------|-----------------------|
| 86.03, 1.96 (89.33) | 39.19, 29.21 (68.25) | 74.93, 3.22 (79.97) |
| 88.75, 1.37 (90.25) | 64.39, 3.16 (69.27) | 77.53, 4.41 (80.62) |
| 88.76, 1.23 (90.48) | 63.27, 6.95 (69.81) | 76.0, 2.5 (79.22) |
| 89.24, 0.97 (90.6) | 65.08, 5.03 (70.46) | 79.11, 1.38 (81.05) |
| 88.67, 0.65 (89.68) | 62.97, 7.88 (70.48) | 77.37, 2.24 (82.3) |
| 89.21, 0.95 (90.48) | 63.08, 5.04 (67.34) | 74.3, 2.58 (78.13) |
| 88.99, 1.0 (90.25) | 66.01, 2.74 (70.19) | 78.47, 1.57 (81.07) |
| 89.23, 0.8 (90.83) | 66.58, 1.92 (69.65) | 78.48, 1.83 (80.62) |
Method CoLA STSB SST-2 QQP QNLI MNLI MNLI-m RTE Avg
| 39.07, 5.19 (48.02) | 40.51, 5.98 (50.34) | 58.59, 4.16 | 56.80 |
|-----------------------|-----------------------|---------------|---------|
| (64.62) | | | |
| 40.13, 2.8 (43.5) | 41.27, 5.21 (48.68) | 57.36, 4.02 | 56.92 |
| (64.26) | | | |
| 47.27, 5.19 (54.06) | 49.68, 6.17 (57.66) | 59.96, 3.25 | 64.14 |
| (63.9) | | | |
| 46.62, 5.26 (55.43) | 49.08, 5.88 (57.83) | 59.42, 3.37 | 63.05 |
| (63.9) | | | |
| 48.01, 4.45 (54.99) | 50.24, 5.22 (58.28) | 59.75, 3.97 | 64.94 |
| (65.34) | | | |
| 47.33, 7.12 (57.65) | 49.12, 8.25 (61.38) | 60.11, 2.43 | 64.22 |
| (63.18) | | | |
| 51.06, 2.74 (55.08) | 53.11, 2.94 (57.5) | 55.16, 3.42 | 60.14 |
| (61.01) | | | |
| 53.17, 4.65 (57.18) | 55.18, 4.86 (59.64) | 61.16, 1.59 | 66.53 |
| (63.54) | | | |
| 52.75, 5.8 (58.63) | 55.09, 2.85 (58.93) | 60.25, 2.16 | 66.50 |
| (64.98) | | | |
Table 8: Test Results (%) on BERT-large model in 300 data split settings. For each task, average, standard deviation and maximum of the evaluation metric over ten random seeds have been reported in the tables. The format used is average, standard deviation (maximum).
RecAdam
50.66,
2.15
(53.18)
Top-K-layer
51.73,
2.59
(55.58)
Mixout
53.52,
2.06
(55.67)
EDA
22.79,
13.75
(36.45)
BFTSSTop-K
51.11,
3.67
(55.48)
BFTSSU-V
51.45,
2.78
(54.95)
Method CoLA STSB SST-2 QQP QNLI MNLI MNLI-m RTE Avg
Vanilla 47.48
(-)
81.86
(-)
90.25
(-)
71.30
(-)
81.68
(-)
55.72
(-)
61.09
(67.44)
65.09
(-) 69.31
Child-D 50.37
(-)
82.76
(-)
90.39
(-)
71.79
(-)
83.42
(-)
62.93
(-)
61.24
(67.04)
68.09
(-) 71.37
Child-F 48.44
(-)
82.25
(-)
90.34
(-)
72.15
(-)
83.09
(-)
62.47
(-)
57.19
(65.92)
65.52
(-) 70.18
| 81.86 | 90.25 | 71.30 | 81.68 | |
|----------------------|---------------------|---------------------|---------------------|---------------------|
| (-) | (-) | (-) | (-) | (-) |
| 50.66, 2.15 (53.18) | 86.97, 1.3 (88.38) | 90.28, 0.5 (90.94) | 71.46, 1.59 (73.51) | 82.74, 1.11 (84.7) |
| 82.76 | 90.39 | 71.79 | 83.42 | |
| (-) | (-) | (-) | (-) | (-) |
| 82.25 | 90.34 | 72.15 | 83.09 | |
| (-) | (-) | (-) | (-) | (-) |
| 51.73, 2.59 (55.58) | 86.95, 1.67 (88.92) | 90.56, 0.98 (91.74) | 72.66, 1.07 (74.07) | 83.4, 0.66 (84.5) |
| 53.52, 2.06 (55.67) | 87.91, 0.54 (88.77) | 90.4, 0.64 (91.63) | 70.75, 2.0 (73.49) | 83.03, 1.59 (84.51) |
| 22.79, 13.75 (36.45) | 85.61, 1.31 (87.14) | 90.53, 0.78 (91.97) | 68.6, 3.45 (72.24) | 79.86, 2.9 (83.14) |
| 51.11, 3.67 (55.48) | 88.98, 0.5 (89.56) | 90.41, 0.3 (90.94) | 72.62, 0.76 (73.68) | 83.28, 0.68 (84.46) |
| 51.45, 2.78 (54.95) | 89.27, 0.33 (89.94) | 90.76, 0.75 (91.63) | 72.47, 0.98 (74.06) | 83.23, 0.88 (84.39) |
Table 9: Test Results (%) on BERT-large model in 1000 data split settings. For each task, average, standard deviation and maximum of the evaluation metric over ten random seeds have been reported in the tables. The format used is average, standard deviation (maximum). The results for Vanilla, Child-D, and Child-F were taken from the original paper (Xu et al., 2021). The paper only reported the mean of ten seeds.
| 55.72 | 61.09 | 65.09 (-) | 69.31 |
|---------------------|---------------------|-------------|---------|
| (-) | (67.44) | | |
| 56.41, 4.25 (60.58) | 58.72, 4.32 (62.85) | 64.04, 2.25 | 70.16 |
| (67.51) | | | |
| 62.93 | 61.24 | 68.09 (-) | 71.37 |
| (-) | (67.04) | | |
| 62.47 | 57.19 | 65.52 (-) | 70.18 |
| (-) | (65.92) | | |
| 61.13, 3.54 (65.9) | 63.55, 3.47 (68.28) | 66.39, 3.92 | 72.05 |
| (70.76) | | | |
| 62.06, 3.04 (65.86) | 64.92, 3.1 (68.82) | 65.92, 2.46 | 72.32 |
| (69.68) | | | |
| 56.41, 9.35 (64.02) | 58.33, 10.1 (66.52) | 58.23, 4.29 | 65.04 |
| (62.45) | | | |
| 63.65, 2.61 (67.52) | 66.04, 1.73 (68.47) | 66.79, 1.48 | 72.86 |
| (68.95) | | | |
| 64.47, 3.67 (68.24) | 66.37, 2.21 (68.39) | 66.86, 2.65 | 73.11 |
| (71.84) | | | |
| Vanilla Random S Top-K Initialized S Top-K BFTSSRandomTop-K BFTSSTop-K |
|--------------------------------------------------------------------------|
| 1.32, 2.9 (4.64) | 22.49, 23.77 (50.44) |
|---------------------|------------------------|
| 0.95, 2.0 (4.86) | 43.12, 15.71 (69.8) |
| 1.99, 3.78 (10.3) | 52.45, 9.4 (69.0) |
| 4.65, 4.16 (14.97) | 73.42, 3.85 (78.04) |
| 16.27, 8.19 (26.51) | 78.29, 1.37 (80.47) |
Method CoLA STSB SST-2 QQP QNLI MNLI MNLI-m RTE Avg
| 55.44, 4.98 (64.33) | 13.0, 18.13 (46.46) | 52.47, 11.88 (69.49) | 33.71, 3.35 (39.35) | 33.93, 3.6 (39.49) | 52.56, 3.32 (55.96) |
|-----------------------|-----------------------|------------------------|-----------------------|----------------------|-----------------------|
| 56.4, 5.5 (65.71) | 31.29, 25.64 (63.43) | 65.65, 5.08 (69.89) | 35.01, 3.28 (40.45) | 34.94, 3.65 (42.18) | 52.96, 2.58 (57.04) |
| 63.28, 7.11 (73.17) | 30.54, 27.53 (59.87) | 65.31, 9.74 (72.93) | 35.76, 4.25 (40.57) | 36.8, 5.39 (42.95) | 52.49, 2.08 (54.51) |
| 65.68, 6.23 (75.8) | 62.99, 1.12 (64.48) | 69.25, 1.05 (70.88) | 38.27, 1.45 (41.28) | 38.25, 2.45 (40.97) | 52.92, 3.72 (56.32) |
| 74.71, 6.52 (83.03) | 57.75, 3.6 (63.5) | 71.7, 2.13 (74.81) | 39.98, 2.26 (43.1) | 41.66, 2.61 (45.3) | 56.03, 1.9 (58.48) |
52.56,
3.32
(55.96)
33.11
52.96,
2.58
(57.04)
40.04
52.49,
2.08
(54.51)
42.33
52.92,
3.72
(56.32)
50.68
56.03,
1.9
(58.48)
54.55
Table 10: Impact of initialization of S and using BFTSS(Top-K) on BERT-base at 100 data split settings. For each task, average, standard deviation and maximum of the evaluation metric over ten random seeds have been reported in the tables. The format used is average, standard deviation (maximum).
51.19,
3.63
(56.32)
38.70
| Method | CoLA | STSB | SST-2 | QQP | QNLI | MNLI | MNLI-m | RTE | Avg |
|---------------------|---------------------|----------------------|----------------------|---------------------|---------------------|---------------------|---------------------|---------------------|-------|
| 8.72, | | | | | | | | | |
| Vanilla | 14.33 (37.75) | 51.93, 13.06 (68.51) | 58.42, 9.64 (78.33) | 9.72, 18.35 (46.86) | 61.83, 8.11 (70.05) | 33.94, 1.96 (36.33) | 33.82, 1.63 (35.81) | 51.19, 3.63 (56.32) | |
| 6.31, | | | | | | | | | |
| Random S Top-K | 5.3 | | | | | | | | |
| (16.75) | 62.4, 13.44 (75.49) | 61.72, 9.73 (76.83) | 36.71, 31.27 (64.6) | 59.54, 7.91 (73.55) | 35.5, 3.16 (39.87) | 35.77, 3.5 (41.12) | 52.27, 4.4 (58.12) | | |
| 19.16, | | | | | | | | | |
| Initialized S Top-K | 12.01 (36.34) | 68.71, 13.74 (78.67) | 80.38, 7.73 (87.27) | 39.57, 23.47 (60.9) | 66.79, 7.0 (72.69) | 37.9, 2.39 (40.64) | 39.02, 4.14 (44.59) | 56.53, 4.14 (63.54) | |
| 9.68, | | | | | | | | | |
| BFTSSRandomTop-K | 10.43 (27.52) | 72.11, 4.28 (77.97) | 67.63, 12.15 (83.26) | 58.16, 5.0 (63.51) | 67.98, 4.17 (72.8) | 39.43, 3.14 (43.21) | 40.2, 3.65 (44.28) | 53.83, 4.53 (64.26) | |
| 28.21, | | | | | | | | | |
| BFTSSTop-K | 8.83 (38.83) | 76.19, 3.48 (81.34) | 85.55, 1.81 (87.96) | 60.0, 5.9 (68.37) | 73.11, 3.91 (77.96) | 41.64, 2.01 (44.83) | 42.68, 2.42 (45.71) | 56.57, 3.5 (63.18) | |
52.27,
4.4
(58.12)
43.78
56.53,
4.14
(63.54)
51.01
53.83,
4.53
(64.26)
51.13
56.57,
3.5
(63.18)
58.00
Table 11: Impact of initialization of S and using BFTSS(Top-K) on BERT-large at 100 data split settings. For each task, average, standard deviation and maximum of the evaluation metric over ten random seeds have been reported in the tables. The format used is average, standard deviation (maximum).
52.56,
3.32
(55.96)
33.11
52.17,
2.71
(55.23)
29.3
| Method | CoLA | STSB | SST-2 | QQP | QNLI | MNLI | MNLI-m | RTE | Avg |
|-------------------|--------------|-----------------------|---------------------|----------------------|----------------------|---------------------|----------|-------|-------|
| 1.32, | | | | | | | | | |
| Vanilla | 2.9 (4.64) | 22.49, 23.77 (50.44) | 55.44, 4.98 (64.33) | 13.0, 18.13 (46.46) | 52.47, 11.88 (69.49) | 33.71, 3.35 (39.35) | | | |
| 0.51, | | | | | | | | | |
| Random S U-V | 2.52 (4.64) | -16.31, 53.34 (55.63) | 51.23, 5.33 (62.39) | 28.19, 29.72 (58.16) | 51.36, 7.12 (67.05) | 33.68, 3.08 (38.7) | | | |
| 2.34, | | | | | | | | | |
| Initialized S U-V | 4.73 (13.68) | 52.45, 8.97 (68.62) | 63.23, 7.0 (77.06) | 31.63, 27.98 (61.44) | 65.45, 9.16 (71.54) | 36.01, 4.51 (40.87) | | | |
| 1.39, | | | | | | | | | |
| BFTSSRandom U-V | 2.02 (4.64) | 37.29, 26.61 (59.0) | 52.61, 5.94 (63.88) | 33.26, 28.61 (56.49) | 52.64, 4.56 (63.43) | 34.27, 2.81 (39.29) | | | |
| 15.84, | | | | | | | | | |
| BFTSSU-V | 8.86 (26.58) | 78.56, 1.32 (80.93) | 75.54, 4.51 (81.65) | 58.37, 3.8 (63.94) | 71.84, 2.04 (73.93) | 39.98, 2.17 (42.52) | | | |
| 33.93, 3.6 (39.49) | 52.56, 3.32 (55.96) |
|----------------------|-----------------------|
| 33.59, 3.02 (38.74) | 52.17, 2.71 (55.23) |
| 36.81, 5.47 (43.1) | 52.13, 2.58 (55.23) |
| 34.47, 3.14 (40.93) | 50.18, 4.91 (55.96) |
| 41.72, 2.48 (44.84) | 55.96, 2.6 (60.29) |
52.13,
2.58
(55.23)
42.51
50.18,
4.91
(55.96)
37.01
55.96,
2.6
(60.29)
54.72
Table 12: Impact of initialization of S and using BFTSS(U-V) on BERT-base at 100 data split settings. For each task, average, standard deviation and maximum of the evaluation metric over ten random seeds have been reported in the tables. The format used is average, standard deviation (maximum).
| Method | CoLA | STSB | SST-2 | QQP | QNLI | MNLI | MNLI-m | RTE | Avg |
|-------------------|---------------|----------------------|---------------------|----------------------|---------------------|---------------------|---------------------|---------------------|-------|
| 8.72, | | | | | | | | | |
| Vanilla | 14.33 (37.75) | 51.93, 13.06 (68.51) | 58.42, 9.64 (78.33) | 9.72, 18.35 (46.86) | 61.83, 8.11 (70.05) | 33.94, 1.96 (36.33) | 33.82, 1.63 (35.81) | 51.19, 3.63 (56.32) | |
| 1.14, | | | | | | | | | |
| Random S U-V | 3.03 (6.56) | -14.66, 35.26 (29.0) | 50.5, 1.35 (53.56) | 22.42, 28.75 (57.26) | 52.59, 3.82 (61.47) | 32.99, 1.82 (36.94) | 33.01, 1.49 (36.07) | 49.57, 3.03 (53.07) | |
| 18.82, | | | | | | | | | |
| Initialized S U-V | 10.59 (31.91) | 72.48, 7.79 (81.19) | 82.49, 4.32 (87.61) | 46.12, 18.53 (66.9) | 66.71, 7.48 (73.57) | 38.38, 2.79 (43.8) | 39.18, 3.64 (43.5) | 55.52, 3.92 (61.37) | |
| 0.71, | | | | | | | | | |
| BFTSSRandom U-V | 3.28 (5.42) | 7.25, 38.19 (53.02) | 50.88, 2.18 (55.62) | 38.8, 26.77 (57.17) | 51.4, 5.45 (65.07) | 34.14, 2.38 (38.32) | 34.22, 2.76 (40.5) | 51.05, 3.38 (55.23) | |
| 30.48, | | | | | | | | | |
| BFTSSU-V | 8.22 (44.52) | 76.62, 2.16 (79.55) | 87.31, 1.7 (89.68) | 58.35, 4.78 (63.24) | 71.98, 1.85 (74.35) | 40.72, 2.14 (43.76) | 41.88, 4.04 (50.16) | 57.44, 3.93 (67.51) | |
51.19,
3.63
(56.32)
38.70
49.57,
3.03
(53.07)
28.45
55.52,
3.92
(61.37)
52.46
51.05,
3.38
(55.23)
33.56
57.44,
3.93
(67.51)
58.10
| Dataset | Vanilla S-W | Top-K | U-V | Dataset | Vanilla S-W | Top-K | U-V |
|---------------------------------------------------------------------------------------------------------|---------------|---------|-------|-----------|---------------|---------|-------|
| CoLA | 2.05 | 16.27 | 15.84 | | | | |
| (7.38) | (26.51) | (26.58) | | | | | |
| STSB | 55.12 | 78.29 | 78.56 | | | | |
| (72.12) | (80.47) | (80.93) | | | | | |
| SST-2 | 63.62 | 74.71 | 75.54 | | | | |
| (72.36) | (83.03) | (81.65) | | | | | |
| QQP | 16.02 | 57.75 | 58.37 | | | | |
| (58.16) | (63.5) | (63.94) | | | | | |
| QNLI | 59.29 | 71.70 | 71.84 | | | | |
| (71.77) | (74.81) | (73.93) | | | | | |
| MNLI | 35.60 | 39.98 | 39.98 | | | | |
| (40.60) | (43.1) | (42.52) | | | | | |
| MNLI-m | 35.46 | 41.66 | 41.72 | | | | |
| (43.27) | (45.3) | (44.84) | | | | | |
| RTE | 51.59 | 56.03 | 55.96 | | | | |
| (53.79) | (58.48) | (60.29) | | | | | |
| Avg | 39.84 | 54.55 | 54.72 | | | | |
| (a) Comparison of Vanilla S − W with our approaches, with BERT-base model, on 100 data split settings. | CoLA | 16.29 | 28.21 | 30.48 | | | |
| (35.16) | (38.83) | (44.52) | | | | | |
| STSB | 70.52 | 76.19 | 76.62 | | | | |
| (80.68) | (81.34) | (79.55) | | | | | |
| SST-2 | 79.02 | 85.55 | 87.31 | | | | |
| (86.24) | (87.96) | (89.68) | | | | | |
| QQP | 34.58 | 60.00 | 58.35 | | | | |
| (63.55) | (68.37) | (63.24) | | | | | |
| QNLI | 69.24 | 73.11 | 71.98 | | | | |
| (75.65) | (77.96) | (74.35) | | | | | |
| MNLI | 38.15 | 41.64 | 40.72 | | | | |
| (41.17) | (44.83) | (43.76) | | | | | |
| MNLI-m | 39.46 | 42.68 | 41.88 | | | | |
| (43.64) | (45.71) | (50.16) | | | | | |
| RTE | 53.21 | 56.57 | 57.44 | | | | |
| (64.26) | (63.18) | (67.51) | | | | | |
| Avg | 50.06 | 58.00 | 58.10 | | | | |
| (b) Comparison of Vanilla S − W with our approaches, with BERT-large model, on 100 data split settings. | | | | | | | |
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
✓ A1. Did you describe the limitations of your work?
We discussed in Section 6 A2. Did you discuss any potential risks of your work?
Not applicable. Left blank.
✓ A3. Do the abstract and introduction summarize the paper's main claims?
We discussed in Section 1
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✓ **Did You Use Or Create Scientific Artifacts?** We Discussed In Section 4.1 And Section Appendix B
✓ B1. Did you cite the creators of artifacts you used?
We discussed in Section 4.1 and Section Appendix B
B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
Not applicable. Left blank.
B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
Not applicable. Left blank.
B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
Not applicable. Left blank.
B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
Not applicable. Left blank.
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
We discussed in Section 4.1 and Table 3.
## C ✓ **Did You Run Computational Experiments?** We Discussed In Section 4
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
We discussed in Appendix B.
The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance.
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
We discussed in Section 4.2 and Appendix B.
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
We discussed in Appendix B.
C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
Not applicable. Left blank.
D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
No response.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? No response.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
No response.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
No response.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
No response. |
wang-etal-2023-kanbun | Kanbun-{LM}: Reading and Translating Classical {C}hinese in {J}apanese Methods by Language Models | https://aclanthology.org/2023.findings-acl.545 | Recent studies in natural language processing (NLP) have focused on modern languages and achieved state-of-the-art results in many tasks. Meanwhile, little attention has been paid to ancient texts and related tasks. Classical Chinese first came to Japan approximately 2,000 years ago. It was gradually adapted to a Japanese form called Kanbun-Kundoku (Kanbun) in Japanese reading and translating methods, which has significantly impacted Japanese literature. However, compared to the rich resources of ancient texts in mainland China, Kanbun resources remain scarce in Japan.To solve this problem, we construct the first Classical-Chinese-to-Kanbun dataset in the world. Furthermore, we introduce two tasks, character reordering and machine translation, both of which play a significant role in Kanbun comprehension. We also test the current language models on these tasks and discuss the best evaluation method by comparing the results with human scores. We release our code and dataset on GitHub. | # Kanbun-Lm: Reading And Translating Classical Chinese In Japanese Methods By Language Models
Hao Wang Hirofumi Shimizu Daisuke Kawahara Waseda University
{conan1024hao@akane., bowen1205@toki., dkw@}waseda.jp
## Abstract
Recent studies in natural language processing (NLP) have focused on modern languages and achieved state-of-the-art results in many tasks. Meanwhile, little attention has been paid to ancient texts and related tasks. Classical Chinese first came to Japan approximately 2,000 years ago. It was gradually adapted to a Japanese form called Kanbun-Kundoku
(Kanbun) in Japanese reading and translating methods, which has significantly impacted Japanese literature. However, compared to the rich resources for ancient texts in mainland China, Kanbun resources remain scarce in Japan. To solve this problem, we construct the first Classical-Chinese-to-Kanbun dataset in the world. Furthermore, we introduce two tasks, character reordering and machine translation, both of which play a significant role in Kanbun comprehension. We also test the current language models on these tasks and discuss the best evaluation method by comparing the results with human scores. We release our code and dataset on GitHub1.
## 1 Introduction
Classical Chinese was introduced to Japan approximately 2,000 years ago (Okimori, 2017). Then Classical Chinese began to be adapted to a Japanese form in Japanese reading and translating methods in the 8th century A.D. (Kin, 2010). This form is called *Kanbun-Kundoku*. For simplicity, we call it Kanbun in this paper. Kanbun has influenced many famous Japanese literary works, such as *Manyoshu* (Kobayashi, 1964) and The Tale of Genji (Duan, 2008). To this day, Kanbun still occupies 50 points out of 200 in the common test for Japanese university admissions, which shows the deep influence of Kanbun on Japanese culture.
Although Chinese and Japanese have many characters in common, reading Classical Chinese is not easy for Japanese people because of the following 1https://github.com/nlp-waseda/Kanbun-LM
two reasons. First, Chinese (also Classical Chinese) is in SVO (Subject-Verb-Object) word order, which is the same as English. On the other hand, Japanese is in SOV (Subject-Object-Verb) word order, which leads to difficulties in understanding Chinese. Second, Chinese is an isolating language with little to no morphological variation and a nearly one-to-one ratio of morphemes to words. However, Japanese is an agglutinative language that attaches prefixes and suffixes to a word to indicate the grammatical relationship of that word in a sentence. These differences led to the creation of Kanbun. To make the text from SVO to SOV, from isolating to agglutinative, Japanese people developed a system of various conventional reading punctuation, diacritical and syntactic markers (Crawcour, 1965). We list the three main types of markers below and show a specific example of Kanbun in Figure 1. Since the Kanbun system is highly sophisticated, we omit to explain all the rules in this paper. There are also other systems for reading Classical Chinese in other regions like Korean Peninsula (Fujimoto, 2014) and Khitan, but we focus on the Japanese Kanbun system in this paper.
Kaeriten (ja:返り点) marks placed on the left side of characters indicating the characters need to be read in reverse, making the sentence from SVO
to SOV. (e.g., "我有レ兄" (en:I have a brother) should be read as "我兄有", "レ" is the mark)
Yomigana (ja:読み仮名) Hiragana (Japanese phonological units) that are placed on the right side of characters, indicating the characters' reading in Japanese. (e.g., "不" (en:no) is read as "ず")
Okurigana (ja:送り仮名) Katakana (Phonological units, collectively referred to as Kana with Hiragana) that are placed on the right side of characters for making the sentence from isolating to agglutinative. (e.g., the Chinese character "飲" (en:drink)
is "飲む" in Japanese, which has an extra Kana)
![1_image_2.png](1_image_2.png)
Compared to the vast amount of research and language resources available for Classical Chinese, there is little research on Kanbun, and the language resources for Kanbun are highly scarce. For instance, over 48,900 Tang poems (poems written in the characteristic style of the Tang dynasty) are included in *Quan Tangshi* and are all accessible via the Internet. However, to our knowledge, only around 500 Tang poems adapted to Kanbun are accessible. This large gap makes the research on Kanbun increasingly difficult. Although a lot of data of Kanbun exists in ancient books, it is beyond our ability to apply OCR to them and compile the results into clean data. Therefore, building a high-performance Classical-Chinese-to-Kanbun translator is the most efficient way to address the lack of Kanbun language resources. Moreover, understanding the mechanisms of Kanbun will also lead to understanding Classical Japanese literature
(such as Wakan konkobun, a mixture of Japanese ¯
and Chinese writing styles), as well as Japanese culture and thought.
In previous work, Yasuoka (2018, 2019); Yasuoka et al. (2022) proposed a series of applications for Classical Chinese using Universal Dependencies (Nivre et al., 2016). Yasuoka (2020a,b) proposed a method for Classical-Chinese-to-Kanbun machine translation. However, this method is rulebased and less precise, and the author did not make
![1_image_0.png](1_image_0.png)
![1_image_1.png](1_image_1.png)
a dataset to conduct a quantitative evaluation. In this work, we construct the first Classical-Chineseto-Kanbun dataset in the world. Based on this, we introduce Kanbun-LM, where we fine-tune language models for reading and translating Classical Chinese in Japanese methods, trying to fill the resource gap.
The main contributions of our work are summarized as follows:
- We construct the first Classical-Chinese-toKanbun dataset in the world, which addresses the lack of Kanbun language resources.
- We introduce two tasks for the dataset, character reordering and machine translation, both of which are significant in Kanbun comprehension. We conduct quantitative evaluations for both tasks and achieved state-of-the-art results in both tasks using language models, which has shown major improvement over the baseline (Yasuoka, 2020a,b). We also construct a pipeline for the tasks and verify whether prereordering is helpful to machine translation.
- We discuss the best evaluation method for Classical-Chinese-to-Kanbun translation by comparing the results with human scores, which is not covered in existing work.
## 2 Related Work 2.1 Work For Classical Chinese
Although Classical Chinese is still an unstudied field, it has enough resources for exploration compared to other low-resource ancient texts.
Daizhige2contains approximately 3.3 billion tokens and is the largest dataset for Classical Chinese. The *Siku Quanshu* corpus is made from the largest collection of books in Chinese history, with 36,381 volumes and approximately 997 million words. Chinese-Poetry3is a database that contains more than 300,000 ancient Chinese poems. There are also several corpora with extra information that can be used for downstream tasks. For example, the Ancient Chinese Corpus (ACC)4is a dataset of Zuo Zhuan (a Pre-Qin Chinese book published late in the 4th century BC) that contains the information of word segmentation and POS tags.
Since BERT (Devlin et al., 2019) and BERTlike models (Liu et al., 2019; Lan et al., 2019; He et al., 2020) were proposed, pre-training language models on a large corpus and fine-tuning them on downstream tasks have become a paradigm in NLP
studies. In the Classical Chinese field, several pretrained models have also been proposed. SikuBERT and SikuRoBERTa (Wang et al., 2021) are pre-trained on the *Siku Quanshu* corpus and evaluated on the following four tasks using the ACC
dataset: word segmentation, punctuation restoration, POS tagging, and named entity recognition.
GuwenBERT5is pre-trained on the *Daizhige* corpus and evaluated on the CCLUE6 benchmark.
Meanwhile, GPT (Radford et al., 2019)-based models such as SikuGPT27and T5 (Raffel et al., 2020)-
based models such as Mengzi-T5 (Zhang et al.,
2021) are also proposed for text generation.
To evaluate the general performance of pretrained language models, benchmarks for natural language understanding (NLU) tasks have been proposed in many languages. For Classical Chinese, CCLUE provides five NLU tasks, including sentence segmentation, named entity recognition, text classification, and text retrieval. Recently, WYWEB (Anonymous, 2022) has been proposed.
It contains eight tasks, including sentence classification, sequence labeling, reading comprehension, and machine translation.
## 2.2 Work For Kanbun
Yasuoka (2018) proposed a method to reorder Classical Chinese sentences to Japanese reading order using dependency parsing by Universal Dependencies (Nivre et al., 2016). First, the method applies morphological analysis to Classical Chinese sentences to segment them into tokens and assign POS
tags. Second, it obtains dependency relations using the arc-planar algorithm (Gómez-Rodríguez and Nivre, 2010), which was mainly trained on Universal Dependencies of Mengzi, *Lunyu*, and *Liji*
(these are all ancient Chinese books). Finally, it applies character reordering based on the results of dependency parsing and 24 rules proposed by the author.
Furthermore, Yasuoka (2020a,b) proposed an encode-reorder-decode model, called UD-5https://github.com/ethan-yt/guwenbert 6https://cclue.top 7https://huggingface.co/JeffreyLau/SikuGPT2 Kundoku, to translate Classical Chinese to Kanbun, while the encoding and reordering modules take the approaches introduced in Yasuoka (2018).
To make the reordered sentences into Kanbun, the author introduced a rule-based decoding module that adds Okurigana to sentences and makes the sentences from isolating to agglutinative. Okurigana can be roughly divided into two categories:
auxiliary words and inflectional suffixes. The rules also support special characters, such as characters left unpronounced and characters that need to be read twice when reading Kanbun.
Yasuoka (2020b) also conducted a brief evaluation for generated Kanbun results using BLEU (Papineni et al., 2002) and RIBES (Hirao et al., 2011).
However, the author only evaluated a few examples and did not make an in-depth discussion.
## 3 Our Dataset And Tasks
We construct a parallel dataset for Classical Chinese and Kanbun. The dataset consists of original ancient Chinese texts, Japanese reading orders, and Kanbun texts. We show examples in Table 1.
Although it is crucial to choose texts that cover as many periods as possible since vocabulary and grammar change with time, it is difficult to construct a comprehensive dataset. To our knowledge, Tangshixuan8(Selection of Tang Poems) is the largest resource containing both original ancient Chinese texts and translated Kanbun texts. We use this resource to make our dataset. For preprocessing, we extract the Japanese reading order from Kanbun by a rule-based program. For the special tokens that may not appear in Kanbun or appear multiple times, we annotated them manually.
We also convert the characters from old character forms to new character forms (kind of like transforming Traditional Chinese to Simplified Chinese, but in Japanese character forms) using dictionaries to mitigate the out-of-vocabulary problem.
Tangshixuan contains a total of 465 poems. We split the dataset using group shuffle split to ensure that all sentences in one poem would not be split.
Table 2 lists the statistics of the dataset.
Based on the dataset, we introduce two tasks, character reordering and machine translation, both of which are significant in Kanbun comprehension.
For character reordering, the goal is to transform Classical Chinese texts into Japanese reading orders, from SVO to SOV. Japanese reading orders as 8https://kanbun.info/syubu/toushisen000.html
![3_image_0.png](3_image_0.png)
| Classical Chinese | Japanese reading order | Kanbun | (English tr.) |
|---------------------|--------------------------|--------------------------|-----------------------------------------|
| 春眠不覚暁 | 12543 | 春眠暁を覚えず | This morning of spring in bed I'm lying |
| 処処聞啼鳥 | 12453 | 処処啼鳥を聞く | Not wake up till I hear birds crying |
| 夜来風雨声 | 12345 | 夜来風雨の声 | After one night of wind and showers |
| 花落知多少 | 12345 | 花落つること知んぬ多少ぞ | How many are the fallen flowers |
| Split | Poems | Sentences | Characters |
|------------|---------|-------------|--------------|
| Train | 372 | 2,731 | 16,411 |
| Validation | 46 | 320 | 2,038 |
| Test | 47 | 370 | 2,254 |
shown in Table 1, such as "12543", are the targets to be predicted. Machine translation is a sequenceto-sequence task that translates Classical Chinese texts into Kanbun. Since the source and target sentences share the vocabulary, it can also be considered as a multilingual rewriting task.
## 4 Experimental Setup 4.1 Implementation For Tasks
In this section, we introduce our implementation details of the two tasks: character reordering and machine translation. We also construct a pipeline for the two tasks and verify whether pre-reordering is helpful to machine translation. We use NVIDIA
A100 (40GB) for the experiments. Figure 2 shows an overview of our pipeline.
For character reordering, we propose a rankbased sorting method that fine-tunes BERT-like models to predict the rank (position in Japanese reading order) for every character in a sentence.
We split each sentence into characters and preprocess them into inputs by the form {character}{the character's index in the sentence}[SEP]{sentence}.
The character's index is added to handle the cases where more than two identical characters appear in one sentence. To make gold labels for training, we normalize the ranks by the lengths of the sentences, making the value of ranks range from 0 to 1 (for a sentence of length 5, the ranks will be normalized from 1, 2, ..., 5 to 0.2, 0.4, ..., 1). Once we collect the output ranks, we sort them in ascending order and restore them to the original characters. Then we obtain a reordered sentence. An illustration of our sorting method is shown in (A) of Figure 2.
For machine translation, we simply fine-tune T5 and GPT to generate Kanbun from original Classical Chinese sentences. Since we want to see the real level of each model, we did not apply any filter to the generations.
For the pipeline, we pass original Classical Chinese sentences to the character reordering module first, making them from SVO to SOV. Then we pass the sorted sentences to the machine translation module to add Okurigana, transforming from isolating to agglutinative.
## 4.2 Pre-Trained Models 4.2.1 Models For Character Reordering
We conduct experiments on five models in total for character reordering. Two models are pretrained on Japanese corpora, two on Chinese corpora, and one on Classical Chinese corpora. All of the models' tokenizers are character-based because we intend to predict the exact position of each character. We do not use multilingual models like mBERT (Devlin et al., 2019) and XLMRoBERTa (Conneau et al., 2020) because their tokenizers do not generally expect character-based encoding.9 We use the following five models, all in base size, consisting of 12 layers, 768 dimensions of hidden states, and 12 attention heads. We show more details of the models in Appendix A
and details of fine-tuning hyper-parameters in Appendix B.
BERT-japanese-char This model is trained on the Japanese version of Wikipedia.
RoBERTa-japanese-char-wwm This model is trained on the Japanese version of Wikipedia and the Japanese portion of CC-100 (Conneau et al.,
2020). The whole word masking (wwm) (Cui et al.,
2021) strategy is applied.
BERT-chinese This model is trained on the Chinese version of Wikipedia.
RoBERTa-chinese-wwm-ext This model is trained on 5.4B tokens, which include the Chinese version of Wikipedia and extra data. The whole word masking strategy is applied.
RoBERTa-classical-chinese-char This model is derived from GuwenBERT. Simplified characters' embeddings are expanded to traditional characters, making vocabulary size larger.
## 4.2.2 Models For Machine Translation
We use mT5 (Xue et al., 2021) and mGPT (Shliazhko et al., 2022) for machine translation experiments. We do not use Japanese models because the vocabulary size is much smaller than multilingual models, and they generate many [UNK] tokens, leading to unreadable generations. We show more details of the models in Appendix A and details of fine-tuning hyper-parameters in Appendix B.
9mBERT can tokenize Chinese into characters effectively.
However, there is no guarantee that it tokenizes Japanese into characters too, since not all Japanese characters are in the CJK
Unicode range.
mT5 mT5 is trained on the mC4 (Raffel et al.,
2020) corpus, covering 101 languages (Chinese and Japanese are both contained). We use small, base, and large models in our experiments.
mGPT This model is trained on 60 languages using Wikipedia and the mC4 corpus (Chinese and Japanese are both contained).
## 4.3 Automatic Evaluation Metrics 4.3.1 Metrics For Character Reordering
Following the previous sentence reordering studies (Cui et al., 2020; Kumar et al., 2020; Zhu et al.,
2021), we use the following metrics for evaluation.
Kendall's Tau (τ ) This metric measures the rank correlation between two sentences. Fewer the number of inversions needed to sort predicted character orders into ground truth character orders means stronger correlation and better performance.
$$\tau=1-{\frac{4(\#i n v e r s i o n s)}{\#c h a r(\#c h a r-1)}}$$
Perfect Match Ratio (PMR) This metric measures the percentage of predicted character orders exactly matching with ground truth orders.
## 4.3.2 Metrics For Machine Translation
There is no systematic work on evaluating Classical-Chinese-to-Kanbun translation. On top of BLEU and RIBES, which are used by Yasuoka (2020b), we add ROUGE (Lin, 2004) and BERTScore (Zhang et al., 2020) for our experiments, trying to maintain the diversity of evaluation metrics. We implemented all these metrics on the basis of characters since word-based evaluation highly depends on morphological analysis, and related packages for Kanbun are still immature.
BLEU BLEU (Papineni et al., 2002) is the most widely used metric in machine translation. It is an n-gram-based metric that computes the exact match precision scores of n-grams that occur in the reference and the candidate.
RIBES RIBES (Hirao et al., 2011) is a rankbased metric proposed to evaluate machine translation between languages with widely differing word orders. It applies word mapping to the reference and the candidate first, and then computes rank correlation as scores for the evaluation.
ROUGE ROUGE (Lin, 2004) is a commonly used n-gram-based metric for summarization evaluation. Lin (2004) proposed ROUGE-n, which computes the exact match recall scores of n-grams, and ROUGE-L, which computes scores using longest common subsequence instead. Since ROUGE-1, ROUGE-2, and ROUGE-L did not show much difference in our experiments, we only report ROUGE-L's results in this paper.
BERTScore BERTScore (Zhang et al., 2020) is an embedding-based metric that computes a similarity score for each token in the candidate with each token in the reference. To calculate characterbased scores, we use BERT-japanese-char (layer 11) in our experiments.
## 4.4 Manual Annotations
We recruited three people who are bilingual in Chinese and Japanese as our human annotators. There are two criteria for annotator selection: (1) ability to read Classical Chinese in original word order;
(2) ability to get full marks in the Kanbun part of the Japanese university admission exam.
For character reordering, to compare with the models, we asked the annotators to do the same sorting task, which the models did, with no access to reference materials and the Internet. We collected results, computed Kendall's Tau and PMR
scores, and averaged them.
For machine translation, we asked the annotators to evaluate models' generations according to the following three metrics, rated on a 5-point scale from 1 to 5 (larger is better). The reference sentences were also evaluated to measure the quality of the dataset. The annotators were allowed to search for reference materials in this evaluation.
Relevance This rating measures how well the translation is done, which judges whether the content is translated without any shortage or deviation.
Accuracy This rating measures the quality of a generation, which judges whether it is lexically and grammatically correct in Japanese.
Fluency This rating measures the fluency and naturalness of a generation and whether the rhythm of Classical Chinese remains.
## 5 Results And Discussion 5.1 Character Reordering
The results of the character reordering task are presented in Table 3. UD-Kundoku is the baseline method that was proposed by Yasuoka (2020a,b).
Human scores are the average of the three annotators' results.
All the BERT-like models outperformed the baseline and human scores. The two Chinese models performed slightly better than the two Japanese models, and RoBERTa-classical-chinesechar, which was pre-trained on the ancient Chinese corpus, performed the best. Compared to the baseline, RoBERTa-classical-chinese-char achieved 22.5% better Kendall's Tau and 94.7% better PMR scores. Compared to human scores, RoBERTa-classical-chinese-char achieved 11.8%
better Kendall's Tau and 29.2% better PMR scores. Gap between the Chinese and Japanese models.
Since more ancient texts are present in a Chinese corpus like Wikipedia, we speculate that the score gap between the Chinese and Japanese models originates from the pre-training corpus rather than the reading orders of the pre-training languages. Considering that this task requires converting SVO to SOV, it would be ideal to use both Chinese and Japanese corpora for pre-training. However, since the existing multilingual models cannot guarantee to tokenize an input text into characters, we leave this validation to future work.
Additional data did not help. The two RoBERTa models did not score higher than the two BERT models. This is probably because many ancient texts do not exist in the additional corpus like CC-100 (Conneau et al., 2020), and thus the additional training in RoBERTa did not strengthen the models' understanding of Classical Chinese.
BERT is more accurate in details. When comparing with human scores, we had an interesting finding that although the PMR scores of humans and RoBERTa-japanese-char-wwm are similar, Kendall's Tau score of the model is 5.9% higher.
This indicates that BERT is more accurate than humans in predicting the details of the orders. Although our annotators are bilingual, they are not experts in Classical Chinese. We hope to collaborate with real experts in the future to conduct experiments and see if BERT can still retain an advantage.
Model Setup τ PMR
UD-Kundoku 0.770 0.402
Human 0.844 0.606
BERT-japanese-char 0.898 0.637
RoBERTa-japanese-char-wwm 0.894 0.600 BERT-chinese 0.917 0.689 RoBERTa-chinese-wwm-ext 0.920 0.718
RoBERTa-classical-chinese-char **0.944 0.783**
Error analysis. Since the PMR score of our best model is 0.783, most predicted orders are exactly correct. However, we still found some error patterns that the model encountered. It is not easy to distinguish whether a pair of two characters is a noun or a combination of a verb and a noun. Moreover, determining the order becomes challenging when two verbs appear in a sentence.
## 5.2 Machine Translation 5.2.1 Model Performance
Table 4 lists the results of machine translation, which contains the automatic and manual evaluation metrics. UD-Kundoku is the baseline, and the reference is the Kanbun target.
For the automatic evaluation, all our models exceeded the baseline in all evaluation metrics. The performance of mT5 increased as the model size increases, with mT5-large performing best. The performance of mGPT and mT5-small are close to each other.
For the human evaluation, we asked annotators to evaluate only the translations of mT5-small, mT5-large, and mGPT. This is because mT5-base performs close to mT5-large, and the baseline's results are too poor to be evaluated. As with the automatic evaluation, mT5-large performed the best.
On the other hand, mT5-small significantly outperformed mGPT in this evaluation. The reference sentences obtained very high scores, proving that our dataset's Kanbun data is of high quality. We also calculated Fleiss' Kappa to measure Inter-Annotator Agreement (IAA). The results of Fleiss' Kappa for relevance, accuracy, and fluency are 0.360, 0.371, and 0.341, which show fair agreements (?).
Generation examples. We show three generation examples in Table 5. In all three examples, mT5-large performed flawlessly, giving the same translations as the reference. mT5-base and mT5small generated translations similar to mT5-large, but with some minor errors. mGPT sometimes repeated the characters in the original sentences ("事" in (a), "出" in (b), and "鳳" in (c)), which lowers the scores of human evaluation. "未" in (c) is an example of special characters that need to be read twice, which should be read as "未だ...ず" (en:yet).
In this case, mT5-base and mT5-large generated the correct translation. However, mT5-small and mGPT could not recognize it as a special character.
Why is mGPT so weak? Although mGPT has almost 1.5 times the number of parameters of mT5large (detailed model sizes can be found in Appendix A), its translations are not even as good as mT5-small. Since mT5 and mGPT are both mainly trained on mC4 (Raffel et al., 2020), the effect of the pre-training corpus can be largely excluded.
One reason is the repetition of words that we have explained before. For other reasons, we speculate that the encoder modules in mT5 have a significant role in comprehending Classical Chinese. However, this is only a hypothesis and needs to be tested with more future experiments.
## 5.2.2 **Correlation Between Evaluation Metrics**
We show Pearson and Spearman correlation coefficients between the automatic evaluation metrics and human evaluation metrics in Table 6.
BERTScore has the greatest correlation with all three human evaluation metrics. BLEU and ROUGE-L also performed well. The rank-based metric, RIBES, performed the worst. We notice that, compared to BLEU and ROUGE-L,
BERTScore only has a slight lead in the correlation with relevance. However, the advantage has increased in correlation with accuracy and fluency.
We speculate that this is because BERTScore can potentially capture sequence information (Zhang et al., 2020), which makes it more possible to judge whether a sentence is accurate and fluent. We also speculate that BERTScore better suits ClassicalChinese-to-Kanbun because Kanbun is generally very short, which can cause BLEU and ROUGE to be influenced by small changes.
We also show the correlation between the human evaluation metrics in Table 6. Accuracy and fluency have the greatest correlation, which indicates that grammatically and lexically correct sentences are also fluent. In general, the correlation between Table 4: Results of machine translation, containing the automatic and manual evaluation metrics. UD-Kundoku is the baseline, and reference is the Kanbun target of translation.
| Model Setup | BLEU | RIBES | ROUGE-L | BERTScore | Relevance | Accuracy | Fluency |
|---------------|--------|---------|-----------|-------------|-------------|------------|-----------|
| UD-Kundoku | 0.097 | 0.309 | 0.546 | 0.884 | - | - | - |
| reference | - | - | - | - | 4.958 | 4.951 | 4.949 |
| mT5-small | 0.317 | 0.428 | 0.659 | 0.914 | 3.219 | 3.002 | 3.153 |
| mT5-base | 0.462 | 0.520 | 0.735 | 0.930 | - | - | - |
| mT5-large | 0.514 | 0.583 | 0.747 | 0.934 | 3.948 | 3.884 | 3.904 |
| mGPT | 0.303 | 0.476 | 0.606 | 0.898 | 2.548 | 2.270 | 2.236 |
Model Setup (a) (b) (c)
input 投筆事戎軒 駆馬出関門 鳳林戈未息 reference 筆を投じて戎軒を事とす 馬を駆って関門を出づ 鳳林戈未だ息まず mT5-small 筆を投じて戎軒を事す 馬を駆って関門に出づ 鳳林戈未だ息し
mT5-base 筆を投じて戎軒に事す 馬を駆って関門に出で 鳳林戈未だ息まず
mT5-large 筆を投じて戎軒を事とす 馬を駆って関門を出づ 鳳林戈未だ息まず
mGPT 筆を投じて戎軒に事とすを事 馬を駆って関門を出でんとすも出で 鳳林戈未だ息まずかとすかとす鳳
(English tr.) Laid down my pen and turned to the war Mounted my horse and left through the gates The forest's battle drums remain unabated
Table 5: Generation examples of machine translation. Input is the original Classical Chinese sentence, and reference is the Kanbun target of translation.
the metrics is relatively high. To consider more different perspectives, we hope to reduce the correlation by discussing with Classical Chinese experts and reformulating the manual evaluation metrics in future work.
## 5.3 Pipeline
We show the pipeline results in Table 7. The first row of each model is the direct machine translation results, which are also shown in Table 4. The second row ("+ reorder") shows the results using RoBERTa-classical-chinese-char to reorder characters before passing the sentences to machine translation. The third row ("+ reorder (gold)") uses the gold labels of the reading orders instead of the predictions by RoBERTa to reorder characters.
By pre-reordering using RoBERTa, most of the evaluation metrics of mT5-small were improved.
Model Setup BLEU RIBES ROUGE-L BERTScore mT5-small 0.317 0.428 0.659 0.914 + reorder 0.328 0.420 0.701 0.916
+ reorder (gold) **0.359 0.451 0.727 0.919**
mT5-base **0.462** 0.520 0.735 0.930
+ reorder 0.413 0.486 0.735 0.926
+ reorder (gold) 0.461 **0.529 0.770 0.932**
mT5-large **0.514 0.583** 0.747 0.934
+ reorder 0.479 0.551 0.748 0.931
+ reorder (gold) 0.502 0.573 **0.774 0.935**
mGPT 0.303 0.476 0.606 0.898
+ reorder 0.303 0.467 0.612 0.894
+ reorder (gold) **0.340 0.508 0.642 0.900**
mGPT basically remained at the original level.
While mT5-base and mT5-large showed a decreasing trend in most of the metrics. We speculate that as the model's performance increases, the model will gradually be able to do character reordering and machine translation at the same time. Since the predictions of RoBERTa are not 100% accurate, wrong predictions may confuse models and lead to their inability to determine correct orders.
In contrast, by pre-reordering using the gold labels, all models received some degree of improvement in almost all evaluation metrics. This indicates that correct pre-reordering does help machine translation, and it is necessary to do more work on improving the character reordering module.
| Metric | Relevance | Accuracy | Fluency | | | |
|-----------|-------------|------------|-----------|-------|-------|-------|
| r | ρ | r | ρ | r | ρ | |
| BLEU | 0.667 | 0.650 | 0.637 | 0.605 | 0.594 | 0.576 |
| RIBES | 0.480 | 0.497 | 0.453 | 0.449 | 0.389 | 0.417 |
| ROUGE-L | 0.688 | 0.677 | 0.631 | 0.610 | 0.599 | 0.584 |
| BERTScore | 0.707 | 0.691 | 0.671 | 0.642 | 0.644 | 0.625 |
| Relevance | - | - | 0.862 | 0.849 | 0.835 | 0.829 |
| Accuracy | 0.862 | 0.849 | - | - | 0.946 | 0.947 |
| Fluency | 0.835 | 0.829 | 0.946 | 0.947 | - | - |
## 6 Conclusion And Future Work
In this paper, to address the lack of Kanbun language resources, we used language models to read Classical Chinese in Japanese reading orders and translate Classical Chinese into Kanbun. We constructed the first Classical-Chinese-to-Kanbun dataset in the world, which includes original ancient Chinese texts, translated Kanbun texts, and the Japanese reading orders.
Furthermore, we introduced two tasks for the dataset: character reordering and machine translation. We achieved state-of-the-art results in both tasks, which have a great lead over the baseline.
We also constructed a pipeline for the two tasks and verified that accurate pre-reordering is helpful for machine translation. However, the accuracy of current reordering models is not enough, and future efforts are needed to improve the accuracy.
Moreover, we discussed which automatic evaluation metric is the most suitable for ClassicalChinese-to-Kanbun translation by computing the correlation between the automatic and human evaluation metrics. In our experiments, BERTScore is the best. However, we only tested with characterbased metrics. More experiments are still needed to test subword-based and sentence-based metrics.
In the future, we hope to continuously update the dataset to include an increasingly comprehensive range of ancient texts. We also hope to collaborate with experts in Classical Chinese to find the upper bound of human character reordering accuracy, refine the manual evaluation metrics to a more streamlined one, and make a deeper exploration on the best automatic evaluation metric.
## Limitations
Due to the lack of data, our dataset is not comprehensive since it only consists of Tang poems.
Our model may not perform well on unseen data in other forms. We plan to update the dataset in the future continuously.
Our evaluation metrics and generation results for the machine translation tasks are not certified by experts in Classical Chinese, so the results and discussions in this paper are not entirely reliable.
We welcome more experts and researchers to join our work in the future.
Due to the limitation of GPU resources, we do not experiment on larger models. We welcome researchers to test our method on large models and make some deeper discussions.
## Acknowledgements
This work was supported by JSPS KAKENHI
Grant Number JP21H04901. We are grateful to the annotators who have spent much of their time helping with the experiments. We would also like to thank the reviewers for their insightful comments for improving the paper.
## References
Anonymous. 2022. Wyweb: A classical chinese nlp evaluation benchmark.
Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer, and Veselin Stoyanov. 2020. Unsupervised cross-lingual representation learning at scale. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*, pages 8440–
8451, Online. Association for Computational Linguistics.
Sydney Crawcour. 1965. *An introduction to Kambun*.
University of Michigan.
Baiyun Cui, Yingming Li, and Zhongfei Zhang. 2020.
BERT-enhanced relational sentence ordering network. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing
(EMNLP), pages 6310–6320, Online. Association for Computational Linguistics.
Yiming Cui, Wanxiang Che, Ting Liu, Bing Qin, and Ziqing Yang. 2021. Pre-training with whole word masking for chinese BERT. *IEEE/ACM Transactions on Audio, Speech, and Language Processing*,
29:3504–3514.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics.
Xiaoye Duan. 2008. 『源氏物語』における『白氏 文集』引用の特色–登場人物の口ずさんだ詩句 をめぐって. 北陸大学紀要 = Bulletin of Hokuriku University, (32):181–192.
Yukio Fujimoto. 2014. 日韓漢文訓読研究. 勉強出 版.
Carlos Gómez-Rodríguez and Joakim Nivre. 2010. A
transition-based parser for 2-planar dependency structures. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 1492–1501, Uppsala, Sweden. Association for Computational Linguistics.
Pengcheng He, Xiaodong Liu, Jianfeng Gao, and Weizhu Chen. 2020. Deberta: Decodingenhanced bert with disentangled attention. arXiv.
Abs/2006.03654.
Tsutomu Hirao, Hideki Isozaki, Kevin Duh, Katsuhito Sudoh, Hajime Tsukada, and Masaaki Nagata. 2011.
Ribes:順位相関に基づく翻訳の自動評価法て.
言語処理学会年次大会発表論文集, 17:D5–2.
Bunkyo Kin. 2010. 漢文と東アジア—訓読の文化圏.
岩波書店.
Yoshinori Kobayashi. 1964. 万葉集における漢文訓 読語の影響. 国語学, (58):23–47.
Pawan Kumar, Dhanajit Brahma, Harish Karnick, and Piyush Rai. 2020. Deep attentive ranking networks for learning to order sentences. *Proceedings* of the AAAI Conference on Artificial Intelligence, 34(05):8115–8122.
Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut.
2019. Albert: A lite bert for self-supervised learning of language representations. arXiv. Abs/1909.11942.
Chin-Yew Lin. 2004. ROUGE: A package for automatic evaluation of summaries. In *Text Summarization Branches Out*, pages 74–81, Barcelona, Spain.
Association for Computational Linguistics.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019.
Roberta: A robustly optimized bert pretraining approach. arXiv. Abs/1907.11692.
Joakim Nivre, Marie-Catherine de Marneffe, Filip Ginter, Yoav Goldberg, Jan Hajic, Christopher D. Man- ˇ
ning, Ryan McDonald, Slav Petrov, Sampo Pyysalo, Natalia Silveira, Reut Tsarfaty, and Daniel Zeman. 2016. Universal Dependencies v1: A multilingual treebank collection. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16), pages 1659–1666, Portorož, Slovenia. European Language Resources Association
(ELRA).
Takuya Okimori. 2017. 日本語全史. 筑摩書房.
Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 311–318, Philadelphia, Pennsylvania, USA. Association for Computational Linguistics.
Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners.
Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the
limits of transfer learning with a unified text-to-text transformer. *Journal of Machine Learning Research*, 21(140):1–67.
Oleh Shliazhko, Alena Fenogenova, Maria Tikhonova, Vladislav Mikhailov, Anastasia Kozlova, and Tatiana Shavrina. 2022. mgpt: Few-shot learners go multilingual. arXiv. Abs/2204.07580.
Dongbo Wang, Chang Liu, Zihe Zhu, Jangfeng Liu, Haotian Hu, Si Shen, and Bin Li. 2021. Sikubert与sikuroberta:面向数字人文的《四全》模 型建及用研究. .
Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, and Colin Raffel. 2021. mT5: A massively multilingual pre-trained text-to-text transformer. In *Proceedings* of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics:
Human Language Technologies, pages 483–498, Online. Association for Computational Linguistics.
Koichi Yasuoka. 2018. 漢文の依存文法解析と返り 点の関係について. 日本漢字学会第1回研究大 会予稿集, pages 33–48.
Koichi Yasuoka. 2019. Universal dependencies treebank of the four books in classical chinese.
DADH2019: 10th International Conference of Digital Archives and Digital Humanities, pages 20–28.
Koichi Yasuoka. 2020a. 漢文の依存文法解析にもと づく自動訓読システム. 日本漢字学会第3回研 究大会予稿集, pages 60–73.
Koichi Yasuoka. 2020b. 漢文自動訓読ツールudkundokuの開発. 東洋学へのコンピュータ利用 第32回研究セミナー, pages 3–25.
Koichi Yasuoka, Christian Wittern, Tomohiko Morioka, Takumi Ikeda, Naoki Yamazaki, Yoshihiro Nikaido, Shingo Suzuki, Shigeki Moro, and Kazunori Fujita.
2022. 古典中国語(漢文)universal dependenciesとその応用. 情報処理学会論文誌, 63(2):355–
363.
Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q.
Weinberger, and Yoav Artzi. 2020. Bertscore: Evaluating text generation with bert. In International Conference on Learning Representations.
Zhuosheng Zhang, Hanqing Zhang, Keming Chen, Yuhang Guo, Jingyun Hua, Yulong Wang, and Ming Zhou. 2021. Mengzi: Towards lightweight yet ingenious pre-trained models for chinese. arXiv.
Abs/2110.06696.
Yutao Zhu, Jian-Yun Nie, Kun Zhou, Shengchao Liu, Yabo Ling, and Pan Du. 2021. Bert4so: Neural sentence ordering by fine-tuning bert. arXiv.
Abs/2103.13584.
## A Details Of Pre-Trained Models
We show the details of the pre-trained models used in our experiments below. Table 8 lists the details of the BERT-like models for character reordering, and Table 9 lists those of the pre-trained models for machine translation.
| Table 8: Details of pre-trained models (character reordering). | | | | | |
|------------------------------------------------------------------|---------------------------------|------------|---------|--------|-----------------|
| model | corpus | #dimension | #layers | #heads | vocabulary size |
| BERT-japanese-char | Wikipedia (ja) | 768 | 12 | 12 | 6,144 |
| (cl-tohoku/bert-base-japanese-char-v2) RoBERTa-japanese-char-wwm | Wikipedia (ja) + CC-100 (ja) | 768 | 12 | 12 | 18,377 |
| (ku-nlp/roberta-base-japanese-char-wwm) BERT-chinese | Wikipedia (zh) | 768 | 12 | 12 | 21,128 |
| (bert-base-chinese) RoBERTa-chinese-wwm-ext | Wikipedia (zh) + ext | 768 | 12 | 12 | 21,128 |
| (hfl/chinese-roberta-wwm-ext) RoBERTa-classical-chinese-char | Wikipedia (zh) + Daizhige + ext | 768 | 12 | 12 | 26,318 |
| (KoichiYasuoka/roberta-classical-chinese-base-char) | | | | | |
Table 8: Details of pre-trained models (character reordering).
Table 9: Details of pre-trained models (machine translation).
| model | corpus | #params | #dimension | #layers | #heads | vocabulary size |
|-----------------------------|---------------------|-----------|--------------|-----------|----------|-------------------|
| mT5-small | mC4 (101 languages) | 172M | 512 | 8 | 6 | 250,112 |
| (google/mt5-small) mT5-base | mC4 (101 languages) | 390M | 768 | 12 | 12 | 250,112 |
| (google/mt5-base) mT5-large | mC4 (101 languages) | 973M | 1024 | 24 | 16 | 250,112 |
| (google/mt5-large) mGPT | Wikipedia + mC4 | 1,417M | 2048 | 24 | 16 | 100,000 |
| (sberbank-ai/mGPT) | (both 60 languages) | | | | | |
## B Hyper-Parameters
We show the hyper-parameters used in our experiments in Table 10. The numbers in the curly brackets indicate that grid searches were performed to select the best fit.
Table 10: Hyper-parameters used in the experiments.
| hyper-parameter | value |
|-------------------|---------------------------------------------------|
| learning rate | {1e-5, 2e-5, 5e-5} |
| batch size | {8, 16, 32} |
| epoch | {1-20} (BERT), {10, 20, 30} (T5), {1, 2, 3} (GPT) |
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
✓ A1. Did you describe the limitations of your work?
In the Limitations section.
✗ A2. Did you discuss any potential risks of your work?
Our work focused on Classical-Chinese-to-Japanese machine translation. We believe there is no potential risk.
✓ A3. Do the abstract and introduction summarize the paper's main claims?
In the abstract and "Section 1. Introduction".
✗ A4. Have you used AI writing assistants when working on this paper?
No AI writing assistant.
## B ✓ **Did You Use Or Create Scientific Artifacts?**
In the "Section 3. Our Dataset and Tasks" and "Section 4. Experimental Setup".
✓ B1. Did you cite the creators of artifacts you used?
In the "Section 3. Our Dataset and Tasks" and "Appendix A. Details of pre-trained models".
✗ B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
We collected our dataset from a public website (we tried to contact the author but have yet to receive a reply) and only used models available on HuggingFace.
✗ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
We only used the data and models for research purposes (which are their intended uses). And we plan to release the dataset only for research purposes, too.
✗ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
We only use Classical Chinese data made more than 1000 years ago.
✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
In the "Section 3. Our Dataset and Tasks".
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
In the "Section 3. Our Dataset and Tasks".
The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance.
## C ✓ **Did You Run Computational Experiments?** In The "Section 4. Experimental Setup".
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
In the "Section 4.2. Pre-trained Models" and "Appendix A. Details of pre-trained models".
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
In the "Appendix B. Hyper-parameter".
✗ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
Our results are all single runs.
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
In the "Section 4.3. Automatic Evaluation Metrics".
## D ✓ **Did You Use Human Annotators (E.G., Crowdworkers) Or Research With Human Participants?** In The "Section 4.4. Manual Annotations".
✗ D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
Our manual annotations are all about Classical Chinese. There is no offensive content or personal identifying information collection.
✗ D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)?
Our annotators are all university students. We paid 200 USD to each person.
✗ D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
Our data would only be used for research purposes.
✗ D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
No ethics review board was involved.
✓ D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
In the "Section 4.4. Manual Annotations". |
zhang-etal-2023-adaptive | Adaptive Attention for Sparse-based Long-sequence Transformer | https://aclanthology.org/2023.findings-acl.546 | Recently, Transformers have been widely used in various fields and have achieved remarkable results. But it is still difficult for Transformer-based models to process longer sequences because self-attention in them scales quadratically with the sequence length. Although some models attempt to use sparse attention to reduce computational complexity, hand-crafted attention patterns are unable to select useful tokens adaptively according to the context. Thus, in this paper, we propose a novel efficient Transformer model with adaptive attention, A2-Former, for long sequence modeling. It can select useful tokens automatically in sparse attention by learnable position vectors, which consist of meta position and offset position vectors. Because the learnable offset position is not an integer vector, we utilize the interpolation technique to gather corresponding vectors from the input embedding matrix by discrete indexes. Experiments on Long Range Arena (LRA), a systematic and unified benchmark with different tasks, show that our model has achieved further improvement in performance compared with other sparse-based Transformers. | # Adaptive Attention For Sparse-Based Long-Sequence Transformer
Xuanyu Zhang, Zhepeng Lv and **Qing Yang**
Du Xiaoman Financial
{zhangxuanyu,lvzhepeng,yangqing}@duxiaoman.com
## Abstract
Recently, Transformers have been widely used in various fields and have achieved remarkable results. But it is still difficult for Transformerbased models to process longer sequences because self-attention in them scales quadratically with the sequence length. Although some models attempt to use sparse attention to reduce computational complexity, hand-crafted attention patterns are unable to select useful tokens adaptively according to the context. Thus, in this paper, we propose a novel efficient Transformer model with adaptive attention, A2-
Former, for long sequence modeling. It can select useful tokens automatically in sparse attention by learnable position vectors, which consist of meta position and offset position vectors. Because the learnable offset position is not an integer vector, we utilize the interpolation technique to gather corresponding vectors from the input embedding matrix by discrete indexes.
Experiments on Long Range Arena (LRA), a systematic and unified benchmark with different tasks, show that our model has achieved further improvement in performance compared with other sparse-based Transformers.
## 1 Introduction
Transformer-based models (Vaswani et al., 2017)
have achieved state-of-the-art performance on a wide variety of natural language processing tasks (Devlin et al., 2019; Liu et al., 2019; Yang et al.,
2019). It is also gradually applied to other research fields such as speech and computer vision (Dong et al., 2018; Li et al., 2019; Zhang et al., 2020; Dosovitskiy et al., 2021; Zhu et al., 2021; Touvron et al., 2021). Although self-attention module, the core component in Transformer, can capture global contexts from the whole sequence, the time and memory complexity are both quadratic to the sequence length. Especially when facing longer sequences, Transformer becomes more difficult to process them efficiently and effectively.
Recently, a wide spectrum of efficient Transformers (Child et al., 2019; Ho et al., 2019; Rae et al., 2020; Zhao et al., 2019; Kitaev et al., 2020; Tay et al., 2020; Beltagy et al., 2020; Choromanski et al., 2020; Wang et al., 2020; Zaheer et al.,
2020; Roy et al., 2021; Xiong et al., 2021; Tay et al., 2021a; Ma et al., 2021; Chen, 2021; Zhu and Soricut, 2021; Liu et al., 2022) have been proposed to tackle the problem of efficiency, which can be roughly divided into the following directions: sparse attention, low-rank and kernel methods. Because sparse-based attention is intuitive and interpretable in addition to efficiency, we focus on this method in this paper. It usually utilizes some strategies or patterns to limit the number of tokens involved in the attention calculation. Different from traditional sparse Transformer (Martins and Astudillo, 2016; Correia et al., 2019; Peters et al.,
2019) with different softmax and pattern-related quadratic computation, recent works mainly adopt sliding windows to achieve linear complexity. For example, Longformer (Beltagy et al., 2020) employs an attention pattern that combines local windowed attention with task-motivated global attention while also scaling linearly with the sequence length. BigBird (Zaheer et al., 2020) incorporates random attention (queries attend to random keys) besides global tokens and local sliding windows. However, these hand-crafted attention patterns mentioned above are usually selected empirically or randomly. It is not an ideal solution for modeling long sequences. How to adaptively select useful tokens for sparse attention according to the context is still an important problem to be considered.
To address these issues, we propose A2-Former with adaptive attention to model longer sequences in this paper. It can select useful tokens automatically in sparse attention by learnable position vectors, which consist of meta position and offset position vectors. Because each element in the learnable 8602 offset position vector is not an integer, we utilize linear interpolation to gather discrete vectors from original the input embedding matrix. Position visualization further shows that traditional attention patterns are not enough to cover the valuable positions automatically selected by models. Experiments on Long Range Arena, a systematic and unified benchmark with different tasks, show that our model has achieved further improvement in performance compared with other sparse-based Transformers.
Overall, the main contributions are as follows:
- We propose a novel efficient Transformer, A2-
Former, which replaces hand-crafted attention patterns with learnable adaptive attention in sparse attention. Besides, position visualization (Figure 3) further shows that traditional attention patterns are not enough to cover the useful positions automatically selected by models.
- We adopt an interpolation technique to help the model gather discrete positions with a continuous weight matrix. By combining the meta position and generated offset position, the position of tokens can be selected dynamically according to the context.
- Experiments on different long sequence tasks validate the effectiveness of our model. Especially, compared with the previous best sparse attention model, BigBird (Zaheer et al., 2020),
our model achieves better results.
## 2 Related Work
Recently, Transformer (Vaswani et al., 2017) and its variants (Devlin et al., 2019; Radford et al., 2018; Liu et al., 2019; Yang et al., 2019) have been widely used in natural language processing
(OpenAI, 2023; Zhang et al., 2023; OpenAI, 2022; Zhang and Yang, 2021a; Zhang, 2020; Zhang and Wang, 2020; Zhang, 2019), computer vision (Dosovitskiy et al., 2021; Zhu et al., 2021; Touvron et al.,
2021; Zhang and Yang, 2021b), speech (Dong et al., 2018; Li et al., 2019; Zhang et al., 2020) and other domains. To improve computational and memory efficiency, a dizzying number of efficient Transformers (Child et al., 2019; Ho et al., 2019; Rae et al., 2020; Zhao et al., 2019; Kitaev et al., 2020; Tay et al., 2020; Beltagy et al., 2020; Choromanski et al., 2020; Wang et al., 2020; Zaheer et al.,
2020; Roy et al., 2021; Xiong et al., 2021; Tay et al., 2021a; Ma et al., 2021; Chen, 2021; Zhu and Soricut, 2021; Liu et al., 2022) have been proposed recently, which can be roughly divided into two directions: sparse attention, low-rank and kernel methods.
Sparse attention methods usually limit the field of view to fixed or random patterns. These patterns can also be used in combination. For example, Sparse Transformer (Child et al., 2019) combines stride and fixed factorized attention by assigning half of its heads to the pattern for reducing the complexity of a traditional Transformer. Longformer
(Beltagy et al., 2020) integrates a windowed localcontext self-attention and task-oriented global attention that encodes inductive bias about the corresponding task. BigBird (Zaheer et al., 2020) incorporates random attention besides global attention and local window attention. Random attention means that each query attends to a small number of random keys. However, it is still difficult for these hand-crafted, random or combined attention patterns to select valuable pairs in the sparse attention calculation. Different from them, our proposed sparse attention mechanism can automatically and efficiently learn the position that should be selected and calculated. Especially, our model is also different from traditional sparse Transformer (Martins and Astudillo, 2016; Correia et al., 2019; Peters et al., 2019). They only focus on sparse softmax and its threshold and still require quadratic computation to determine the sparsity pattern.
Low-rank and kernel methods are the other solutions to improve the efficiency of Transformer.
Low-rank methods usually assume a low-rank structure in the self-attention matrix. For example, Linformer (Wang et al., 2020) decomposes the original scaled dot-product attention into multiple smaller attentions through linear projections, such that the combination of these operations forms a low-rank factorization of the original attention.
And kernel methods rewrite the self-attention mechanism through kernelization. For example, Performer (Choromanski et al., 2020) scales linearly rather than quadratically in the number of tokens in the sequence, which is characterized by subquadratic space complexity and does not incorporate any sparsity pattern priors. Different from these mathematical and theoretical methods, our proposed method is still based on sparse attention but focuses more on how to find and learn attention patterns effectively and efficiently.
![2_image_0.png](2_image_0.png)
## 3 Methodology 3.1 Preliminary
Traditional Attention Adaptive Attention Suppose the input is x ∈ R
L×H. q ∈ Ψq, k ∈
Ψk, v ∈ Ψv index query, key and value element in Transformer, respectively. L is the sequence length and H is the dimension of hidden states. Thus selfattention in vanilla Transformer can be calculated by
$$\text{Attn}(\mathbf{x}_{q},\mathbf{x})=\sum_{\begin{subarray}{c}k\in\Psi_{k}\\ v\in\Psi_{v}\end{subarray}}\alpha_{qk}\cdot W\mathbf{x}_{v},\tag{1}$$ where $W$ is the learnable weight for $\mathbf{x}_{v}$. The attention weights $\alpha_{qk}\propto\exp\{\frac{x_{q}^{T}W^{\prime T}\ W^{\prime\prime}x_{k}}{\sqrt{H}}\}$, where
W′and W′′ are learnable weight matrices for xq and P
xk. The attention weights are normalized as k∈Ψk αqk = 1, ensuring that they represent the relative importance of each key vector in the set Ψk for the query vector xq.
For sparse attention, we can also express previous models in a unified form. We will only consider the query and key in Transformer in the following discussion. Thus sparse attention can be represented as
$${\mathrm{SparseAttn}}(\mathbf{x}_{q},\mathbf{x},\mathbf{p}_{q})=\sum_{k=1}^{K}\alpha_{q k}\cdot W\mathbf{x}_{p_{q k}}\,,\quad(2)$$
where k indexes the sampled keys, and K is the total sampled key number. Because only a small set of keys are utilized in sparse attention, K ≪ L.
pq represents the position of K sampled keys for the query xq. Different models utilize different patterns to select each sampling position pqk ∈ pq, such as sliding window or random generation.
Algorithm 1: Adaptive Attention
input :an input matrix x ∈ R
L×H;
output :AAttn(xq, x) after adaptive attention; 1 **begin**
![2_image_1.png](2_image_1.png)
Because our proposed adaptive attention is also based on sparse attention, which can be further refined into the following forms:
$$\mathbf{AAttn}(\mathbf{x}_{q},\mathbf{x})=\sum_{k=1}^{K}\alpha_{q k}\cdot W\mathbf{x}_{{\hat{p}}_{q}+{\beta}_{q k}},\quad\mathbf{(3)}$$
where βqk represents the offset position of the selected key k for the query xq, pˆq represents the meta position predefined for each query xq according to their absolute index. That is to say, the final position of keys pqk is obtained from the meta position pˆq and the offset position βqk. Because pˆq + βqk is a float , we adopt linear interpolation to compute xpˆq+βqk . The detailed calculation process will be described in the next subsection.
## 3.2 Adaptive Attention
As shown in Figure 1, we propose the adaptive attention to learn sampling position dynamically in sparse attention. The pipeline of our proposed adaptive attention is illustrated in Algorithm 1. For convenience, we describe them in the form of iteration rather than batch. We take L = 6, H = 3, K = 3 as an example to illustrate the whole process from input to output.
First, we will assign the meta position ˆpq =
{pˆq}K in Eq. 3 according to the absolute index of the query token. As shown in Figure 1, the meta position is from 0 to 5 for the sentence with 6 tokens.
The position of sampling keys will be generated according to the meta position of the query. We will take the orange token (in Figure 1) as a query to obtain the corresponding representation after adaptive attention.
Then, we use a learnable weights Wˆ ∈ R
K×H to obtain the offset position βq ∈ R
K for K sampling keys in Eq. 4. As shown in Eq. 5, we can obtain the final position pq ∈ R
K from original position ˆpq ∈ R
K by combining the meta position and the offset position.
$$\begin{array}{c}{{\beta_{q}=\hat{W}{\bf x}_{q},}}\\ {{{}}}\\ {{{\bf p}_{q}=\hat{\bf p}_{q}+\beta_{q},}}\end{array}$$
Because the final position pq is not an integer vector, it can not be used directly to select sampling keys. Inspired by previous works (Dai et al., 2017; Zhu et al., 2021) in computer vision, we transform bilinear interpolation of two-dimensional images into linear interpolation of one-dimensional text.
That is to say, we utilize linear interpolation to gather vectors of corresponding positions. After we rescale each element pqk ∈ pq in pqk
= ˆpq +
βqk to [0, L], we round it down and up to i =
⌊pqk ⌋, j = ⌈pqk ⌉ respectively (j − i = 1). Then we can gather xi, xj according to the integer *i, j* from the input x. According to the variation of linear interpolation formula, the final position of sampling keys xpˆq+βqk can be calculated by
$$\begin{array}{c}{{{\bf{x}}_{\hat{p}_{q}+\beta_{q k}}=\frac{p_{q k}-j}{i-j}{\bf{x}}_{i}+\frac{p_{q k}-i}{j-i}{\bf{x}}_{j}}}\\ {{=(j-p_{q k}){\bf{x}}_{i}+(p_{q k}-i){\bf{x}}_{j}}}\end{array}\qquad\mathrm{(6)}$$
Next, we use the learnable matrix αqk to obtain the weights of different sampling keys for the query xq. Then we can obtain the final weighted representation AAttn(xq, x) in Eq. 3.
Model ListOps Text Retrieval Image Pathfinder Avg
Chance 10.00 50.00 50.00 10.00 50.00 44.00 Transformer 36.37 64.27 57.46 42.44 71.40 54.39
Local Attn. 15.82 52.98 53.39 41.46 66.63 46.06
Linformer 35.70 53.94 52.27 38.56 76.34 51.36 Reformer 37.27 56.10 53.40 38.07 68.50 50.67
Sinkhorn 33.67 61.20 53.83 41.23 67.45 51.39
Synthesizer 36.99 61.68 54.67 41.61 69.45 52.88 Linear Tran. 16.13 65.90 53.09 42.34 75.30 50.55
Performer 18.01 65.40 53.82 42.77 **77.05** 51.41
H-Tran. **49.53** 78.69 63.99 46.05 68.78 61.41
Sparse Tran. 17.07 63.58 59.59 44.24 71.71 51.24 Longformer 35.63 62.85 56.89 42.22 69.71 53.46
BigBird 36.05 64.02 59.29 40.83 74.87 55.01
Our Model 39.70 **86.14 65.94 47.57** 71.71 **62.21**
We can further optimize the complexity for some classification tasks based on sequence level without pre-training. Since sequence level representation is more useful than token level in these tasks, we can convert x ∈ R
L×H to x′ ∈ R
L′×H by linear projection, where L′can be set to half of L or even smaller. The detailed performance will be further analyzed in the next section.
$\left(4\right)$ (5) (6)
## 4 Experiments 4.1 Datasets
Long-Range Arena (LRA) (Tay et al., 2021b) is a systematic and unified benchmark for the purpose of evaluating sequence models under the longcontext scenario, which includes six tasks to assess different capabilities of efficient Transformers like their ability to model relations and hierarchical/spatial structures, generalization capability, etc. These tasks include different domains, such as math, language, image, spatial and so on. Following the original datasets, we use accuracy as the metric for these tasks.
## 4.2 Implementation Details
Because different tasks have different lengths and characteristics, we use the same hyper-parameters as those described in (Tay et al., 2021b) for a fair comparison. Specifically, the max length is set to 2,000, 4,000, 4,000 for ListOps, Text and Matching task, respectively. The hidden states in attention is set to 512, 256, 128 for ListOps, Text and Matching task, respectively. In our experiments, Adamax
(Kingma and Ba, 2014) is used as our optimizer with 0.05 learning rate. The sampling size K for
![4_image_0.png](4_image_0.png)
![4_image_1.png](4_image_1.png)
![4_image_2.png](4_image_2.png)
Figure 2: Performance analysis.
each token is ten in all the tasks. To prevent overfitting, we use dropout and set it to 0.1. We integrate our attention into the igloo framework (Sourkov, 2018) and run them in Keras with Tensorflow backend on NVIDIA V100 GPU.
## 4.3 Results
We compare our model with the following state-ofthe-art methods as baselines, including sparse attention methods and low-rank and kernel methods.
Sparse attention methods include Sparse Transformer (Child et al., 2019), Longformer (Beltagy et al., 2020), Big Bird (Zaheer et al., 2020) and so on. The results on five tasks are summarized in Table 1. It shows that our proposed A2-Former achieves 62.21 average accuracy, which outperforms the best sparse model based on sliding window, Big Bird (Zaheer et al., 2020), by 7.2%. Thus, the adaptive attention approach proposed in this paper is shown to be superior to traditional handcrafted, random, or combined patterns in sparsebased Transformer.
## 4.4 Analysis
As shown in Figure 2, we further analyze the impact of different configurations and parameters on five different tasks. As mentioned above, our proposed A2-Former has achieved a huge improvement compared to the previous best sparse attention model, BigBird (Zaheer et al., 2020), which proves that even models that combine multiple manual attention patterns is still inferior to the models that learn attention patterns automatically.
We attempt to adjust the maximum input length L to half, change hidden states H to small, and reduce the sampling number K. We can observe that the performance of A2-Former decreased compared with the original model. specifically, shorter length means less time and content. It is important to find a balance between efficiency and effectiveness according to different tasks. Although the impact of length on some classification tasks based on sequence level is not significant. For adaptive sparse attention, K limits the number of tokens involved in the calculation in each row of the attention matrix, which is also a factor that needs to be balanced.
## 4.5 Visualization
As shown in Figure 3, we randomly selected two examples for visualization. To study the distribution of positions, we only show the position of the selected tokens in sparse attention matrix. The max length of long sequences is 2000. It is obvious that previous hand-crafted attention patterns, such as sliding window attention, are not enough to cover the positions automatically selected by models. From a general trend, these selected positions are indeed distributed on the diagonal, but to cover these positions, a window size of about half the maximum length is required, which is unacceptable in terms of efficiency.
## 5 Conclusion
In this paper, we propose a novel sparse-based Transformer, A2-Former, which replaces handcrafted attention patterns with learnable adaptive attention in sparse attention. We creatively adopt an interpolation technique to help the model gather discrete positions with continuous position vectors.
By combining the meta position and generated offset position, the position of tokens can be selected dynamically according to the context. And position visualization further shows that traditional attention patterns are not enough to cover the useful positions automatically selected by models. Experiments on LRA show that our model has been significantly improved compared with the previous sparse Transformers based on sliding windows.
## References
Iz Beltagy, Matthew E Peters, and Arman Cohan. 2020.
Longformer: The long-document transformer. arXiv preprint arXiv:2004.05150.
Peng Chen. 2021. PermuteFormer: Efficient relative position encoding for long sequences. In *Proceedings of the 2021 Conference on Empirical Methods in* Natural Language Processing, pages 10606–10618, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Rewon Child, Scott Gray, Alec Radford, and Ilya Sutskever. 2019. Generating long sequences with sparse transformers. *arXiv preprint* arXiv:1904.10509.
Krzysztof Choromanski, Valerii Likhosherstov, David Dohan, Xingyou Song, Andreea Gane, Tamas Sarlos, Peter Hawkins, Jared Davis, David Belanger, Lucy Colwell, et al. 2020. Masked language modeling for proteins via linearly scalable long-context transformers. *arXiv preprint arXiv:2006.03555*.
Gonçalo M. Correia, Vlad Niculae, and André F. T.
Martins. 2019. Adaptively sparse transformers. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2174–
2184, Hong Kong, China. Association for Computational Linguistics.
Jifeng Dai, Haozhi Qi, Yuwen Xiong, Yi Li, Guodong Zhang, Han Hu, and Yichen Wei. 2017. Deformable convolutional networks. In Proceedings of the IEEE
international conference on computer vision, pages 764–773.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In *Proceedings of the 2019 Conference of* the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics.
Linhao Dong, Shuang Xu, and Bo Xu. 2018. Speechtransformer: a no-recurrence sequence-to-sequence model for speech recognition. In 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 5884–5888. IEEE.
Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, and Neil Houlsby. 2021. An image is worth 16x16 words: Transformers for image recognition at scale. In *International Conference on* Learning Representations.
Jonathan Ho, Nal Kalchbrenner, Dirk Weissenborn, and Tim Salimans. 2019. Axial attention in multidimensional transformers. *arXiv preprint* arXiv:1912.12180.
Diederik P Kingma and Jimmy Ba. 2014. Adam: A
method for stochastic optimization. arXiv preprint arXiv:1412.6980.
Nikita Kitaev, Łukasz Kaiser, and Anselm Levskaya.
2020. Reformer: The efficient transformer. *International Conference on Learning Representations*.
Naihan Li, Shujie Liu, Yanqing Liu, Sheng Zhao, and Ming Liu. 2019. Neural speech synthesis with transformer network. In *Proceedings of the AAAI Conference on Artificial Intelligence*, volume 33, pages 6706–6713.
Yang Liu, Jiaxiang Liu, Li Chen, Yuxiang Lu, Shikun Feng, Zhida Feng, Yu Sun, Hao Tian, Hua Wu, and Haifeng Wang. 2022. Ernie-sparse: Learning hierarchical efficient transformer through regularized self-attention. *arXiv preprint arXiv:2203.12276*.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019.
Roberta: A robustly optimized bert pretraining approach. *arXiv preprint arXiv:1907.11692*.
Xuezhe Ma, Xiang Kong, Sinong Wang, Chunting Zhou, Jonathan May, Hao Ma, and Luke Zettlemoyer. 2021.
Luna: Linear unified nested attention. *Advances* in Neural Information Processing Systems, 34:2441–
2453.
Andre Martins and Ramon Astudillo. 2016. From softmax to sparsemax: A sparse model of attention and multi-label classification. In *International conference on machine learning*, pages 1614–1623. PMLR.
OpenAI. 2022. Chatgpt.
OpenAI. 2023. Gpt-4 technical report.
Ben Peters, Vlad Niculae, and André F. T. Martins. 2019.
Sparse sequence-to-sequence models. In *Proceedings of the 57th Annual Meeting of the Association for* Computational Linguistics, pages 1504–1519, Florence, Italy. Association for Computational Linguistics.
Alec Radford, Karthik Narasimhan, Tim Salimans, Ilya Sutskever, et al. 2018. Improving language understanding by generative pre-training. OpenAI.
Jack W. Rae, Anna Potapenko, Siddhant M. Jayakumar, Chloe Hillier, and Timothy P. Lillicrap. 2020.
Compressive transformers for long-range sequence modelling. In International Conference on Learning Representations.
Aurko Roy, Mohammad Saffar, Ashish Vaswani, and David Grangier. 2021. Efficient content-based sparse attention with routing transformers. Transactions of the Association for Computational Linguistics, 9:53– 68.
Vsevolod Sourkov. 2018. Igloo: Slicing the features space to represent sequences. arXiv preprint arXiv:1807.03402.
Yi Tay, Dara Bahri, Donald Metzler, Da-Cheng Juan, Zhe Zhao, and Che Zheng. 2021a. Synthesizer: Rethinking self-attention for transformer models. In *International Conference on Machine Learning*, pages 10183–10192. PMLR.
Yi Tay, Dara Bahri, Liu Yang, Donald Metzler, and Da-Cheng Juan. 2020. Sparse sinkhorn attention.
In *International Conference on Machine Learning*,
pages 9438–9447. PMLR.
Yi Tay, Mostafa Dehghani, Samira Abnar, Yikang Shen, Dara Bahri, Philip Pham, Jinfeng Rao, Liu Yang, Sebastian Ruder, and Donald Metzler. 2021b. Long range arena : A benchmark for efficient transformers.
In *International Conference on Learning Representations*.
Hugo Touvron, Matthieu Cord, Matthijs Douze, Francisco Massa, Alexandre Sablayrolles, and Herve Jegou. 2021. Training data-efficient image transformers amp; distillation through attention. In *International Conference on Machine Learning*, volume 139, pages 10347–10357.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In *Advances in neural information processing systems*, pages 5998–6008.
Sinong Wang, Belinda Z Li, Madian Khabsa, Han Fang, and Hao Ma. 2020. Linformer: Self-attention with linear complexity. *arXiv preprint arXiv:2006.04768*.
Yunyang Xiong, Zhanpeng Zeng, Rudrasis Chakraborty, Mingxing Tan, Glenn Fung, Yin Li, and Vikas Singh. 2021. Nyströmformer: A nystöm-based algorithm for approximating self-attention. In *Proceedings of the... AAAI Conference on Artificial Intelligence. AAAI Conference on Artificial Intelligence*,
volume 35, page 14138. NIH Public Access.
Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Russ R Salakhutdinov, and Quoc V Le. 2019.
Xlnet: Generalized autoregressive pretraining for language understanding. *Advances in neural information processing systems*, 32.
Manzil Zaheer, Guru Guruganesh, Kumar Avinava Dubey, Joshua Ainslie, Chris Alberti, Santiago Ontanon, Philip Pham, Anirudh Ravula, Qifan Wang, Li Yang, et al. 2020. Big bird: Transformers for longer sequences. In *NeurIPS*.
Qian Zhang, Han Lu, Hasim Sak, Anshuman Tripathi, Erik McDermott, Stephen Koo, and Shankar Kumar.
2020. Transformer transducer: A streamable speech recognition model with transformer encoders and rnn-t loss. In *ICASSP 2020-2020 IEEE International* Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 7829–7833. IEEE.
Xuanyu Zhang. 2019. MCˆ2: Multi-perspective convolutional cube for conversational machine reading comprehension. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 6185–6190, Florence, Italy. Association for Computational Linguistics.
Xuanyu Zhang. 2020. Cfgnn: Cross flow graph neural networks for question answering on complex tables.
Proceedings of the AAAI Conference on Artificial Intelligence, 34(05):9596–9603.
Xuanyu Zhang and Zhichun Wang. 2020. Rception:
Wide and deep interaction networks for machine reading comprehension (student abstract). Proceedings of the AAAI Conference on Artificial Intelligence, 34(10):13987–13988.
Xuanyu Zhang and Qing Yang. 2021a. Dml: Dynamic multi-granularity learning for bert-based document reranking. In *Proceedings of the 30th ACM International Conference on Information amp; Knowledge Management*, CIKM '21, page 3642–3646, New York, NY, USA. Association for Computing Machinery.
Xuanyu Zhang and Qing Yang. 2021b. Positionaugmented transformers with entity-aligned mesh for textvqa. In *Proceedings of the 29th ACM International Conference on Multimedia*, MM '21, page 2519–2528, New York, NY, USA. Association for Computing Machinery.
Xuanyu Zhang, Qing Yang, and Dongliang Xu. 2023.
Xuanyuan 2.0: A large chinese financial chat model with hundreds of billions parameters. arXiv preprint arXiv:2305.12002.
Guangxiang Zhao, Junyang Lin, Zhiyuan Zhang, Xuancheng Ren, Qi Su, and Xu Sun. 2019. Explicit sparse transformer: Concentrated attention through explicit selection. *arXiv preprint arXiv:1912.11637*.
Xizhou Zhu, Weijie Su, Lewei Lu, Bin Li, Xiaogang Wang, and Jifeng Dai. 2021. Deformable {detr}:
Deformable transformers for end-to-end object detection. In *International Conference on Learning* Representations.
Zhenhai Zhu and Radu Soricut. 2021. H-transformer1D: Fast one-dimensional hierarchical attention for sequences. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers),
pages 3801–3815, Online. Association for Computational Linguistics.
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
✓ A1. Did you describe the limitations of your work?
Left blank.
✓ A2. Did you discuss any potential risks of your work?
Left blank.
✓ A3. Do the abstract and introduction summarize the paper's main claims?
Left blank.
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✗ **Did You Use Or Create Scientific Artifacts?**
Left blank.
B1. Did you cite the creators of artifacts you used?
No response.
B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
No response.
B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
No response.
B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
No response.
B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
No response.
B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
No response.
## C ✓ **Did You Run Computational Experiments?** Left Blank.
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
Left blank.
The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance.
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Left blank.
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
Left blank.
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
Left blank.
D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
No response.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? No response.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
No response.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
No response.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
No response. |
kertkeidkachorn-shirai-2023-sentiment | Sentiment Analysis using the Relationship between Users and Products | https://aclanthology.org/2023.findings-acl.547 | In product reviews, user and product aspects are useful in sentiment analysis. Nevertheless, previous studies mainly focus on modeling user and product aspects without considering the relationship between users and products. The relationship between users and products is typically helpful in estimating the bias of a user toward a product. In this paper, we, therefore, introduce the Graph Neural Network-based model with the pre-trained Language Model (GNNLM), where the relationship between users and products is incorporated. We conducted experiments on three well-known benchmarks for sentiment classification with the user and product information. The experimental results show that the relationship between users and products improves the performance of sentiment analysis. Furthermore, GNNLM achieves state-of-the-art results on yelp-2013 and yelp-2014 datasets. | # Sentiment Analysis Using The Relationship Between Users And Products
Natthawut Kertkeidkachorn Kiyoaki Shirai Japan Advanced Institute of Science and Technology 1-1 Asahidai, Nomi, Ishikawa 923-1292, Japan
{natt, kshirai}@jaist.ac.jp
## Abstract
In product reviews, user and product aspects are useful in sentiment analysis. Nevertheless, previous studies mainly focus on modeling user and product aspects without considering the relationship between users and products. The relationship between users and products is typically helpful in estimating the bias of a user toward a product. In this paper, we, therefore, introduce the Graph Neural Network-based model with the pre-trained Language Model (GNNLM), where the relationship between users and products is incorporated. We conducted experiments on three well-known benchmarks for sentiment classification with the user and product information.
The experimental results show that the relationship between users and products improves the performance of sentiment analysis. Furthermore, GNNLM achieves state-of-the-art results on yelp-2013 and yelp-2014 datasets.
## 1 Introduction
Sentiment analysis aims to understand a user's opinion toward a product. It is to infer the sentiment polarity or intensity on a review of a document
(Pang et al., 2008; Liu, 2012). Recently, user and product information in a review has been proven to be helpful for sentiment analysis models (Tang et al., 2015). Consequently, many studies investigate how to model user and product aspects and incorporate them into deep neural network models.
Nevertheless, none of them focuses on the relationship between users and products. This relationship between users and products typically provides the bias of a user's sentiment toward a product. For example, users A and B share similar sentiments on many products. If there is a product for which we do not know user A's sentiment, but we know user B's sentiment, we might be able to infer user A's sentiment from user B's sentiment. In addition, if a user has a high expectation toward the product, but the product does not meet the expectation, it would greatly impact the user's sentiment. Meanwhile, the interaction between users and products has proven to be useful in other tasks, such as spam detection (Wang et al., 2012) and citation recommendation (Jeong et al., 2020; Bhowmick et al.,
2021). Based on these observations, we assume that the relationship between users and products could provide a clue to help sentiment analysis.
In this paper, we, therefore, propose a new approach using graph neural networks with the pre-trained language model, namely GNNLM. In GNNLM, the relationship between the user and the product is captured by the graph neural network model as distributed representations and then combined with a distributed representation of reviews obtained from a pre-trained language model to predict the sentiment label. We conduct experiments on three benchmarks (IMDB, Yelp-2013, and Yelp2014) for sentiment classification with the user and product information. The results show that combining the relationship between the user and the product could help improve the performance of the sentiment analysis model.
## 2 Related Work
Recent studies have shown that user and product information is useful for sentiment analysis. The first study (Tang et al., 2015) argues that user and product information are consistent with a sentiment from a review. They propose UPNN that incorporates the user and product preference matrix into a CNN-based model to modify the meaning of word representation. UPDMN (Dou, 2017) uses a deep memory network to capture the user and product preferences with the LSTM-based model. NSC
(Chen et al., 2016) is the model using a hierarchical neural network with the attention mechanism to capture global user and product information.
HCSC (Amplayo et al., 2018) investigates the cold start problem for sentiment analysis with the user and product information by introducing shared user 8611
![1_image_0.png](1_image_0.png)
and product representations. DUPMN (Long et al.,
2018) uses a hierarchical LSTM-based model to encode the document with dual memory networks, one for user information and the other for production information. CMA (Ma et al., 2017) encodes the document using a hierarchical LSTM-based model, in which user and product information are injected hierarchically. BiLSTM + basis-cust (Kim et al., 2019) is a model that combines categorical metadata of users and products into the neural network model. CHIM (Amplayo, 2019) utilizes chunk-wise matrices to represent the user and product aspects and injects them into different locations of the model. IUPC (Lyu et al., 2020) is a model built on stacked attention with BERT to memorize historical reviews of a user and all reviews of a product. MA-BERT (Zhang et al., 2021) is a multiattribute BERT, where user and product aspects are incorporated into the BERT model.
Based on our survey, none of them investigates the relationship between users and products for sentiment analysis.
## 3 Our Approach
As shown in Fig. 1, our approach, GNNLM, consists of three components: 1) Graph neural networks, 2) Pre-trained language model, and 3) Classification layer. The task definition and the details of each component are described as follows.
## 3.1 Task Definition
Sentiment analysis with user and product information is a task to predict the intensity of the polarity of a review using text, user, and product information. The task is defined as follows. Given U =
{u1, u2, u3, ..., un}, P = {p1, p2, p3*, ..., p*m} and R are the set of users, products, and reviews respectively, and a user ux ∈ U writes a review rux,py ∈ R about the product py ∈ P, and r is a review represented by d sentences {s1, s2, s3*..., s*d}
and, the i-th sentence si consists of li word as
{w1, w2, w3*, ...w*li}, the objective of the task is to model the function f : (rux,py, ux, py) → η; η ∈
Z
+
[1,K]
, where η is the polarity scale of the review rux,py in the Likert scale from 1 to K, and K is the number of polarity classes.
## 3.2 Graph Neural Networks
Graph Neural Networks (GNNs) are neural models that can capture the dependency between nodes in a graph via message passing (Zhou et al., 2020).
Recently, GNNs have been shown effective for various graph-related applications, e.g., Link Prediction (Zhang and Chen, 2018), due to their ability to learn structural information from the graph. In our study, we build the user-product graph and use GNNs to learn structural information representing the relationship between users and products.
In our task, there are two types of nodes: user and product. The user-product graph is defined as the heterogeneous graph G = (VU ∪ VP , E),
where VU , VP , and E are the set of user nodes, product nodes, and edges between users and products. All users in U and products in P are used to create user and product nodes. For edges, if user ux writes a review about the product py, there are two edges: (vux, vpy) and (vpy, vux), where vux ∈ VU
and vpy ∈ VP . To avoid leaking the structural information between users and products, we only use the training set to build the graph G.
To learn representations of users and products, we use GraphSAGE (Hamilton et al., 2017) as the graph neural network operator to aggregate the structure information of the graph G. One advantage of GraphSAGE is that it can leverage the topological structure of neighbor nodes to learn and generalize embeddings of unseen nodes. Formally, the representation of nodes in the graph G is computed as follows:
$$\begin{array}{c}{{h_{\mathcal{N}_{v}}^{i}=a g g r e g a t e(h_{u}^{i-1},\forall u\in\mathcal{N}_{v})}}\\ {{{}}}\\ {{h_{v}^{i}=\sigma(W^{i}\cdot[h_{v}^{i-1};h_{\mathcal{N}_{v}}^{i}])}}\end{array}\qquad\mathrm{(2)}$$
where *aggregate*(·) is the function to aggregate information from neighbor nodes, σ(·) is the activation function, Nv is a set of all neighbor nodes of the node v, Wiis a set of weight matrices used to propagate information between different layers, and h iv is the representation of the node v at the i-th layer. By computing representations of all nodes, we could encode the relationship between the user and the product as the vector representation.
## 3.3 Pre-Trained Language Model
Pre-trained language models, such as BERT (Devlin et al., 2019) and RoBERTa(Liu et al., 2019),
can achieve remarkable performance for many NLP
tasks by the fine-tuning method. In our study, we use the pre-trained language model to learn the representation of a review. Using a word piece tokenizer (Wu et al., 2016), the review rux,py can be represented as a sequence of tokens cr*ux,py* =
{[CLS], w s1 1
, w s1 2
, ..., w s2 1
, ..., w sd ld}, where [CLS]
is a special token representing the whole sequence.
To obtain the representation of review rux,py
, we feed the sequence cr*ux,py* into the pre-trained language model as follows.
$$h_{c l s}=f_{L M}(c_{r_{u x},p_{y}};\theta_{L M})$$
; θLM ) (3)
where fLM (·) is the pre-trained language model, and θLM is its trainable parameters initialized from the pre-trained language model checkpoint.
## 3.4 Classification Layer
The classification layer is the final layer that combines the representation of the review rux,py with the representations of the user ux and the product py to predict the intensity of the polarity. In the classification layer, the representations of rux,py
,
ux, and py are concatenated and then passed into a feed-forward neural network with a rectified linear unit (*ReLU*) function to project them into the target space of polarity classes. The classification layer can be defined as:
$${\hat{p}}=R e L U(W_{K}\cdot[h_{c l s};h_{u_{x}};h_{p_{y}}]+b_{K})$$
where hcls is the representation of review rux,py from the pre-trained language model, hux and hpy are the representations of user ux and product py from GNNs, WK and bK are the parameters of the neural network. Then, the softmax function in Eq.
5 is used to normalize the polarity distribution.
$$(4)$$
$${\hat{y}}={\frac{e x p({\hat{p}})}{\sum_{i=1}^{K}e x p({\hat{p}}_{i})}}$$
$$({\mathfrak{H}})$$
where K is the number of polarity classes.
To learn and optimize our model, we use a crossentropy loss function defined as follows:
$$L=-\sum_{r\in R}\sum_{i=1}^{K}y_{r,i}\cdot l o g({\hat{y}}_{r,i})\qquad\quad(6)$$
where yr,i represents agreement with the groundtruth. Its value is 1 if the gold polarity class of the review r is i; otherwise 0.
## 4 Experiment 4.1 Experimental Setup
$$({\mathfrak{I}})$$
Setting. The experimental setting follows the same setting in the study (Tang et al., 2015). In the setting, there are three benchmarks: IMDB,
Yelp-2013, and Yelp-2014. The evaluation metrics are accuracy (Acc), and root mean squared error
(RMSE).
Implementation. In GNNLM, we implement GNNs by using SAGEConv (Hamilton et al., 2017)
and the pre-trained language model by using the RoBERTa (Liu et al., 2019) from Huggingface
(Wolf et al., 2020). Note that in our preliminary experiment using the pre-trained language models, we were unable to reproduce the results for BERT
as reported in (Lyu et al., 2020; Zhang et al., 2021)
on the IMDB dataset. However, we could achieve comparable results as presented in (Lyu et al., 2020)
by utilizing RoBERTa. To ensure fairness in the evaluation, we therefore selected RoBERTa as the pre-trained language model. The dimension of each node in GNNs and the dimension of hidden representations of RoBERTa are 768. The maximum sequence length of RoBERTa is 512. The AdamW
optimizer (Loshchilov and Hutter, 2017) is used with the learning rate set at 2e-5. The batch size is set to 32. In the fine-tuning process, the model is trained up to 10 epochs on the training set. We select the best hyper-parameters from the dev set for evaluation in the test set. The source code and the setting for the experiments are available on the GitHub repository.1 While we can simply fine-tune the pre-trained language model, the user and product representations from GNNs are randomly initialized and needs to be trained from scratch. To better learn the user and product representations before combing them, we train GNNLM with only GNNs for 100 epochs on the training set and save it as the GNNs checkpoint. In the fine-tuning process, the RoBERTa checkpoint and GNNs checkpoint are loaded to initialize the models.
1https://github.com/knatthawut/gnnlm
Methods IMDB Yelp-2013 Yelp-2014
Acc RMSE Acc RMSE Acc RMSE
Majority (Tang et al., 2015) 19.6 2.495 39.2 1.097 41.1 1.060 BERT (IUPC) (Lyu et al., 2020) 47.9 1.243 67.2 0.647 67.5 0.621 BERT (MA-BERT) (Zhang et al., 2021) 51.8 1.191 67.7 0.627 67.2 0.630 UPNN (Tang et al., 2015) 43.5 1.602 59.6 0.803 60.8 0.764 UPDMN (Dou, 2017) 46.5 1.351 61.3 0.720 63.9 0.662 NSC (Chen et al., 2016) 53.3 1.281 65 0.692 66.7 0.654 HCSC (Amplayo et al., 2018) 54.2 1.213 65.7 0.660 - - DUPMN (Long et al., 2018) 53.9 1.279 66.2 0.667 67.6 0.639
CMA (Ma et al., 2017) 54.0 1.191 66.4 0.677 67.6 0.637 BiLSTM+basis-cust (Kim et al., 2019) - - 67.1 0.662 - -
CHIM (Amplayo, 2019) 56.4 1.161 67.8 0.646 69.2 0.629 IUPC (Lyu et al., 2020) 53.8 1.151 70.5 0.589 71.2 0.592
MA-BERT (Zhang et al., 2021) 57.3 **1.042** 70.3 0.588 71.4 0.573
ISAR (Wen et al., 2023) 56.6 1.186 69.1 0.619 69.3 0.621 GNNLM-GNNs 32.6 2.095 46.7 1.094 46.2 1.108 GNNLM-LM 48.3 1.191 67.2 0.618 67.3 0.616
GNNLM 54.4 1.102 72.2 0.573 72.1 **0.568**
For the ablation study, we also evaluate GNNs and RoBERTa separately. GNNLM-GNNs denotes our model with only GNNs, while GNNLM–LM
refers to our model with only RoBERTa.
Baseline. We compare our GNNLM with all systems from the leaderboard2for this task. On the leaderboard, there are 10 systems: UPNN
(Tang et al., 2015), UPDMN (Dou, 2017), NSC
(Chen et al., 2016), HCSC (Amplayo et al., 2018),
DUPMN (Long et al., 2018), CMA (Ma et al.,
2017), BiLSTM+basis-cust (Kim et al., 2019),
CHIM (Amplayo, 2019), IUPC (Lyu et al., 2020) and MA-BERT (Zhang et al., 2021). In addition, we conduct a comparison between our approach and ISAR (Wen et al., 2023), a recently published baseline that employs graph ranking to model the interaction between users and products.
Moreover, we use three additional baselines: Majority (Tang et al., 2015), BERT (UPIC) (Lyu et al.,
2020), and BERT (MA-BERT) (Zhang et al., 2021).
Majority always chooses the polarity class based on the majority labels in the training set. Both BERT
(UPIC) and BERT (MA-BERT) are BERT models.
## 4.2 Result And Discussion
The experimental results are listed in Table 1. Considering our variations of GNNLM models, we found that GNNLM outperforms GNNLM-GNNs and GNNLM-LM. It infers that the representation learned from the relationship between users and products could help improve the performance of sentiment analysis.
GNNLM-GNNs mostly achieves better results than Majority. Majority could be considered as the heuristic approach using the majority polarity between users and products. From the results, GNNLM-GNNs could encode structural information, which is more useful than the majority polarity between users and products. Nonetheless, GNNLM-GNNs could suffer from the sparsity problem. The density of the user-product graph on IMDB, Yelp-2013, and Yelp-2014 is 0.06, 0.05, and 0.02. The graph in Yelp-2014 is sparser than the others. This sparsity problem could be the reason for no improvement in RMSE of GNNLMGNNs compared with Majority. To further study the impact of the sparsity problem, we analyze the results based on the degree of a node in the graph.
We found that nodes with lower degrees tend to provide lower performance. Therefore, the sparsity impacts the performance of GNNLM-GNNs.
Comparing our GNNLM with the systems on the leaderboard, we found that GNNLM could achieve the best performance on the Yelp-2013 and Yelp-2014 datasets. For the IMDB dataset, GNNLM could outperform most systems, except for MA-BERT in both metrics and CHIM, ISAR
in the Acc metric. GNNLM could not surpass MABERT due to the performance of the base model.
GNNLM-LM, BERT (IUPC), and BERT (MABERT) are pre-trained language models without the user and product information. On the Yelp-2013 and Yelp-2014 datasets, the performances of these approaches are comparable; however, on the IMDB
dataset, BERT (MA-BERT) significantly outperforms GNNLM-LM and BERT (IUPC). Therefore, the large difference in the base model's performance could be the main reason for the gap between GNNLM and MA-BERT on the IMDB
dataset.
## 5 Conclusion
This paper introduces GNNLM, GNNs with the pre-trained language model for sentiment analysis with user and product information. Unlike previous studies, we incorporate the relationship between users and products into the model using GNNs.
Experimental results show that the representations learned from the relationship between users and products contribute to sentiment analysis models. In the future, we will attempt to model user and product aspects from reviews into the graph.
## Limitations
Our approach relies on the pre-trained language model performance. Although using a graph neural network with the user-product graph helps improve the performance in sentiment analysis, the pre-trained language model still plays an important role in the task. If the pre-trained language model cannot obtain good results, it will affect the performance as discussed on the IMDB dataset.
Furthermore, the graph density could affect the performance of GNNLM-GNNs, as discussed in the experimental results. Since GNNLM is built on top of GNNLM-GNNs, GNNLM is also affected by the sparsity problem. As already reported, the density of the user-product graph on the IMDB,
Yelp-2013, and Yelp-2014 datasets are 0.06, 0.05, and 0.02, respectively. The greater the value is, the denser the graph is. Comparing GNNLM with GNNLM-LM, we found that the improvements we could obtain on the IMDB, Yelp-2013, and Yelp2014 datasets are 6.1, 5.0, and 4.8, respectively.
The trend of improvement conforms with the density of the graph. Therefore, if the user-product graph is very sparse, it would greatly affect the performance of GNNLM.
## References
Reinald Kim Amplayo. 2019. Rethinking attribute representation and injection for sentiment classification.
In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5602–
5613, Hong Kong, China. Association for Computational Linguistics.
Reinald Kim Amplayo, Jihyeok Kim, Sua Sung, and Seung-won Hwang. 2018. Cold-start aware user and product attention for sentiment classification. In *Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long* Papers), pages 2535–2544, Melbourne, Australia. Association for Computational Linguistics.
Anubrata Bhowmick, Ashish Singhal, and Shenghui Wang. 2021. Augmenting context-aware citation recommendations with citation and co-authorship history. In *18th International Conference on Scientometrics and Informetrics, ISSI 2021*, pages 115–120.
International Society for Scientometrics and Informetrics.
Huimin Chen, Maosong Sun, Cunchao Tu, Yankai Lin, and Zhiyuan Liu. 2016. Neural sentiment classification with user and product attention. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1650–1659, Austin, Texas. Association for Computational Linguistics.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics.
Zi-Yi Dou. 2017. Capturing user and product information for document level sentiment analysis with deep memory network. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 521–526, Copenhagen, Denmark.
Association for Computational Linguistics.
Will Hamilton, Zhitao Ying, and Jure Leskovec. 2017.
Inductive representation learning on large graphs. *Advances in neural information processing systems*, 30.
Chanwoo Jeong, Sion Jang, Eunjeong Park, and Sungchul Choi. 2020. A context-aware citation recommendation model with bert and graph convolutional networks. *Scientometrics*, 124:1907–1922.
Jihyeok Kim, Reinald Kim Amplayo, Kyungjae Lee, Sua Sung, Minji Seo, and Seung-won Hwang. 2019.
Categorical metadata representation for customized text classification. *Transactions of the Association* for Computational Linguistics, 7:201–215.
Bing Liu. 2012. Sentiment analysis and opinion mining.
Synthesis lectures on human language technologies, 5(1):1–167.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019.
Roberta: A robustly optimized bert pretraining approach. *arXiv preprint arXiv:1907.11692*.
Yunfei Long, Mingyu Ma, Qin Lu, Rong Xiang, and Chu-Ren Huang. 2018. Dual memory network model for biased product review classification. In Proceedings of the 9th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis, pages 140–148, Brussels, Belgium. Association for Computational Linguistics.
Ilya Loshchilov and Frank Hutter. 2017. Decoupled weight decay regularization. *arXiv preprint* arXiv:1711.05101.
Chenyang Lyu, Jennifer Foster, and Yvette Graham.
2020. Improving document-level sentiment analysis with user and product context. In Proceedings of the 28th International Conference on Computational Linguistics, pages 6724–6729, Barcelona, Spain (Online). International Committee on Computational Linguistics.
Dehong Ma, Sujian Li, Xiaodong Zhang, Houfeng Wang, and Xu Sun. 2017. Cascading multiway attentions for document-level sentiment classification. In Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 1:
Long Papers), pages 634–643, Taipei, Taiwan. Asian Federation of Natural Language Processing.
Bo Pang, Lillian Lee, et al. 2008. Opinion mining and sentiment analysis. *Foundations and Trends®* in information retrieval, 2(1–2):1–135.
Duyu Tang, Bing Qin, and Ting Liu. 2015. Learning semantic representations of users and products for document level sentiment classification. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing
(Volume 1: Long Papers), pages 1014–1023, Beijing, China. Association for Computational Linguistics.
Guan Wang, Sihong Xie, Bing Liu, and Philip S Yu.
2012. Identify online store review spammers via social review graph. *ACM Transactions on Intelligent* Systems and Technology (TIST), 3(4):1–21.
Jiahui Wen, Anwen Huang, Mingyang Zhong, Jingwei Ma, and Youcai Wei. 2023. Hybrid sentiment analysis with textual and interactive information. *Expert* Systems with Applications, 213:118960.
Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, et al. 2020. Transformers: State-of-the-art natural language processing. In *Proceedings of the 2020 conference on empirical methods in natural language* processing: system demonstrations, pages 38–45.
Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, et al.
2016. Google's neural machine translation system:
Bridging the gap between human and machine translation. *arXiv preprint arXiv:1609.08144*.
Muhan Zhang and Yixin Chen. 2018. Link prediction based on graph neural networks. Advances in neural information processing systems, 31.
You Zhang, Jin Wang, Liang-Chih Yu, and Xuejie Zhang. 2021. MA-BERT: Learning representation by incorporating multi-attribute knowledge in transformers. In *Findings of the Association for Computational* Linguistics: ACL-IJCNLP 2021, pages 2338–2343, Online. Association for Computational Linguistics.
Jie Zhou, Ganqu Cui, Shengding Hu, Zhengyan Zhang, Cheng Yang, Zhiyuan Liu, Lifeng Wang, Changcheng Li, and Maosong Sun. 2020. Graph neural networks: A review of methods and applications. *AI Open*, 1:57–81.
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
✓ A1. Did you describe the limitations of your work?
Limitations Section A2. Did you discuss any potential risks of your work?
Not applicable. Left blank.
✓ A3. Do the abstract and introduction summarize the paper's main claims?
Abstract and Introduction
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✓ **Did You Use Or Create Scientific Artifacts?** Experimental Setup
✓ B1. Did you cite the creators of artifacts you used?
Experimental Setup
B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
Not applicable. Left blank.
B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
Not applicable. Left blank.
B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
Not applicable. Left blank.
B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
Not applicable. Left blank.
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
Experimental Setup
## C ✓ **Did You Run Computational Experiments?** Experimental Setup
C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used? Not applicable. Left blank.
The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance.
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Experimental Setup
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
Experimental Setup C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
Not applicable. Left blank.
D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
No response.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? No response.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
No response.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
No response.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
No response. |
nag-etal-2023-entropy | Entropy-guided Vocabulary Augmentation of Multilingual Language Models for Low-resource Tasks | https://aclanthology.org/2023.findings-acl.548 | Multilingual language models (MLLMs) like mBERTpromise to extend the benefits of NLP research to low-resource languages (LRLs). However, LRL words are under-represented in the wordpiece/subword vocabularies of MLLMs. This leads to many LRL words getting replaced by UNK, or concatenated from morphologically unrelated wordpieces, leading to low task accuracy. (Pre)-training MLLMs after including LRL documents is resource-intensive in terms of both human inputs and computational resources. In response, we propose EVALM (entropy-based vocabulary augmented language model), which uses a new task-cognizant measurement to detect the most vulnerable LRL words, whose wordpiece segmentations are undesirable. EVALM then provides reasonable initializations of their embeddings, followed by limited fine-tuning using the small LRL task corpus. Our experiments show significant performance improvements and also some surprising limits to such vocabulary augmentation strategies in various classification tasks for multiple diverse LRLs, as well as code-mixed texts. We will release the code and data to enable further research. | # Entropy-Guided Vocabulary Augmentation Of Multilingual Language Models For Low-Resource Tasks
Arijit Nag IIT Kharagpur [email protected] Bidisha Samanta IIT Kharagpur [email protected] Animesh Mukherjee IIT Kharagpur [email protected] Niloy Ganguly IIT Kharagpur [email protected]
## Abstract
Multilingual language models (MLLMs) like mBERT promise to extend the benefits of NLP
research to low-resource languages (LRLs).
However, LRL words are under-represented in the wordpiece/subword vocabularies of MLLMs. This leads to many LRL words getting replaced by UNK, or concatenated from morphologically unrelated wordpieces, leading to low task accuracy. (Pre)-training MLLMs after including LRL documents is resource-intensive in terms of both human inputs and computational resources. In response, we propose EVALM (entropy-based vocabulary augmented language model), which uses a new task-cognizant measurement to detect the most vulnerable LRL words, whose wordpiece segmentations are undesirable. EVALM then provides reasonable initializations of their embeddings, followed by limited fine-tuning using the small LRL task corpus. Our experiments show significant performance improvements and also some surprising limits to such vocabulary augmentation strategies in various classification tasks for multiple diverse LRLs, as well as code-mixed texts. We will release the code and data to enable further research1.
## 1 Introduction
It is common practice to start with a multilingual language model (MLLM) like mBERT2 or XLM-R (Conneau et al., 2020), which has been pre-trained with large multilingual corpora, and fine-tune the MLLM for diverse downstream tasks.
Although MLLMs support many low-resource languages (LRLs), closer inspection of these MLLMs reveals that the portion of vocabulary allotted to LRLs can be orders of magnitude smaller than that allotted to high-resource languages (HRLs) such as English (Table 1).
1https://github.com/NLPatCNERG/EVALM
2https://github.com/google-research/bert/
blob/master/multilingual.md Soumen Chakrabarti IIT Bombay [email protected]
| Language | Vocab count | Percentage (%) |
|------------------------------------------------------|---------------|------------------|
| Bengali | 946 | 0.79 |
| Hindi | 1852 | 1.55 |
| Gujarati | 404 | 0.34 |
| Kannada | 653 | 0.55 |
| Malayalam | 565 | 0.47 |
| Tamil | 832 | 0.7 |
| Telugu | 887 | 0.74 |
| English∗ | 64529–78984 | 53.98–66.07 |
| Table 1: Representation of the vocabulary of various | | |
Due to this imbalance, sometimes an LRL word may not be possible to segment into wordpieces as per the MLLM vocabulary, leading to the LRL
word being conflated with the UNK (unknown) token. An even more insidious situation is that the MLLM vocabulary has enough (over-fragmented)
wordpieces to assemble almost any LRL word
(thereby dodging the obvious UNK alert), but the embeddings of these wordpieces collide with unrelated usage in HRLs, and/or are so sparsely trained that contextual aggregations fail to yield satisfactory LRL word embeddings which may lead to poor LRL task performance. On the other hand, significant human and computational investments are needed to create task-specific LRL corpora that are large enough to augment and retrain the MLLM
vocabulary.
In this work, we address the setting where a MLLM (that is presumably deficient in LRL coverage) must be minimally fine-tuned after modest modification to its wordpiece vocabulary, guided by specific LRL tasks. We design a measure of damage to an LRL word, caused by wordpiece fragmentation, based on a suitably defined notion of entropy of the word and constituent wordpieces, with respect to the LRL task. This measure then guides the selection of LRL words with which the vocabulary should be augmented. Subsequently, we propose various ways to initialize the embeddings of these newly-introduced words, including using information from the LRL itself, to 'importing' information from HRLs. We call the resulting system EVALM (entropy-based vocabulary augmented language model).
We study the effect of EVALM on an existing MLLM during the fine-tuning stage for various downstream classification tasks covering multiple LRLs and also a code-mixed language. Our study shows that, for most of the datasets, EVALM's vocabulary augmentation strategy helps improve LRL task performance by greater margins than recent best practices (Hong et al., 2021; Hofmann et al., 2022). A detailed analysis of successes and failures delineates the perimeter of EVALM's capabilities and guides our design choices.
## 2 Related Work
Continued pre-training (Tai et al., 2020; Ebrahimi and Kann, 2021; Wang et al., 2020; Chau et al., 2020) with or without vocabulary augmentation of existing LMs like monolingual BERT, multilingual BERT (mBERT), XLM-R, etc., proves beneficial for improving domain and languagespecific performances over various tasks. Some works (Ruzzetti et al., 2021; Yu et al., 2021)
focus on rare/OOV words. Liu et al. (2021) propose an embedding generator module in the pretrain-finetune pipeline to resolve vocabulary gaps. Adaptors (Sachidananda et al., 2021; Moon and Okazaki, 2020; Hofmann et al., 2021) are also showing promising outcomes in LRL modeling.
Chung et al. (2020) explore multilingual vocabulary generation from language clusters. Minixhofer et al. (2021) transfer English LMs to new languages without expensive computation. Hofmann et al. (2022) propose a simple algorithm which modifies the tokenization process to preserve the morphological structure of a word. Others (Wang et al., 2019; Hong et al., 2021) focus on embedding initialization for newly added vocabulary words which are word fragments, which is also among our concerns.
## 3 Our System: Evalm
EVALM has three key components. The purpose of the first component (Section 3.1) is to identify
(based on only the train fold) a subset of *vulnerable* LRL words whose assembly from wordpieces is likely to distort the embedding information made available to LRL labeling tasks. The second component (Section 3.2) comprises various possible
Algorithm 1 LRL vocabulary selection.
Inputs:
- C-class LRL task training corpus D,
- MLLM tokenizer T
- word frequency threshold θ
- entropy reduction threshold γ
- maximum size of augmentation set Vnew 1: W ← all words from corpus D
2: S ←∪w∈W T (w)
3: compute *n(w, c*) for all LRL words w ∈ *W, c* ∈ C 4: compute *n(s, c*) for all wordpieces s ∈ *S, c* ∈ C 5: compute p(c|w), p(c|*s), H(w), H(s*) as described 6: *candidates* = ∅
7: for each LRL word w ∈ W do 8: compute average wordpiece entropy
∑
HS(w) =
s∈T (w) H(s)/|T (w)| 9: compute word frequency n(w) = ∑c *n(w, c*)
10: compute ∆H(w) = HS(w) − H(w)
HS(w)
11: features of w are ⟨n(w), |T (w)|, ∆H(w)⟩ 12: if n(w) ≥ θ and ∆H(w) ≥ γ **then**
13: add the feature triple to *candidates* 14: sort *candidates* in decreasing ∆H
Output: Prefix of *candidates* of specified size as Vnew
policies to initialize the embeddings of the newlyintroduced LRL words. In the third component, as in AVocaDo (Hong et al., 2021), we prevent overfitting to a small LRL task corpus by regularizing embeddings of corresponding wordpieces of each sentence obtained by the pre- and postaugmentation MLLM tokenizers.
## 3.1 Vulnerable Lrl Word Selection
We need a computationally efficient, task-sensitive surrogate of the value of introducing an LRL word into the wordpiece vocabulary. (Here we augment the vocabulary with whole LRL words, blocking their fragmentation entirely. More clever sharing of fragments is left for future work.)
Suppose LRL word w is not in the MLLM
vocabulary; w is fragmented into wordpiece sequence T (w) = s1*, . . . , s*T by the MLLM tokenizer T . The LRL task has C class labels. A specific label is denoted c ∈ [C] = {1*, . . . , C*}. The counts of w and constituent wordpieces stin each class c are denoted n(*w, c*) and n(st, c). Based on these counts, we define the following multinomial distributions:
p(c|•) = n(•, c)/∑c′ n(•, c′) (1)
where - = *w, s*t, etc. Based on this we define the entropy H(•) = −∑c p(c|•)log p(c|•) (2)
Suppose H(w) is small. This means w is potentially a good feature for the LRL task. Now suppose a wordpiece st has large H(st). That means stis being shared across other words that are distributed more evenly across classes. If this is the case for most st, then fragmentation of w may be a serious problem. To combine information from all wordpieces, we average their entropies, and use the *relative increase in entropy*, going from LRL
word to wordpieces, as one signal for the danger of fragmenting w. As an example, suppose the word 'धरम' (religion) occurs ten times in a threeclass sentiment analysis dataset with the class distribution of 'positive', 'neutral', and 'negative' as
(1,1,8). Its wordpieces have class distributions
'ध' (100,85,80), '\#\#र' (130,235,250), and '\#\#म'
(130,90,125). Then as per equation 2, H('धरम') =
0.639, H('ध') = 1.094, H('\#\#र') = 1.062, and H('\#\#र') = 1.086. The average wordpiece entropy is HS('धरम') = 1.094+1.062+1.086 3 = 1.081, and the percentage of entropy reduction from average wordpiece to word entropy is about 41%.
We also retain two simpler signals: the number of fragments |T (w)|, and the frequency of w in the LRL task corpus. LRL words are sorted on the amount of entropy decrease and the top LRL
words proposed for vocabulary augmentation. We remove words with very low frequency and retain a prefix of specified size to obtain Vnew, the LRL
words to be added to the MLLM vocabulary. Algorithm 1 shows a high-level pseudocode.
## 3.2 Embedding Initialization
Here we describe the different ways to initialize the embeddings of newly-added LRL words.
InitLRL: The embedding of the newlyintroduced LRL word is initialized using other LRL wordpieces already in the MLLM dictionary.
Suppose we add Bengali word 'হাসপাতাল', ('hospital' in English). Suppose the existing MLLM
tokenizer splits it into ['হ', '\#\#◌াস', '\#\#প',
'\#\#◌াত', '\#\#◌াল']. Then we initialize the embedding of 'হাসপাতাল' with the average of the existing MLLM embeddings of the fragments.
InitHRL: Here we translate 'হাসপাতাল' to English ('hospital'), tokenize it using T , and take the average embedding of the tokens in the list.
InitMix: We use the average of InitLRL and InitHRL embeddings.
InitRand: We randomly initialize the embeddings of the newly-added words.
It is challenging to learn good contextual embedding for words in Vnew due to very small taskspecific training data compared to the MLLM pretraining corpus. Therefore, we found it neces-
![2_image_0.png](2_image_0.png)
sary to apply some regularization to avoid overfitting during fine-tuning. Let T , T′ be the initial and final MLLM tokenizers. For a particular sentence S = w1, w2*, ..., w*I with words wi, we will get two different tokenizations; these will generally lead to different contextual embeddings E = (e1*, . . . , e*K) and E′ = (e′1
, . . . , e′L
); generally K ̸= L. We average-pool these to get vectors e, e′ which a final layer uses for the classification task, with losses ℓT and ℓT ′. We also use (e+e′)/2 for a third classification, with loss ℓmix. The overall training loss is ℓT + ℓT ′ + ℓmix, where ℓT and ℓmix are expected to reduce overfitting.
## 4 Experiments 4.1 Datasets And Evaluation Metric
We experiment with six short multi-class text classification tasks covering four Indian languages and a Hindi-English code-mixed dataset. We show the details of the datasets in Tables 2 and 6. We use mBERT as the MLLM and report macro-F1 (we report the accuracy metric in Appendix B). Details of model hyperparameters are present in Appendix C.
## 4.2 Quantitative Results
In Figure 1, we plot macro-F1 against the extent of vocabulary augmentation. Green, orange, and blue lines show the performance with InitLRL, InitHRL, and InitMix initialization, respectively.
Corresponding colored bands show 1-standard deviation spreads.
Vnew **helps:** For all tasks, including Vnew is better than baseline MLLM, and the gap is usually significant. This shows that even minimal training of newly added LRL tokens that used to be UNK or over-fragmented helps improve performance.
More augmentation̸⇒**larger lift:** We expected that larger Vnew would monotonically improve performance, but this was not universally the case.
Inclusion of non-informative words, as we grow Vnew (∆H decreases with high variance as shown in Appendix B Figure 3), maybe a reason.
![3_image_0.png](3_image_0.png)
![3_image_1.png](3_image_1.png)
Initialization does not matter much: Although there are cases where InitHRL or InitMix performs better than InitLRL, we did not find significant performance difference between different embedding initialization of new LRL words. Transfer of embeddings from a well-represented HRL is the likely reason. We also check the performance by randomly initializing the Vnew words and find, for almost all the cases, random initialization performance, both for macro-F1(in Figure 1) and accuracy(in Appendix B Figure 2), is lesser compared to InitHRL, InitLRL, or InitMix. It suggests meaningful initialization helps.
Comparison with recent approaches: We compare EVALM with AVocaDo (Hong et al., 2021)
keeping Vnew comparable in size. Table 4 shows that AVocaDo leads to performance *degradation* for all LRL datasets. The lack of domainspecificity for our datasets may be why AVocaDo's performance dropped. We also compare with FLOTA (Hofmann et al., 2022) in Figure 1.
For all datasets except GLUECoS Hi-En codemix dataset, EVALM performs better than FLOTA. A possible explanation is that mBERT vocabulary already includes many English as well as Hindi words, which helps FLOTA better compose embeddings of morphological components of English and Hindi words compared to other Indian languages.
![4_image_0.png](4_image_0.png)
Table 4: Here last two rows show the performance between best performing model of EVALM with AVocaDo. (a)–(f) are the datasets/tasks defined in Table 2.
Tasks→ **(a) (b) (c) (d) (e) (f)**
![4_image_1.png](4_image_1.png)
![4_image_4.png](4_image_4.png)
EVALM 73.13 68.93 69.57 89.13 96.10 59.03
−ℓreg 71.53 66.30 68.67 88.63 92.23 56.47 Table 5: Ablation. The first and second rows show our best model performance, trained with/without ℓreg respectively.
Regularization helps: Table 5 shows that EVALM with AVocaDo-style regularization performs better than without it, for all datasets.
Cases where EVALM hurts: The samples in Table 3 show that EVALM generally helps by spotting words important for predicting the correct class. This is shown in the first two examples, where the added vocabulary (∆H=100%) tipped the prediction toward the gold label. But the last two examples show cases where for a word, the train and test set frequency distribution among target classes are different. As a consequence, these words may become misleading at test time.
## 5 Conclusion
We have proposed a simple and effective method to augment an MLLM wordpiece vocabulary with LRL words that are important for LRL classification tasks. Our study, involving several Indian languages, shows a consistent positive impact of vocabulary augmentation and fine-tuning. We find more augmentation does not guarantee performance improvement, and different embedding initialization fails to show significant performance differences among themselves. We also show that regularization is crucial to prevent overfitting new LRL word embeddings during fine-tuning.
We have limited the augmentation to whole LRL
words, and a judicious selection of LRL wordpieces may improve performance. We also want to extend to other target tasks (especially language generation) and a more diverse set of LRLs.
## 6 Limitations
While EVALM demonstrates that vocabulary augmentation with LRL task performance as objective requires different priorities from vocabulary augmentation for improving representation for its own sake, our work opens up several avenues for
![4_image_2.png](4_image_2.png)
![4_image_3.png](4_image_3.png)
exploration. Our understanding of the potential conflict between fidelity of LRL word representation from wordpieces and LRL task class discrimination requirements remains far from complete, particularly when we extend from sequence-tosingle-label applications to sequence labeling (as in POS and NER tagging) and further to sequenceto-sequence applications (such as translation). Perhaps, further experiments with mBERT and other MLLMs will further our understanding of these trade-offs. While initializing an LRL word embedding using InitHRL or InitMix, we depend on automatic machine translation, which can be errorprone. Ranking by ∆H and picking a prefix fails to discount informative but correlated features. A
more sophisticated formulation of loss of information owing to fragmentation, taking multiple LRL
words into account simultaneously, may alleviate this problem. In the short term, these two limitations may deserve closer scrutiny.
## References
Gaurav Arora. 2020. iNLTK: Natural language toolkit for indic languages. In *Proceedings of Second Workshop for NLP Open Source Software (NLP-OSS)*,
pages 66–71, Online. Association for Computational Linguistics.
Ethan C. Chau, Lucy H. Lin, and Noah A. Smith. 2020.
Parsing with multilingual BERT, a small corpus, and a small treebank. In *Findings of the Association* for Computational Linguistics: EMNLP 2020, pages 1324–1334, Online. Association for Computational Linguistics.
Hyung Won Chung, Dan Garrette, Kiat Chuan Tan, and Jason Riesa. 2020. Improving multilingual models with language-clustered vocabularies. In *Proceedings of the 2020 Conference on Empirical Methods* in Natural Language Processing (EMNLP), pages 4536–4546, Online. Association for Computational Linguistics.
Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer, and Veselin Stoyanov. 2020. Unsupervised cross-lingual representation learning at scale. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*, pages 8440–
8451, Online. Association for Computational Linguistics.
Abteen Ebrahimi and Katharina Kann. 2021. How to adapt your pretrained multilingual model to 1600 languages. In *Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics*
and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers),
pages 4555–4567, Online. Association for Computational Linguistics.
Valentin Hofmann, Janet Pierrehumbert, and Hinrich Schütze. 2021. Superbizarre is not superb: Derivational morphology improves BERT's interpretation of complex words. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1:
Long Papers), pages 3594–3608, Online. Association for Computational Linguistics.
Valentin Hofmann, Hinrich Schütze, and Janet Pierrehumbert. 2022. An embarrassingly simple method to mitigate undesirable properties of pretrained language model tokenizers. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics.
Jimin Hong, TaeHee Kim, Hyesu Lim, and Jaegul Choo.
2021. AVocaDo: Strategy for adapting vocabulary to downstream domain. In *Proceedings of the* 2021 Conference on Empirical Methods in Natural Language Processing, pages 4692–4700, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Khondoker Ittehadul Islam, Md Saiful Islam, Sudipta Kar, and Mohammad Ruhul Amin. 2021. Sentnob:
A dataset for analysing sentiment on noisy bangla texts. In *Proceedings of the 2021 Conference on* Empirical Methods in Natural Language Processing
(EMNLP). Association for Computational Linguistics.
Divyanshu Kakwani, Anoop Kunchukuttan, Satish Golla, Gokul N.C., Avik Bhattacharyya, Mitesh M. Khapra, and Pratyush Kumar. 2020. IndicNLPSuite: Monolingual Corpora, Evaluation Benchmarks and Pre-trained Multilingual Language Models for Indian Languages. In *Findings of EMNLP*.
Md. Rezaul Karim, Bharathi Raja Chakravarti, John P.
McCrae, and Michael Cochez. 2020. Classification benchmarks for under-resourced bengali language based on multichannel convolutional-lstm network.
In *7th IEEE International Conference on Data Science and Advanced Analytics (IEEE DSAA,2020)*.
Simran Khanuja, Sandipan Dandapat, Anirudh Srinivasan, Sunayana Sitaram, and Monojit Choudhury.
2020. GLUECoS: An evaluation benchmark for code-switched NLP. In *Proceedings of the 58th Annual Meeting of the Association for Computational* Linguistics, pages 3575–3585, Online. Association for Computational Linguistics.
Xin Liu, Baosong Yang, Dayiheng Liu, Haibo Zhang, Weihua Luo, Min Zhang, Haiying Zhang, and Jinsong Su. 2021. Bridging subword gaps in pretrainfinetune paradigm for natural language generation.
In Proceedings of the 59th Annual Meeting of the
Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 6001–6011, Online. Association for Computational Linguistics.
Benjamin Minixhofer, Fabian Paischer, and Navid Rekabsaz. 2021. Wechsel: Effective initialization of subword embeddings for cross-lingual transfer of monolingual language models.
Sangwhan Moon and Naoaki Okazaki. 2020. PatchBERT: Just-in-time, out-of-vocabulary patching. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP),
pages 7846–7852, Online. Association for Computational Linguistics.
Elena Sofia Ruzzetti, Leonardo Ranaldi, Michele Mastromattei, Francesca Fallucchi, and Fabio Massimo Zanzotto. 2021. Lacking the embedding of a word? look it up into a traditional dictionary.
Vin Sachidananda, Jason Kessler, and Yi-An Lai. 2021.
Efficient domain adaptation of language models via adaptive tokenization. In Proceedings of the Second Workshop on Simple and Efficient Natural Language Processing, pages 155–165, Virtual. Association for Computational Linguistics.
Wen Tai, H. T. Kung, Xin Dong, Marcus Comiter, and Chang-Fu Kuo. 2020. exBERT: Extending pretrained models with domain-specific vocabulary under constrained training resources. In Findings of the Association for Computational Linguistics: EMNLP
2020, pages 1433–1439, Online. Association for Computational Linguistics.
Hai Wang, Dian Yu, Kai Sun, Jianshu Chen, and Dong Yu. 2019. Improving pre-trained multilingual model with vocabulary expansion. In *Proceedings* of the 23rd Conference on Computational Natural Language Learning (CoNLL), pages 316–327, Hong Kong, China. Association for Computational Linguistics.
Zihan Wang, Karthikeyan K, Stephen Mayhew, and Dan Roth. 2020. Extending multilingual BERT to low-resource languages. In *Findings of the Association for Computational Linguistics: EMNLP 2020*,
pages 2649–2656, Online. Association for Computational Linguistics.
Wenhao Yu, Chenguang Zhu, Yuwei Fang, Donghan Yu, Shuohang Wang, Yichong Xu, Michael Zeng, and Meng Jiang. 2021. Dict-bert: Enhancing language model pre-training with dictionary.
# Entropy-Guided Vocabulary Augmentation Of Multilingual Language Models For Low-Resource Tasks (Appendix)
## A Discussion On Vulnerable Lrl Words
We discuss some natural ideas to determine LRL words vulnerable to improper wordpiece segmentation.
UNK and fragment counts: A natural impulse may be to augment the vocabulary with all LRL words
(in the task's train fold) that cannot be assembled from wordpieces in the original vocabulary (i.e., those that become UNK tokens). This is neither necessary nor sufficient. Many UNK words may offer little signal toward labelling. As an example, suppose the Bengali word 'েদায়া' (translate to 'prayer' in English)
split to a single ['[UNK]'] token after passing through mBERT tokenizer can be helpful for sentiment analysis classification task. But the word 'েজলায়' (translate to 'in the district' in English) also split to a single ['[UNK]'] token but might not carry any particular signal for the sentiment classification. On the other hand, simply adding all LRL characters as 'wordpieces' precludes UNKs entirely but by no means assures us that the LRL words thus assembled will obtain contextual embeddings of good quality. The word 'ভালবাসা' (translate to 'love' in English) splits to ['ভ', '\#\#◌াল', '\#\#বা', '\#\#সা'], where all these word fragments do not carry any semantic meaning in Bengali.
Contextual embedding distortion: Another natural idea is to ask if embeddings of wordpieces assembled into the LRL word can be combined by the (typically transformer-like) MLLM network into a good-quality embedding for the LRL word. This can be ascertained only if we have access to a reference embedding for the LRL word, which can be obtained only after introducing the LRL word into the vocabulary and re-training the MLLM! Another problem with this approach is that it is not guided by the impact of the distortion of embedding of a LRL word on end-task accuracy.
Tasks Language Train instances **Test instances**
(a) IITP Product Review (Kakwani et al., **2020)** Hindi 4182 523
(b) Bengali Sentiment Analysis (Islam et al., **2021)** Bengali 12576 1587
(c) Bengali HateSpeech (Karim et al., **2020)** Bengali 981 295
(d) Gujarati headline classification (Arora, **2020)** Gujarati 5269 659
(e) Malayalam headline classification (Arora, **2020)** Malayalam 5036 630
(f) GLUECoS Sentiment Analysis (Khanuja et al., **2020)** Hindi-English code-mix 10079 1260 Table 6: Salient statistics of tasks. Note the small size of LRL datasets.
## B Supplementary Results
In Figure 2, we report the accuracy with vocab augmentation under different embedding initialization
techniques for all the datasets.
Tasks→ (a) (b) (c) (d) (e) (f)
VNew 500 3000 1000 1500 1000 1500
EVALM 5.58 5.76 4.40 5.81 8.70 4.26
AVocaDo 3.35 4.07 3.65 3.62 4.08 3.69
Table 7: Here we compare the average length of the tokens added in our best performing EVALM model with
AVocaDo. It shows except one all the cases AVocaDo generates smaller tokens than EVALM. (a)–(f) are the
datasets/tasks defined in Table 2.
## C Experimental Settings
In all experiments, we trained the models on a single NVIDIA RTX A6000 with 48GB of memory. We implemented all models with PyTorch using the Transformers library from Huggingface. Our model has
∼29M trainable parameters, and it takes 10-45 minutes to train, depending on the size of the datasets.
## C.1 Hyperparameters
We search for the best hyperparameters manually based on the macro F1 scores. These parameter values are listed in Table 8.
![7_image_0.png](7_image_0.png)
![7_image_1.png](7_image_1.png)
![7_image_2.png](7_image_2.png)
![7_image_3.png](7_image_3.png)
| Hyperparameter | Value |
|-----------------------------------------------|--------------------------------------------------------------|
| mBERT version | bert-base-multilingual-cased |
| Batch size | 16, 32 |
| Epoch | 15 |
| Learning rate | 2×10−5 , 5×10−5 |
| max_seq_len | 128 |
| θ | 1 |
| γ | 25 |
| Table 8: Hyperparameters used in experiments. | We find the best hyperparameter settings using manual search |
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
✓ A1. Did you describe the limitations of your work?
Section 6 Limitations A2. Did you discuss any potential risks of your work?
Not applicable. Left blank.
✓ A3. Do the abstract and introduction summarize the paper's main claims?
Abstract and section 1 introduction
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✗ **Did You Use Or Create Scientific Artifacts?**
Left blank.
B1. Did you cite the creators of artifacts you used?
No response.
B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
No response.
B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
No response.
B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
No response.
B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
No response.
B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
No response.
## C ✓ **Did You Run Computational Experiments?** Section 4 Experiments
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
Appendix C Experimental Settings The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance.
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Appendix C Experimental Settings
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
Section 4 experiments C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
Not applicable. Left blank.
D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
No response.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? No response.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
No response.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
No response.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
No response. |
tan-etal-2023-class | Class-Adaptive Self-Training for Relation Extraction with Incompletely Annotated Training Data | https://aclanthology.org/2023.findings-acl.549 | Relation extraction (RE) aims to extract relations from sentences and documents. Existing relation extraction models typically rely on supervised machine learning. However, recent studies showed that many RE datasets are incompletely annotated. This is known as the false negative problem in which valid relations are falsely annotated as {`}no{\_}relation{'}. Models trained with such data inevitably make similar mistakes during the inference stage. Self-training has been proven effective in alleviating the false negative problem. However, traditional self-training is vulnerable to confirmation bias and exhibits poor performance in minority classes. To overcome this limitation, we proposed a novel class-adaptive re-sampling self-training framework. Specifically, we re-sampled the pseudo-labels for each class by precision and recall scores. Our re-sampling strategy favored the pseudo-labels of classes with high precision and low recall, which improved the overall recall without significantly compromising precision. We conducted experiments on document-level and biomedical relation extraction datasets, and the results showed that our proposed self-training framework consistently outperforms existing competitive methods on the Re-DocRED and ChemDisgene datasets when the training data are incompletely annotated. | # Class-Adaptive Self-Training For Relation Extraction With Incompletely Annotated Training Data
Qingyu Tan1, 2 Lu Xu1, 3 Lidong Bing† 1 **Hwee Tou Ng**2 1DAMO Academy, Alibaba Group 2Department of Computer Science, National University of Singapore 3Singapore University of Technology and Design
{qingyu.tan,lu.x,l.bing}@alibaba-inc.com
{qtan6,nght}@comp.nus.edu.sg
## Abstract
Relation extraction (RE) aims to extract relations from sentences and documents. Existing relation extraction models typically rely on supervised machine learning. However, recent studies showed that many RE datasets are incompletely annotated. This is known as the false negative problem in which valid relations are falsely annotated as *no_relation*. Models trained with such data inevitably make similar mistakes during the inference stage. Selftraining has been proven effective in alleviating the false negative problem. However, traditional self-training is vulnerable to confirmation bias and exhibits poor performance in minority classes. To overcome this limitation, we proposed a novel class-adaptive re-sampling self-training framework. Specifically, we resampled the pseudo-labels for each class by precision and recall scores. Our re-sampling strategy favored the pseudo-labels of classes with high precision and low recall, which improved the overall recall without significantly compromising precision. We conducted experiments on document-level and biomedical relation extraction datasets, and the results showed that our proposed self-training framework consistently outperforms existing competitive methods on the Re-DocRED and ChemDisgene datasets when the training data are incompletely annotated1.
## 1 Introduction
Relation extraction (RE) (Wang et al., 2019; Chia et al., 2022a) is an important yet highly challenging task in the field of information extraction (IE).
Compared with other IE tasks, such as named entity recognition (NER) (Xu et al., 2021), semantic role labeling (SRL) (Li et al., 2021), and aspectbased sentiment analysis (ABSA) (Li et al., 2018; Zhang et al., 2021b), RE typically has a significantly larger label space and requires graphical reasoning (Christopoulou et al., 2019). The complexity of the RE task inevitably increases the difficulty and cost of producing high-quality benchmark datasets for this task.
In recent years, several works that specifically focus on revising the annotation strategy and quality of existing RE datasets were conducted (Stoica et al., 2021; Alt et al., 2020; Tan et al., 2022b). For example, the DocRED (Yao et al., 2019) dataset is one of the most popular benchmarks for documentlevel relation extraction. This dataset is produced by the recommend-revise scheme with machine recommendation and human annotation. However, Huang et al. (2022) and Tan et al. (2022b) pointed out the false negative problem in the DocRED dataset, indicating that over 60% of the relation triples are not annotated. To provide a more reliable evaluation dataset for document-level relation extraction tasks, Huang et al. (2022) re-annotated 96 documents that are selected from the original development set of DocRED. In addition, Tan et al.
(2022b) developed the Re-DocRED dataset to provide a high-quality revised version of the development set of DocRED. The Re-DocRED dataset consists of a development set that contains 1,000 documents and a silver-quality training set that contains 3,053 documents. Nevertheless, both works on DocRED revision did not provide gold-quality datasets due to the high cost of annotating the relation triples for long documents. Learning from incompletely annotated training data is crucial and practical for relation extraction. Hence, in this work, we focused on improving the training process with incompletely annotated training data.
To tackle the problem of training with incompletely annotated datasets, prior works leveraged the self-training method to alleviate the detrimental effects of false negative examples (Feng et al.,
2018; Hu et al., 2021; Chen et al., 2021; Zhou
![1_image_0.png](1_image_0.png)
et al., 2023). However, self-training-based methods are highly susceptible to confirmation bias, that is, the erroneously predicted pseudo-labels are likely to deteriorate the model's performance in subsequent rounds of training (Arazo et al., 2020; Tarvainen and Valpola, 2017; Li et al., 2020a).
Furthermore, the label distribution of relation extraction task is highly imbalanced. Therefore, the predictions made by prior self-training methods are likely to be of the majority classes. Wei et al.
(2021) proposed a re-sampling strategy based on class frequencies to alleviate this problem in image classification. In this way, not all generated pseudo-labels will be used to update the training datasets. The pseudo labels of the minority classes have higher probabilities to be preserved than those of the frequent classes. However, such a sampling strategy does not specifically address the problems caused by the erroneously generated pseudo labels.
When a model is trained on incompletely annotated datasets, minority classes exhibit bad performance and frequent classes may have low recall scores, as shown in Figure 1. Merging pseudo labels with original labels of the training dataset without considering the correctness of the former potentially deteriorates performance in subsequent iterations.
In order to overcome confirmation bias in selftraining, we proposed a class-adaptive self-training
(**CAST**) approach that considers the correctness of the pseudo labels. Instead of sampling the pseudo labels based on class frequencies, we introduced a class-adaptive sampling strategy to determine how the generated pseudo labels should be preserved.
Specifically, we calculated the precision and recall scores of each class on the development set and used the calculated scores to compute the sampling probability of each class. Through such an approach, CAST can alleviate confirmation bias caused by erroneous pseudo labels. Our proposed approach preserves the pseudo labels from classes that have high precision and low recall scores and penalizes the sampling probability for the pseudo labels that belong to classes with high recall but low precision scores.
Our contributions are summarized as follows.
(1) We proposed CAST, an approach that considers the correctness of generated pseudo labels to alleviate confirmation bias in the self-training framework. (2) Our approach was evaluated with training datasets of different quality, and the experimental results demonstrated the effectiveness of our approach. (3) Although our approach is not specifically designed for favoring the minority classes, the minority classes showed more significant performance improvements than the frequent classes, which is a nice property as the problem of longtail performance is a common bottleneck for real applications.
## 2 Related Work
Neural Relation Extraction Deep neural models are successful in sentence-level and documentlevel relation extraction. Zhang et al. (2017) proposed position-aware attention to improve sentencelevel RE and published TACRED, which became a widely used RE dataset. Yamada et al. (2020) developed LUKE, which further improved the SOTA performance with entity pre-training and entity-aware attention. Chia et al. (2022b) proposed a data generation framework for zero-shot relation extraction.
However, most relations in real-world data can only be extracted based on inter-sentence information.
To extract relations across sentence boundaries, recent studies began to explore document-level RE.
As previously mentioned, Yao et al. (2019) proposed the popular benchmark dataset DocRED for document-level RE. Zeng et al. (2020) leveraged a double-graph network to model the entities and relations within a document. To address the multilabel problem of DocRE, Zhou et al. (2021) proposed using adaptive thresholds to extract all relations of a given entity pair. Zhang et al. (2021a)
developed the DocUNET model to reformulate document-level RE as a semantic segmentation task and used a U-shaped network architecture to improve the performance of DocRE. Tan et al. (2022a)
proposed using knowledge distillation and focal loss to denoise the distantly supervised data for DocRE and achieved great performance on the DocRED leaderboard. However, all preceding methods are based on a closed-world assumption (i.e., the entity pairs without relation annotation are negative instances). This assumption ignores the presence of false negative examples. Hence, even the abovementioned state-of-the-art methods may not perform well when the training data are incompletely annotated.
Denoising for Relation Extraction RE is susceptible to noise in the training data. Noisy data can be categorized into two types: false positives
(FPs) and false negatives (FNs). False positive examples are mainly caused by misalignment of knowledge bases. Xiao et al. (2020) proposed a denoising algorithm that filters FP examples in distantly supervised data. Wang et al. (2019) tackled the class-imbalance problem of RE and NER
by meta-learning. The false negative problem is also common in information extraction. Li et al.
(2020b); Xu et al. (2023) used simple negative sampling strategies to alleviate the detrimental effects of FN examples on NER. Most recently, Guo et al.
(2023) tackled the multi-label problem in RE by entropy minimization and supervised contrastive learning. Given that the FN problem is related to incomplete annotation, supplementing the annotation by self-training is a viable way to tackle this problem (Erkan et al., 2007; Sun et al., 2011; Chen et al., 2021; Hu et al., 2021). However, self-training is susceptible to confirmation bias; conventional self-training suffers from the problem of error propagation and makes overwhelming predictions for frequent classes. Prior research on semi-supervised image classification (Wei et al., 2021; He et al.,
2021) indicated that re-sampling of pseudo-labels can be beneficial to class-imbalanced self-training.
However, existing re-sampling strategies are dependent only on the frequencies of the classes and do not consider the actual performance of each class. Our method alleviates confirmation bias by employing a novel re-sampling strategy that considers the precision and recall of each class on the
![2_image_0.png](2_image_0.png)
development set. In this way, we can downsample the predictions for popular classes and maintain high-quality predictions for long-tail classes.
## 3 Methodology 3.1 Problem Definition
Document-level relation extraction (DocRE) is defined as follows: given a text T and a set of n entities {e1*, ..., e*n} appearing in the text, the objective of the document-level RE is to identify the relation type r ∈ C ∪ {*no_relation*} for each entity pair (ei, ej ). Note that ei and ej denote two different entities, and C is a predefined set of relation classes. The complexity of this task is quadratic in the number of entities, and the ratio of the NA
instances (*no_relation*) is very high compared with sentence-level RE. Therefore, the resulting annotated datasets are often incomplete. The setting of this work is to train a document-level RE model with an incompletely labeled training set, and then the model is evaluated on a clean evaluation dataset, such as Re-DocRED (Tan et al., 2022b).
We denote the training set as ST and the development set as SD. Two types of training data are used in this work, each representing a different annotation quality. The first type is the training split of the original DocRED data (Yao et al., 2019), which we refer to as bronze-level training data. This data is obtained by a recommend-revise scheme. Even though the annotation of this bronze level is precise, there are a significant number of missing triples in this dataset. On the other hand, the training set of the Re-DocRED dataset has added a considerable number of triples to the bronze dataset, though a small number of triples might still be missed. We refer to this Re-DocRED dataset as silver-quality training data.
## 3.2 Our Approach 3.2.1 Overview
The main objective of our approach is to tackle the RE problem when the training data ST is incompletely annotated. We propose a class-adaptive self-training (CAST) framework, as shown in Figure 2, to pseudo-label the potential false negative examples within the training set. First, we split the training set into N folds and train an RE model with N − 1 folds. The remaining fold STk is used for inference. Next, we use a small development set SD to evaluate the models and calculate the sampling probability for each relation class (Eq.
1). The predicted label set YTk is obtained by conducting inference on STk
. Then, we re-sample the predicted labels based on the computed probability, which is calculated based on the performance of each class. The re-sampled label set is denoted as Y
′
Tk
. Lastly, Y
′
Tk will be merged with the initial labels of STk
. The details of the proposed framework are discussed in the following subsections.
## 3.2.2 Self-Training
In traditional self-training, models are trained on a small amount of well-annotated data and pseudolabels are generated on unlabeled instances (Zhu and Goldberg, 2009). However, we do not have access to well-annotated training data, and our training data contains false negative examples.
Therefore, we need to construct an N-fold crossvalidation self-training system. Given a set of training documents ST with relation triplet annotation, these documents are divided into N folds. The first N − 1 folds will be used for training an RE
model. Then, the trained model will be used to generate pseudo-labels for the held out N-th fold.
The pseudo-labels will be merged with the original labels, and the merged data will be used to train a new model. The N-fold pseudo labeling process will be repeated for multiple rounds until no performance improvement is observed on the final RE
system. However, because the class distribution of the document-level RE task is highly imbalanced, pseudo-labeling may favor the popular classes during prediction. This inevitably introduces large confirmation bias to popular classes, which is similar to the "rich-get-richer" phenomenon (Cho and Roy, 2004).
## 3.2.3 Intuition
When the annotation of the training set is incomplete, the model trained on such data typically shows high precision and low recall scores for most of the classes. Figure 1 shows the precision and recall of each class of the model that is trained on the DocRED dataset and evaluated on the development set of Re-DocRED. Among the 96 classes, most of the classes obtain higher precision scores than recall scores. Only one class that has a higher recall score than precision score; some classes have 0 precision and recall scores. Given this empirical observation, boosting self-training performance by sampling more pseudo-labeled examples from the classes that have high precision and low recall is a good strategy because (1) the pseudo labels of such classes tend to have better quality and (2) the recall performance of these classes can be improved by adding true positive examples. For extreme cases in which a class has predictions that are all wrong (i.e.
its precision and recall are both 0), the logical action is to discard the corresponding pseudo-labels.
## 3.2.4 Class-Adaptive Self-Training (Cast)
As previously mentioned, traditional self-training suffers from confirmation bias, especially for RE
task that has a highly imbalanced class distribution. The pseudo-labels that are generated by such an approach tend to be biased toward the majority classes. To alleviate this problem, we propose a class-adaptive self-training framework that filters the pseudo-labels by the per-class performance. Unlike existing self-training re-sampling techniques (Wei et al., 2021; He et al., 2021) that take only the class frequencies into account, our framework samples pseudo-labels based on their performance on the development sets.
First, we evaluate the model for pseudo-labeling on the development set SD and calculate the precision P and recall R for each class. Then, we define our sampling probability µi for each relation class i as:
µi = [Pi ∗ (1 − Ri)]β(1)
where Pi and Ri are the precision and recall scores of class i, respectively, and β is a hyper-parameter that controls the smoothness of the sampling rates.
Note that all pseudo labels will be used when the sampling probability equals to 1. Conversely, all the pseudo labels will be discarded when the sampling probability equals to 0. If the recall of a specific class is very small and its precision is close to 1, the sampling rate of the class will be closer to 1. On the contrary, if the recall for a certain class is high, the sampling rate of the class will
## Algorithm 1 Class-Adaptive Self-Training
Input:
M: Number of rounds ![4_image_0.png](4_image_0.png)
θ
∗ ← evaluate {θ
∗
1
, ..., θ∗M} on SD
return θ
∗
| DocRE | DocRED Re-DocRED | Re-DocRED | | |
|--------------------------|--------------------|-------------|-------------|------|
| Train | Train | Dev | Test | |
| # Documents | 3,053 | 3,053 | 500 | 500 |
| Avg. # Entities per Doc | 19.4 | 19.4 | 19.4 | 19.6 |
| Avg. # Triples per Doc | 12.5 | 28.1 | 34.6 | 34.9 |
| Avg. # Sentences per Doc | 7.9 | 7.9 | 8.2 | 7.9 |
| # NA rate | 97.0% | 94.3% | 93.1% 93.1% | |
| BioRE | ChemDisGene | | | |
| Train | Dev | Test | | |
| # Documents | 76,544 | 1,480 | 523 | |
| Avg. # Words | 196.6 | 237.3 | 235.6 | |
| Avg. # Entities per Doc | 7.6 | 9.0 | 10.0 | |
| Avg. # Triples per Doc | 2.2 | 2.2 | 7.2 | |
| Avg. # Sentences per Doc | 12.6 | 14.0 | 13.2 | |
| # NA rate | 96.8% | 97.7% 93.8% | | |
Table 1: Dataset statistics of our experiments for DocRE
and BioRE.
be low. In this way, our method is able to alleviate confirmation bias toward the popular classes, which typically have higher recall. The pseudocode of our proposed CAST framework is provided in Algorithm 1.
## 4 Experiments 4.1 Experimental Setup
Our proposed CAST framework can be applied with any backbone RE model. For the experiment on DocRED, we adopted the ATLOP (Zhou et al., 2021) model as the backbone model, which is a well-established baseline for the DocRE task.
We used BERT-Base (Devlin et al., 2019) and RoBERTa-Large (Liu et al., 2019) as the encoders.
In addition to DocRED, we conduct experiments on ChemDisGene (Zhang et al., 2022), a DocRE
dataset for biomedical relation extraction (BioRE).
We used the PubMedBERT (Gu et al., 2021) encoder for the BioRE experiments. We use the development set of Re-DocRED in the document-level RE experiments because the Re-DocRED dataset has a high quality. Moreover, we use the distantlysupervised development set of ChemDisGene for the BioRE experiments. Our final models are evaluated on the test sets of Re-DocRED and ChemDisGene. Both of the test sets are human-annotated and have high quality, the statistics of the datasets can be found in Table 1.
For the hyper-parameters, we set M = 5 (i.e.,
the iteration round in Algorithm 1) and N = 5 for the self-training-based methods because these methods typically reach the highest performance before the fifth round and five-fold training is the conventional practice for cross validation. For β, we grid searched β ∈ {0.0, 0.25, 0.5, 0.75, 1}. For evaluation, we used micro-averaged F1 score as the evaluation metric. We also evaluate the F1 score for frequent classes and long-tail classes, denoted as Freq_F1 and LT_F1, respectively. For the DocRED
dataset, the frequent classes include the top 10 most popular relation types2in the label space; the rest of the classes are categorized as the long-tail classes.
Following Yao et al. (2019), we use an additional metric Ign_F1 on the DocRE task. This metric calculates the F1 score for the triples that do not appear in the training data.
## 4.2 Baselines
Vanilla Baselines This approach trains existing state-of-the-art RE models on incompletely annotated data and serves as our baseline method. As stated earlier, we use **ATLOP** as the backbone model for the DocRE experiments. In addition to ATLOP, we compare **GAIN** (Zeng et al., 2020),
DocuNET (Zhang et al., 2021a), and **KD-DocRE**
(Tan et al., 2022a) as our vanilla baselines. These methods are top-performing methods on the ReDocRED dataset. However, similar to ATLOP, the performances of these models deteriorate significantly under the incomplete annotation setting.
Negative Sampling (NS) (Li et al., 2020b) This method tackles the incomplete annotation problem through negative sampling. To alleviate the effects of false negatives, this method randomly selects partial negative samples for training. Such an approach can help to alleviate the detrimental effect of the false negative problem.
2They cover 59.4% of the positive instances.
| Model | P | R | F1 | Ign_F1 | Freq_F1 | LT_F1 |
|-------------------|--------------|-------------|-------------|-------------|-------------|-------------|
| GAIN† | 88.11 | 30.98 | 45.82 | 45.57 | - | - |
| ATLOP | 88.39 ±0.39 | 28.87 ±0.34 | 43.52 ±0.25 | 43.28 ±0.24 | 45.49 ±0.24 | 40.46 ±0.28 |
| SSR-PU-ATLOP† | 65.10 ±0.90 | 50.53 ±0.89 | 56.84 ±0.72 | 55.45 ±0.59 | 60.21 ±0.64 | 51.84 ±0.82 |
| NS-ATLOP | 74.79 ±0.31 | 46.33 ±0.34 | 57.22±0.25 | 56.28 ±0.21 | 59.23 ±0.23 | 54.13 ±0.24 |
| VST-ATLOP | 63.53 ±1.17 | 56.41 ±0.86 | 59.56 ±0.16 | 58.03 ±0.25 | 63.17 ±0.46 | 55.61 ±0.25 |
| CREST-ATLOP | 69.34 ±1.55 | 50.58 ±1.35 | 58.48 ±0.30 | 57.33 ±0.21 | 60.31 ±0.64 | 56.33 ±0.15 |
| CAST-ATLOP (Ours) | 70.49 ±1.12 | 54.34 ±1.07 | 61.36 ±0.67 | 60.16 ±0.79 | 63.66 ±0.44 | 58.12 ±0.36 |
| DocuNET† | 94.16 | 30.42 | 45.99 | 45.88 | - | - |
| KD-DocRE∗ | 92.08 | 32.07 | 47.57 | 47.32 | - | - |
| ATLOP | 92.62 ±0.35 | 33.61 ±0.48 | 49.32 ±0.29 | 49.16 ±0.27 | 51.49 ±0.51 | 45.36 ±0.43 |
| SSR-PU-ATLOP† | 65.71 ± 0.28 | 57.01 ±0.47 | 61.05 ±0.21 | 59.48 ±0.18 | 62.85 ±0.10 | 58.19 ±0.54 |
| NS-ATLOP | 68.39 ±2.23 | 56.05 ±0.98 | 61.58 ±0.48 | 60.43 ±0.55 | 65.35 ±0.12 | 57.16 ±0.44 |
| VST-ATLOP | 62.85 ±0.48 | 63.58 ±0.62 | 63.21 ±0.39 | 61.83 ±0.41 | 65.68 ±0.43 | 60.09 ±0.45 |
| CREST-ATLOP | 73.09 ±0.79 | 55.06 ±0.86 | 62.81 ±0.35 | 61.90 ±0.33 | 63.71 ±0.41 | 61.75 ±0.49 |
| CAST-ATLOP (Ours) | 72.83 ±0.50 | 59.22 ±0.61 | 65.32 ±0.22 | 64.25 ±0.15 | 66.99 ±0.29 | 63.05 ±0.11 |
Vanilla Self-Training (VST) (Peng et al., 2019; Jie et al., 2019) VST is a variant of simple selftraining. In this approach, models are trained with N folds, and all pseudo-labels are directly combined with the original labels. Then, a new model is trained on the datasets with combined labels.
Class Re-balancing Self-Training (CREST)
(Wei et al., 2021) This algorithm is the most advanced baseline of class-imbalanced semisupervised training, re-samples the pseudo-labels generated by models. However, this sampling strategy only considers the frequencies of the training samples, whereas our CAST considers the per-class performance on the development set.
SSR Positive Unlabeled Learning (SSR-PU)
(Wang et al., 2022) This method applies a positive unlabeled learning algorithm for DocRE under the incomplete annotation scenario. SSR-PU utilizes a shift-and-squared ranking (SSR) loss to accommodate the distribution shifts for the unlabeled examples.
BioRE Baselines For the BioRE experiments, we compare our methods with Biaffine Relation Attention Network **BRAN** (Verga et al., 2018) and PubmedBERT (Gu et al., 2021), which is a pretrained language model in the biomedical domain.
## 4.3 Experimental Results
Table 2 presents the experimental results for the document-level RE. The experimental results on the original DocRED dataset show that the F1 score of the ATLOP-RoBERTa model is only 49.32. This Table 3: Experimental results on the test set of ReDocRED when trained on silver quality data.
| Model | P | R | F1 | Ign_F1 Freq_F1 LT_F1 | | |
|-------------------------------|-------------|-------------------|-------|------------------------|-------|-------|
| Re-DocRED Training Data ATLOP | 86.70 62.46 | 72.61 | 71.86 | 75.92 | 67.46 | |
| NS-ATLOP | 77.63 | 69.17 | 73.16 | 72.92 | 77.28 | 67.59 |
| BERT | VST-ATLOP | 72.77 75.55 74.14 | 72.48 | 78.47 | 68.13 | |
| CREST-ATLOP | 75.94 | 72.47 | 74.17 | 72.77 | 77.93 | 68.68 |
| CAST-ATLOP (Ours) | 76.59 | 72.84 74.67 | 73.32 | 78.53 | 69.34 | |
| Model | P | R | F1 |
|-------------------|--------------|-------------|-------------|
| BRAN† | 41.8 | 26.6 | 32.5 |
| PubMedBERT† | 64.3 | 31.3 | 42.1 |
| BRAN† | 70.9 | 31.6 | 43.8 |
| ATLOP∗ | 76.17 ± 0.36 | 29.70 ±0.54 | 42.73 ±0.36 |
| SSR-PU-ATLOP∗ | 54.27 ±0.23 | 43.93 ±0.40 | 48.56 ±0.32 |
| NS-ATLOP | 71.54 ±0.50 | 35.52 ±0.29 | 47.47 ±0.37 |
| VST-ATLOP | 54.92 ±0.42 | 48.39 ±0.58 | 51.24 ±0.30 |
| CREST-ATLOP | 59.42 ±1.63 | 42.12 ±0.65 | 49.28 ±0.21 |
| CAST-ATLOP (Ours) | 66.68 ±2.22 | 45.48 ±1.27 | 54.03 ±0.17 |
| PubMedBERT | | | |
finding can be ascribed to the low recall score of this method, as shown in Figure 1. NS significantly improves the performance compared with the baseline. After comparing vanilla self-training with the baseline, we observe that although the recall score is the highest for this method, its precision is significantly reduced. We observe similar trends for all self-training based methods (i.e., VST, CREST, and CAST), the recall improved at the expense of precision. Notably, the performance of the simple NS
baseline exceeds the performance of SSR-PU when
![6_image_0.png](6_image_0.png)
trained on the DocRED data. Our proposed CAST
framework consistently outperforms the competitive baselines and achieves the highest performance for both BERT and RoBERTa encoders. Our bestperforming model outperforms the baseline by 16.0 F1 (49.32 vs. 65.32). Moreover, the CAST obtains the highest precision score among the three selftraining methods, thereby showing that the examples added by our class-adaptive sampling strategy have better quality.
The experimental results on the test set of ReDocRED (Table 3) depict that the baseline F1 score is significantly improved due to the large gain in the recall score when the training data are switched from bronze-quality to silver-quality. Compared with baseline approaches, our CAST achieves consistent performance improvements in terms of F1 score. The F1 difference between the baseline and our CAST is 2.06 (72.61 vs. 74.67). However, the performance gap between our approach and the baseline is smaller than the corresponding gap when both are trained with DocRED. This indicates that the performance of existing state-of-theart models for document-level RE is decent when high-quality training data is provided but declines when the training data are incompletely annotated.
This finding verifies the necessity of developing better self-training techniques because preparing high-quality training data is costly.
Table 4 presents the experiments on biomedical RE. Our CAST model consistently outperforms strong baselines, exceeding the performance of SSR-PU by 5.47 F1 (54.03 vs. 48.56).
On the basis of the results of DocRE and BioRE
experiments, self-training-based methods aim to improve recall and consistently improve overall performance when the training data is incompletely annotated. However, our CAST maintains a better balance between increasing recall and maintaining precision.
![7_image_0.png](7_image_0.png)
![7_image_1.png](7_image_1.png)
## 5 Analysis 5.1 Comparisons Of Self-Training Strategies
To further compare different self-training strategies, we illustrate the detailed performance with respect to the self-training rounds in Figure 3. The reported scores are on the development set of ReDocRED and the training data is from DocRED.
Figure 3b shows that all self-training-based methods generally have improving recall scores as the number of self-training rounds increases. On the contrary, the precision scores decline. From Figure 3c, we observe that VST outperforms CREST
and CAST in the first two rounds. This is mainly because VST does not perform re-sampling on the pseudo-labels and it utilizes all pseudo-labels. At the beginning stage, these labels are of relatively good quality. However, the performance of VST drops after the second round of pseudo-labeling because as the number of rounds increases, the increase in the number of false positive examples in the pseudo-labels outweighs the benefit. Meanwhile, the performance gains of CREST and CAST
are relatively stable, and both methods produce their best-performing models at round 4. Compared with CREST, our CAST maintains higher precision scores as the number of rounds increases
(Figure 3a).
We also assess the F1 performance of the frequent and long-tail classes with respect to the number of rounds, and the comparison is shown in Figure 5. The results reveal that VST suffers greatly from confirmation bias on both frequent and LT
classes, i.e., Figure 5a and Figure 5b, and its performance becomes very poor in round 5. In Figure 5b, we can see that the performance gains of CAST is stable across the training rounds and achieved the best LT performance.
## 5.2 Detailed Analysis Of Cast
In this section, we analyze the performance of our CAST framework in detail. We first plot the precision and recall scores of VST and CAST for all the classes in Figure 4, where the experimental results are obtained by training with the DocRED
dataset. The formulation of Figure 4 is the same as Figure 1. Figure 4a demonstrates that VST significantly improves the recall scores of many classes compared with the baseline in Figure 1. However, the improvements in recall scores are accompanied by a large decline in precision scores. This observation shows that the pseudo-labels in VST contain a considerable amount of erroneous predictions.
By contrast, our CAST framework is able to better maintain the precision scores for most of the classes. The recall scores for most of the classes are significantly higher compared with those of the baseline. This observation justifies the improvements of the overall F1 scores in Table 2 despite the lower recall of CAST model than VST.
## 5.3 Effect Of Β
We further analyze the effect of the sampling coefficient β on our CAST framework in Figure 6, the experiments are conducted by training with the DocRED dataset. When β value is small, CAST
behaves like the VST model, exhibits some F1 improvements in the first few rounds, and demonstrates diminishing positive effects in the later rounds. Larger β leads to better overall improvements and smaller fluctuations across different rounds. However, because the term [Pi ∗ (1 − Ri)]
in Eq. 1 is smaller than 1, higher β may lead to lower sampling rates for all the classes. As a result, the convergence time of self-training may be longer.
The interpretation for other values of β is provided in the Appendix C.
## 6 Conclusions And Future Work
In this work, we study the under-explored problem of learning from incomplete annotation in relation extraction. This problem is highly important in real-world applications. We show that existing state-of-the-art models suffer in this scenario. To tackle this problem, we proposed a novel CAST
framework. We conducted experiments on DocRE
and BioRE tasks, and experimental results show that our method consistently outperforms competitive baselines on both tasks. For future work, we plan to extend our framework to the distant supervision scenario. From the domain perspective, we plan to apply our framework to image classification tasks.
## 7 Limitations
The proposed CAST framework carries the same limitation of self-training-based methods, which is the requirement for multiple rounds and multiple splits of training. As a result, the GPU computing hours of CAST are longer than those of vanilla baselines and NS.
## References
Christoph Alt, Aleksandra Gabryszak, and Leonhard Hennig. 2020. TACRED revisited: A thorough evaluation of the TACRED relation extraction task. In Proceedings of ACL.
Eric Arazo, Diego Ortego, Paul Albert, Noel E
O'Connor, and Kevin McGuinness. 2020. Pseudolabeling and confirmation bias in deep semisupervised learning. In *Proceedings of IJCNN*.
Jhih-wei Chen, Tsu-Jui Fu, Chen-Kang Lee, and WeiYun Ma. 2021. H-FND: hierarchical false-negative denoising for distant supervision relation extraction. In *Findings of ACL*.
Yew Ken Chia, Lidong Bing, Sharifah Mahani Aljunied, Luo Si, and Soujanya Poria. 2022a. A dataset for hyper-relational extraction and a cube-filling approach. In *Proceedings of EMNLP*.
Yew Ken Chia, Lidong Bing, Soujanya Poria, and Luo Si. 2022b. RelationPrompt: Leveraging prompts to generate synthetic data for zero-shot relation triplet extraction. In *Findings of ACL*.
Junghoo Cho and Sourashis Roy. 2004. Impact of search engines on page popularity. In *Proceedings* of WWW, page 20–29.
Fenia Christopoulou, Makoto Miwa, and Sophia Ananiadou. 2019. Connecting the dots: Document-level neural relation extraction with edge-oriented graphs.
In *Proceedings of EMNLP*.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In *Proceedings of NAACL*.
Güne¸s Erkan, Arzucan Özgür, and Dragomir R. Radev.
2007. Semi-supervised classification for extracting protein interaction sentences using dependency parsing. In *Proceedings of EMNLP-CoNLL*.
Jun Feng, Minlie Huang, Li Zhao, Yang Yang, and Xiaoyan Zhu. 2018. Reinforcement learning for relation classification from noisy data. In *Proceedings of* AAAI.
Yu Gu, Robert Tinn, Hao Cheng, Michael Lucas, Naoto Usuyama, Xiaodong Liu, Tristan Naumann, Jianfeng Gao, and Hoifung Poon. 2021. Domain-specific language model pretraining for biomedical natural language processing. *ACM Transactions on Computing* for Healthcare (HEALTH).
Jia Guo, Stanley Kok, and Lidong Bing. 2023. Towards integration of discriminability and robustness for document-level relation extraction. In *Proceedings of EACL*.
Ju He, Adam Kortylewski, Shaokang Yang, Shuai Liu, Cheng Yang, Changhu Wang, and Alan Yuille. 2021. Rethinking re-sampling in imbalanced semi-supervised learning. *arXiv preprint* arXiv:2106.00209.
Xuming Hu, Chenwei Zhang, Fukun Ma, Chenyao Liu, Lijie Wen, and Philip S. Yu. 2021. Semi-supervised relation extraction via incremental meta self-training.
In *Findings of EMNLP*.
Quzhe Huang, Shibo Hao, Yuan Ye, Shengqi Zhu, Yansong Feng, and Dongyan Zhao. 2022. Does recommend-revise produce reliable annotations? an analysis on missing instances in DocRED. In *Proceedings of ACL*.
Zhanming Jie, Pengjun Xie, Wei Lu, Ruixue Ding, and Linlin Li. 2019. Better modeling of incomplete annotations for named entity recognition. In *Proceedings* of NAACL.
Junnan Li, Richard Socher, and Steven C.H. Hoi. 2020a.
Dividemix: Learning with noisy labels as semisupervised learning. In *Proceedings of ICLR*.
Xin Li, Lidong Bing, Piji Li, Wai Lam, and Zhimou Yang. 2018. Aspect term extraction with history attention and selective transformation. In Proceedings of IJCAI.
Yangming Li, Shuming Shi, et al. 2020b. Empirical analysis of unlabeled entity problem in named entity recognition. In *Proceedings of ICLR*.
Zuchao Li, Hai Zhao, Shexia He, and Jiaxun Cai. 2021.
Syntax role for neural semantic role labeling. *Computational Linguistics*, 47(3).
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019.
Roberta: A robustly optimized bert pretraining approach. *arXiv preprint arXiv:1907.11692*.
Minlong Peng, Xiaoyu Xing, Qi Zhang, Jinlan Fu, and Xuanjing Huang. 2019. Distantly supervised named entity recognition using positive-unlabeled learning.
In *Proceedings of the 57th Annual Meeting of the* Association for Computational Linguistics.
George Stoica, Emmanouil Antonios Platanios, and Barnabás Póczos. 2021. Re-tacred: addressing shortcomings of the tacred dataset. In Proceedings of AAAI.
Ang Sun, Ralph Grishman, and Satoshi Sekine. 2011.
Semi-supervised relation extraction with large-scale word clustering. In *Proceedings of ACL*.
Qingyu Tan, Ruidan He, Lidong Bing, and Hwee Tou Ng. 2022a. Document-level relation extraction with adaptive focal loss and knowledge distillation. In Findings of ACL.
Qingyu Tan, Lu Xu, Lidong Bing, and Hwee Tou Ng.
2022b. Revisiting docred–addressing the overlooked false negative problem in relation extraction. In *Proceedings of EMNLP*.
Antti Tarvainen and Harri Valpola. 2017. Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results. In *Proceedings of NIPS*.
Patrick Verga, Emma Strubell, and Andrew McCallum.
2018. Simultaneously self-attending to all mentions for full-abstract biological relation extraction. In Proceedings of NAACL.
Ye Wang, Xinxin Liu, Wenxin Hu, and Tao Zhang. 2022.
A unified positive-unlabeled learning framework for document-level relation extraction with different levels of labeling. In *Proceedings of EMNLP*.
Zihao Wang, Kwunping Lai, Piji Li, Lidong Bing, and Wai Lam. 2019. Tackling long-tailed relations and uncommon entities in knowledge graph completion. In *Proceedings of EMNLP*.
Chen Wei, Kihyuk Sohn, Clayton Mellina, Alan Yuille, and Fan Yang. 2021. Crest: A class-rebalancing selftraining framework for imbalanced semi-supervised learning. In *Proceedings of CVPR*.
Chaojun Xiao, Yuan Yao, Ruobing Xie, Xu Han, Zhiyuan Liu, Maosong Sun, Fen Lin, and Leyu Lin.
2020. Denoising relation extraction from documentlevel distant supervision. In *Proceedings EMNLP*.
Lu Xu, Lidong Bing, and Wei Lu. 2023. Better sampling of negatives for distantly supervised named entity recognition. In *Findings of ACL*.
Lu Xu, Zhanming Jie, Wei Lu, and Lidong Bing. 2021.
Better feature integration for named entity recognition. In *Proceedings of NAACL*.
Ikuya Yamada, Akari Asai, Hiroyuki Shindo, Hideaki Takeda, and Yuji Matsumoto. 2020. LUKE: Deep contextualized entity representations with entityaware self-attention. In *Proceedings of EMNLP*.
Yuan Yao, Deming Ye, Peng Li, Xu Han, Yankai Lin, Zhenghao Liu, Zhiyuan Liu, Lixin Huang, Jie Zhou, and Maosong Sun. 2019. DocRED: a large-scale document-level relation extraction dataset. In *Proceedings of ACL*.
Shuang Zeng, Runxin Xu, Baobao Chang, and Lei Li.
2020. Double graph based reasoning for documentlevel relation extraction. In *Proceedings of EMNLP*,
Online.
Dongxu Zhang, Sunil Mohan, Michaela Torkar, and Andrew McCallum. 2022. A distant supervision corpus for extracting biomedical relationships between chemicals, diseases and genes. In Proceedings of LREC.
Ningyu Zhang, Xiang Chen, Xin Xie, Shumin Deng, Chuanqi Tan, Mosha Chen, Fei Huang, Luo Si, and Huajun Chen. 2021a. Document-level relation extraction as semantic segmentation. In *Proceedings of* IJCAI.
Wenxuan Zhang, Yang Deng, Xin Li, Yifei Yuan, Lidong Bing, and Wai Lam. 2021b. Aspect sentiment quad prediction as paraphrase generation. In *Proceedings of EMNLP*.
Yuhao Zhang, Victor Zhong, Danqi Chen, Gabor Angeli, and Christopher D. Manning. 2017. Position-aware attention and supervised data improve slot filling. In Proceedings of EMNLP, pages 35–45.
Ran Zhou, Xin Li, Lidong Bing, Erik Cambria, and Chunyan Miao. 2023. Improving self-training for cross-lingual named entity recognition with contrastive and prototype learning. In Proceedings of ACL.
Wenxuan Zhou, Kevin Huang, Tengyu Ma, and Jing Huang. 2021. Document-level relation extraction with adaptive thresholding and localized context pooling. In *Proceedings of AAAI*.
Xiaojin Zhu and Andrew B. Goldberg. 2009. Introduction to semi-supervised learning. In Introduction to Semi-Supervised Learning.
## A Sentence-Level Relation Extraction
Besides document-level RE, we also examined our method for sentence-level relation extraction
(SentRE), the task is a simplified version of its document-level counterpart. Compared to the DocRE setting, there are two main differences for
![10_image_0.png](10_image_0.png)
sentence-level RE. First, there are exactly n = 2 entities for each SentRE example. Second, there is only one relation type for an entity pair in SentRE,
whereas there can be multiple relation types for DocRE. Again, we used two types of training data for the SentRE task. The first set of training data is from the original TACRED dataset, and the second set of training data is from Re-TACRED. Compared to the revision of Re-DocRED, which only resolved the false negative problem3, the revision of Re-TACRED not only resolved the false negative problem but also relabeled the false positive instances.
The experimental results on SentRE are shown in Table 5. For the TACRED dataset, the top 5 classes4are included in the frequent classes. We can also see that when training with bronze-quality data (i.e., the upper section), our proposed CAST
still achieves the best performance in terms of F1 score. This observation shows that our method is effective across different relation extraction scenarios and backbone models. On the other hand, we can observe that the baseline model achieves the highest F1 score when training with the Re-TACRED
dataset (i.e., the lower section). As mentioned in the section of problem definition, the Re-TACRED training set has resolved the false negative and false positive problems of TACRED. Therefore, by simply using all training samples of Re-TACRED, the baseline approach achieves the best F1. It is worth noting that our CAST is very robust and does not hurt the performance, i.e., achieving slightly worse F1 but slightly better recall compared with the baseline.
| Model | P | R | F1 | Freq_F1 LT_F1 | |
|----------------------------------------------|-------|-------|-------|-----------------|-------|
| TACRED Training Data Baseline 80.64 40.44 | 53.87 | 70.62 | 36.43 | | |
| NS | 62.37 | 53.96 | 57.86 | 74.49 | 40.57 |
| VST | 67.24 | 52.83 | 59.17 | 80.64 | 47.25 |
| CREST | 67.92 | 52.48 | 59.21 | 80.36 | 47.64 |
| CAST (Ours) | 73.33 | 51.03 | 60.18 | 81.12 | 48.75 |
| Re-TACRED Training Data Baseline 88.01 87.82 | 87.91 | 89.21 | 87.37 | | |
| NS | 85.44 | 88.56 | 86.97 | 88.75 | 86.24 |
| VST | 83.71 | 89.82 | 86.65 | 87.94 | 85.99 |
| CREST | 86.46 | 88.45 | 87.45 | 88.64 | 86.97 |
| CAST (Ours) | 87.62 | 87.96 | 87.78 | 89.14 | 87.31 |
## B Hyper-Parameters Of The Baselines
In this section, we report the hyper-parameters of the baseline experiments. For the negative sampling experiments, we used sampling rate γ = 0.1 for the DocRED experiment, γ = 0.5 for TACRED
experiment and γ = 0.7 for the Re-TACRED and Re-DocRED experiments. γ is searched from γ ∈ {0.1, 0.3, 0.5, 0.7, 0.9}.
From CREST (Wei et al., 2021), the classes are first ranked by their frequencies, and the sampling rate for class i is calculated as:
$$\mu_{i}=(\frac{X_{|C|+1-i}}{X_{1}})^{\alpha}\qquad\qquad(2)$$
where X1 is the count of the most frequent class among the positive classes. We set the power α = 0.33 as reported in their paper. For all the self-training-based experiments (VST, CREST, and CAST), we trained with 10 epochs per fold. All our experiments were run on a NVIDIA-V100 GPU.
| Model | P | R | F1 | Ign_F1 Freq_F1 LT_F1 | | | |
|-----------------------------------------------------------------|-------------------|-------------|-------|------------------------|-------|-------|-------|
| DocRED Training Data with Incomplete SD ATLOP 88.39 28.87 43.52 | 43.28 | 45.49 | 40.46 | | | | |
| SSR-PU | 70.42 | 46.67 | 56.14 | 55.21 | 59.38 | 49.24 | |
| BERT | NS-ATLOP | 55.98 | 55.63 | 55.78 | 53.90 | 58.73 | 51.92 |
| VST-ATLOP | 63.03 51.60 56.71 | 55.26 | 60.75 | 51.52 | | | |
| CREST-ATLOP | 72.83 | 47.81 | 57.72 | 56.71 | 59.05 | 54.82 | |
| CAST-ATLOP (Ours) | 70.97 | 50.70 59.14 | 58.03 | 61.20 | 56.22 | | |
## C Experiments On Larger Β
In Figure 7, we show the experimental results when β is larger than 1.0. Increasing β inevitably reduces the sampling probability for all the classes, which is more conservative. Therefore, larger β tends to have higher precision scores and lower recall scores. From Figure 7, we see that the optimal round for F1 scores is 4 for β = 1.0 and 5 for β = 1.25. When β > 1.5, the F1 score may not reach the optimal point before the 6th round. Since CAST would require training MN times, larger β may lead to significantly longer computation time to reach the optimal F1 score.
## D Experiments With Incomplete Sd
In this section, we conducted experiments on DocRED with a development set of lower quality.
Specifically, we used SD from the DocRED dataset instead of Re-DocRED. The experiment results are shown in Table 6. We can see that the over performances of most methods were decreased. This observation showed the importance of a high-quality development set when training with incomplete data. Nevertheless, our CAST model still achieves the best overall performance among the compared methods.
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
✓ A1. Did you describe the limitations of your work?
7
A2. Did you discuss any potential risks of your work?
Not applicable. Left blank.
✓ A3. Do the abstract and introduction summarize the paper's main claims?
1
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✓ **Did You Use Or Create Scientific Artifacts?** Section 4
✓ B1. Did you cite the creators of artifacts you used?
section 4
✗ B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
We only used open-source scientific data in this paper.
✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
Left blank.
✗ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
We only used open-source scientific data in this paper.
B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
Not applicable. Left blank.
B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
Not applicable. Left blank.
## C ✓ **Did You Run Computational Experiments?** 4
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
Appendix B
The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance.
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Section 4 and Appendix B
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
Section 4
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
Section 4 D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
No response.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? No response.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
No response.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
No response.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
No response. |
wannasuphoprasit-etal-2023-solving | Solving Cosine Similarity Underestimation between High Frequency Words by $\ell_2$ Norm Discounting | https://aclanthology.org/2023.findings-acl.550 | Cosine similarity between two words, computed using their contextualised token embeddings obtained from masked language models (MLMs) such as BERT has shown to underestimate the actual similarity between those words CITATION.This similarity underestimation problem is particularly severe for high frequent words. Although this problem has been noted in prior work, no solution has been proposed thus far. We observe that the $\ell_2$ norm of contextualised embeddings of a word correlates with its log-frequency in the pretraining corpus.Consequently, the larger $\ell_2$ norms associated with the high frequent words reduce the cosine similarity values measured between them, thus underestimating the similarity scores.To solve this issue, we propose a method to \textit{discount} the $\ell_2$ norm of a contextualised word embedding by the frequency of that word in a corpus when measuring the cosine similarities between words.We show that the so called \textit{stop} words behave differently from the rest of the words, which require special consideration during their discounting process.Experimental results on a contextualised word similarity dataset show that our proposed discounting method accurately solves the similarity underestimation problem.An anonymized version of the source code of our proposed method is submitted to the reviewing system. | # Solving Cosine Similarity Underestimation Between High Frequency Words By ℓ2 **Norm Discounting**
Saeth Wannasuphoprasit♠ **Yi Zhou**♢
University of Liverpool♠, Cardiff University♢, Amazon♣
{s.wannasuphoprasit,danushka}@liverpool.ac.uk [email protected] Danushka Bollegala♣,♠
## Abstract
Cosine similarity between two words, computed using their contextualised token embeddings obtained from masked language models
(MLMs) such as BERT has shown to underestimate the actual similarity between those words (Zhou et al., 2022). This similarity underestimation problem is particularly severe for highly frequent words. Although this problem has been noted in prior work, no solution has been proposed thus far. We observe that the ℓ2 norm of contextualised embeddings of a word correlates with its log-frequency in the pretraining corpus. Consequently, the larger ℓ2 norms associated with the highly frequent words reduce the cosine similarity values measured between them, thus underestimating the similarity scores. To solve this issue, we propose a method to *discount* the ℓ2 norm of a contextualised word embedding by the frequency of that word in a corpus when measuring the cosine similarities between words. We show that the so called *stop* words behave differently from the rest of the words, which require special consideration during their discounting process. Experimental results on a contextualised word similarity dataset show that our proposed discounting method accurately solves the similarity underestimation problem.
## 1 Introduction
Cosine similarity is arguably the most popular word similarity measure used in numerous natural language processing (NLP) tasks, such as question answering (QA), information retrieval (IR) and machine translation (MT) (Echizen-ya et al., 2019; Oniani and Wang, 2020; Kim et al., 2022; Hanifi et al., 2022). First, a word is represented by a vector
(aka *embedding*) and then the similarity between two words is computed as the cosine of the angle between the corresponding vectors (Rahutomo et al., 2012). Despite the good performance of cosine similarity as a similarity measure in various downstream tasks, Zhou et al. (2022) showed that it systematically underestimates the true similarity between highly frequent words, when computed using contextualised word embeddings obtained from MLMs such as BERT (Devlin et al., 2018).
Compared to the problem of estimating similarity between highly frequent words, the opposite problem of estimating the similarity between (or involving) rare (low frequency) words has received greater attention, especially in the scope of static word embeddings (Levy and Goldberg, 2014; Hellrich and Hahn, 2016; Mimno and Thompson, 2017; Wendlandt et al., 2018). If a word is rare in a corpus, we might not have a sufficiently large number of contexts containing that word to learn an accurate embedding for it. This often leads to unreliable similarity estimations between words and has undesirable implications in downstream tasks such as the detection of analogies and social biases (Ethayarajh et al., 2019a,b).
On the other hand, Zhou et al. (2022) studied the impact of frequency on contextualised word embeddings and showed that the cosine similarity between highly frequent words are systematically underestimated. Unlike in the previously discussed low frequency word scenario, we do have adequate contexts to learn an accurate semantic representation for highly frequent words. Therefore, it might appear surprising at first that cosine similarity cannot be correctly estimated even for the highly frequent words. Zhou et al. (2021) show that the diversity (measured by the volume of the bounding hypersphere) of the contextualised embeddings of a target word, computed from multiple contexts containing the word, increases with the frequency of that word. They provide an explanation that holds true only for 2-dimensional embeddings, which relates diversity to the underestimation of cosine similarity. Unfortunately, this explanation does not extend to the high dimensional embeddings used in practice by the NLP community (e.g. BERT
token embeddings are typically more than 768 di8644
![1_image_0.png](1_image_0.png)
mensional). More importantly, to the best of our knowledge, no solution has been proposed in the literature to address the cosine similarity underestimation problem associated with the highly frequent words.
In prior work, the ℓ2 norm of a static word embedding has been shown to linearly correlate with the log-frequency of that word (Arora et al., 2016; Bollegala et al., 2018). On the other hand, we empirically study the ℓ2 norm of the contextualised embedding of a word w averaged over all of its contexts, and find that it too approximately linearly correlates with the log-frequency of w in the corpus used to pretrain the MLM. Recall that the cosine similarity is defined as the inner-product between two embeddings, divided by the ℓ2 norm of those embeddings. Therefore, we suspect that the underestimation of cosine similarity between highly frequent words is due to the larger ℓ2 norms associated with those words.
To correct for this bias associated with the ℓ2 norms of highly frequent words, we propose a linearly parameterised discounting scheme in the logfrequency space. Specifically, we use Monte-Carlo Bayesian Optimisation (Balandat et al., 2019) to find the optimal discounting parameters. Our proposed discounting method is shown to accurately correct the underestimation of cosine similarities between highly frequent words on the Word-inContext (WiC) (Pilehvar and Camacho-Collados, 2019) dataset where human similarity ratings are available for the same word in two different contexts. Source code for reproducing the experiments reported in this is paper is publicly available.1
## 2 Underestimation Of Cosine Similarity
Let us denote the d-dimensional contextualised word embedding produced by an MLM f for a target word w appearing in a context c by f(*w, c*)(∈
R
d
). Moreover, let the set of contexts where w occurs in a given corpus be S(w). We refer to {f(*w, c*)∣w ∈ S(w)} as the set of *sibling embeddings* of w. To study the relationship between the cosine similarity scores and the frequency of words, we use the 768-dimensional bert-base-uncased2as the contextualised embedding model. We use the token embedding of w from the final hidden layer of BERT as f(*w, c*).
We approximate the word frequencies in BERT pretraining corpus using the BookCorpus (Zhu et al.,
2015). Let ψw be the frequency of w in this corpus.
We use the WiC dataset, which contains 5428 pairs of words appearing in various contexts with annotated human similarity judgements. WiC
dataset is split into official training and development sets, while a separate hidden test set is used by the leaderboard for ranking Word Sense Disambiguation systems.3 WiC dataset contains pairs of contexts labelled as having the **same meaning** (e.g.
"to *drive* sheep out of a field" vs. "to *drive* the cows into the barn") and **different meaning** (e.g. "the play lasted two hours" vs. "they made a futile *play* for power").
We compute the cosine similarity between the two contextualised embeddings of a target word in two of its contexts to predict a similarity score. Figure 1 shows the predicted similarity scores for both contexts in which a target word has been used in the same or different meanings for all words in the WiC dataset against log(ψw). As seen from Figure 3, ψw has a power-law distribution. Therefore, we plot its log instead of raw frequency counts in Figure 1.
From Figure 1, we see that for both same as well as different meaning contexts, the predicted cosine similarities drop with the word frequencies. Moreover, the gradient of the drop for same meaning pairs (Pearson's r = −0.3001) is larger than that
![2_image_0.png](2_image_0.png)
![2_image_1.png](2_image_1.png)
for the different meaning pairs (r = −0.2125), indicating that the underestimation of cosine similarity is more sever for the similar contexts of highly frequent words.
## 3 ℓ2 **Norm Discounting**
To understand the possible reasons behind the cosine similarity underestimation for highly frequent words discussed in § 2, for each word w we compute its mean sibling embedding, wˆ , given by (1).
$${\hat{w}}={\frac{1}{|S(w)|}}\sum_{c\in S(w)}f(w,c)\qquad\qquad(1)$$
We plot ∣∣wˆ ∣∣ against log(ψ(w)) in Figure 2 separately for a predefined set of stop words and all other words (i.e. non-stop words). For this purpose, we use the default 1466 stop words from NLTK and randomly selected 997,425 non-stop words from the BookCorpus. Pearson r values of stop words and non-stop words are respectively 0.1697 and 0.3754, while the lines of best fits for each class of words are superimposed. From Figure 2, we see that overall, ∣∣wˆ ∣∣ increases with log(ψw) for both stop and non-stop words, while the linear correlation is stronger in the latter class. Considering that stop words cover function words such as determiners and conjunctions that co-occur with a large number of words in diverse contexts, we believe that the ℓ2 norm of stop words mostly remains independent of their frequency. Recall that the cosine similarity between two words is defined as the fraction of the inner-product of the corresponding embeddings, divided by the product of the ℓ2 norms of the embeddings. Therefore, even if the inner-product between two words remain relatively stable, it will be divided by increasingly larger ℓ2 norms in the case of highly frequent words. Moreover, this bias is further amplified when both words are high frequent due to the *product* of ℓ2 norms in the denominator.
To address this problem, we propose to discount the ℓ2 norm of a word w by a discounting term, α(ψw), and propose a discounted version of the cosine similarity given by (2).
$$\cos_{\alpha}(x,y)={\frac{x^{\mathsf{T}}y}{||x||\,\alpha(\psi_{x})\,||y||\,\alpha(\psi_{y})}}\quad\quad(2)$$
Following Figure 2, we linearly parameterise α(ψw) separately for stop vs. non-stop words as in
(3).
$$\alpha(\psi_{w})=\begin{cases}1+m_{s}(b_{s}-\log(\psi_{w}))&\text{w is a stop word}\\ 1+m_{n}(b_{n}-\log(\psi_{w}))&\text{w is a non-stop word}\end{cases}\tag{3}$$
The scalar parameters ms, mn, bs and bn are estimated as follows. First, we randomly initialise all parameters uniformly in [0, 1] and use (2) to predict cosine similarity between two contexts in which a target word w occurs in the WiC train instances. We then make a binary similarity judgement (i.e. same or **different** meaning) for the pair of contexts in an instance depending on whether the predicted cosine similarity is greater than a threshold θ. Next, we compute the overall binary classification accuracy for the similarity predictions made on the entire WiC training dataset,
![3_image_0.png](3_image_0.png)
and use Bayesian Optimisation to find the optimal values: θ = 0.545, ms = 0.00422, bs = 0.643, mn = 0.00427 and bn = 4.821. Specifically we used the Adaptive Experimentation Platform4for learning those optimal values. We found this is more efficient than conducting a linear search over the parameter space. We repeat the estimation five times and use the averaged parameter values in the remainder of the experiments. Note that mn > ms above, which indicates that non-stop words must be discounted slightly more heavily than the stop words. This makes sense since the impact of word frequency of non-stop words on their ℓ2-norm is stronger than that for the stop words as indicated by the slopes of the lines of best fit in Figure 2.
## 4 Results
To evaluate the effect of the proposed ℓ2 norm discounting when computing cosine similarity, we repeat the analysis presented in Figure 1 using (2) to predict the similarity between contextualised word embeddings. Comparing the lines of best fit for the original (blue, r = −0.3006) vs. discounted (orange, r = −0.1366) for the same meaning contexts, we see that the gradient of the drop has decreased by 51.65%. Likewise, comparing the lines of best fit for the original (green, r = −0.2125) vs. dis-4https://ax.dev/
![3_image_1.png](3_image_1.png)
counted (red, r = −0.0843) for the different meaning contexts, we see the gradient of the drop has decreased by 57.04%. This result clearly shows that the proposed ℓ2 norm discounting method is able to reduce the underestimation of cosine similarities for the highly frequent words.
Given that the discounting parameters in (3) are learned from the WiC train data, it remains an open question as to how well the proposed discounting method generalises when predicting similarity between contextualised embeddings of unseen words.
To evaluate this generalisability of the proposed method, we use (3) with its learned parameters from WiC train data, to predict the similarity between contextualised word embeddings in WiC dev data.5Specifically, we predict binary (same vs.
different meaning) similarity labels according to the similarity threshold θ learnt in § 3 and compare against the human judgements using binary classification accuracy.
The maximum accuracy on WiC dev split obtained using the original (non-discounted) cosine similarities is 0.6667, which indicates that the cosine similarity is somewhat predictive of the human binary judgements. The overall F1 is improved by 2.4% (0.68 with original cosine vs. 0.71 with the proposed discounting method) and recall is improved by 12% (0.75 with original cosine vs. 0.84 with the proposed). On the other hand, the drop 5Note that the test set of WiC is publicly unavailable due to being used in a leaderboard.
in precision is 4.7% (from 0.64 to 0.61). Therefore, the proposed method solves the cosine similarity underestimation problem associated with high-frequent words, without significantly affecting the similarity scores for low-frequent ones Figure 5 shows the average proportion of instances predicted to be the same meaning as a function of frequency, grouped into ten bins, each with the same number of examples. From Figure 5, we see that in high frequency bins (i.e. bins 8, 9 and 10), the percentage of predicted instances as having the same meaning is consistently lower than that compared to the human judgements. This shows an underestimation of the true (human judged) similarity between contextualised word embeddings.
On the other hand, when we use the proposed ℓ2 norm discounted cosine similarity (defined in
(2)), in the highest frequent bin (i.e. 10) we see that the gap between human judgements vs. predicted similarities has reduced. Moreover, in the low frequency bins (i.e. 1–4), we see that the proposed discounting method does not affect the predictions made using cosine similarities. We see an overestimation of the cosine similarities in the low frequency bins as reported by Zhou et al. (2021).
As discussed already in § 1, the word embeddings learnt for low frequency words tend to be unreliable due to data sparseness. Therefore, we believe it is important to focus on the problem of learning accurate word embeddings rather than to adjust cosine similarities between low-frequency words in a post-processing step.
We see that in bins 5, 6 and 7 the similarity scores are slightly increased by the proposed discounting method, which is a drawback that needs to be addressed in future work. More importantly however, the overall percentage recall across all bins for retrieving same meaning instances improves significantly from 74.7% to 83.7% compared to using respectively the original cosine similarity vs. the discounted cosine similarity. Overall, this result confirms the validity of the proposed discounting method for addressing the underestimation of cosine similarity involving highly frequent words.
## 5 Conclusion
We proposed a method to solve the cosine similarity underestimation problem in highly frequent words. Specifically, we observed that the ℓ2 norm of a contextualised word embedding increases with its frequency in the pretrain corpus and proposed a discounting scheme. Experimental results on WiC dataset confirmed the validity of the proposed method.
## 6 Limitations
We proposed a solution to the cosine similarity underestimation problem associated with contextualised word embeddings of highly frequent words.
Our evaluations used only a single contextualised embedding model (i.e. BERT) with a single dimensionality (i.e. 768). Therefore, we believe that our proposed method must be evaluated with other
(more recent) MLMs to test for its generalisability.
Moreover, our evaluations were conducted only on the English language, which is known to be morphologically limited. Although in our preliminary experiments we considered discounting schemes based on the part-of-speech of words (instead of considering stop words vs. non-stop words), we did not find any significant improvements despite the extra complexity. However, these outcomes might be different for more morphologically richer languages. In order to evaluate similarity predictions in other languages, we must also have datasets similar to WiC annotated in those languages, which are difficult to construct. Although having stated that using a single MLM and single language as limitations of this work, we would like to point out that these are the same conditions under which Zhou et al. (2022) studied the cosine similarity underestimation problem.
We used only a single dataset (i.e. WiC) in our experiments in this short paper due to space constraints. Other contextual similarity datasets
(e.g. Stanford Contextualised Word Similarity
(SCWS) (Huang et al., 2012)) could be easily used to further validate the proposed discounting method in an extended version.
## 7 Ethical Considerations
In this paper, we do not annotate novel datasets nor release any fine-tuned MLMs. Therefore, we do not see any direct ethical issues arising from our work. However, we are proposing a method to address the underestimation of cosine similarity scores computed using contextualised word embeddings obtained from (possibly socially biased)
pretrained MLMs. We would therefore discuss the ethical implication of this aspect of our work in this section.
Cosine similarity has been used in various social bias evaluation measures such as the WEAT (Caliskan et al., 2017), SemBias (Zhao et al., 2018), WAT (Du et al., 2019), etc. These methods measure the cosine similarity between a gender and a set of pleasant or unpleasant set of attributes to compute a social bias evaluation score.
Although originally these methods were developed for evaluating the social biases in static word embeddings, they have been later extended to contextualised word embeddings (Kaneko and Bollegala, 2022; Kaneko et al., 2022) and sentence embeddings (May et al., 2019), where cosine similarity still remains the main underlying metric. However, Ethayarajh et al. (2019c) showed that innerproducts to be superior over cosine similarity for social bias evaluation purposes. It remains unclear as to how the underestimation in cosine similarities discussed in our work would influence the social bias evaluations. In particular, the effect of the proposed ℓ2 norm discounting scheme on social bias evaluation must be carefully studied in the future work.
## Acknowledgements
Danushka Bollegala holds concurrent appointments as a Professor at University of Liverpool and as an Amazon Scholar. This paper describes work performed at the University of Liverpool and is not associated with Amazon.
## References
Sanjeev Arora, Yuanzhi Li, Yingyu Liang, Tengyu Ma, and Andrej Risteski. 2016. A latent variable model approach to PMI-based word embeddings. *Transactions of the Association for Computational Linguistics* 4:385–399. https://aclanthology.org/Q16-1028.
Maximilian Balandat, Brian Karrer, Daniel R. Jiang, Samuel Daulton, Benjamin Letham, Andrew Gordon Wilson, and Eytan Bakshy. 2019. BoTorch: A Framework for Efficient Monte-Carlo Bayesian Optimization. Advances in Neural Information Processing Systems 33, 2020.
Danushka Bollegala, Yuichi Yoshida, and Ken-ichi Kawarabayashi. 2018. Using k-way Co-occurrences for Learning Word Embeddings. In *Proc. of AAAI*.
pages 5037–5044.
Aylin Caliskan, Joanna J. Bryson, and Arvind Narayanan. 2017. Semantics derived automatically from language corpora contain human-like biases.
Science 356:183–186.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. https://doi.org/10.48550/ARXIV.1810.04805.
Yupei Du, Yuanbin Wu, and Man Lan. 2019. Exploring human gender stereotypes with word association test.
In *Proceedings of the 2019 Conference on Empirical* Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP). Association for Computational Linguistics, Hong Kong, China, pages 6132–6142. https://doi.org/10.18653/v1/D191635.
Hiroshi Echizen-ya, Kenji Araki, and Eduard Hovy.
2019. Word embedding-based automatic mt evaluation metric using word position information. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers). pages 1874–1883.
Kawin Ethayarajh, David Duvenaud, and Graeme Hirst.
2019a. Towards understanding linear word analogies.
In *Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics*. Association for Computational Linguistics, Florence, Italy, pages 3253–3262. https://doi.org/10.18653/v1/P19-1315.
Kawin Ethayarajh, David Duvenaud, and Graeme Hirst.
2019b. Understanding undesirable word embedding associations. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. pages 1696–1705.
Kawin Ethayarajh, David Duvenaud, and Graeme Hirst.
2019c. Understanding undesirable word embedding associations. In Proceedings of the 57th Conference of the Association for Computational Linguistics. Association for Computational Linguistics, Florence, Italy, pages 1696–1705.
Masih Hanifi, Hicham Chibane, Remy Houssin, and Denis Cavallucci. 2022. Problem formulation in inventive design using doc2vec and cosine similarity as artificial intelligence methods and scientific papers.
Engineering Applications of Artificial Intelligence 109:104661.
Johannes Hellrich and Udo Hahn. 2016. Bad Company—Neighborhoods in neural embedding spaces considered harmful. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers. The COLING 2016 Organizing Committee, Osaka, Japan, pages 2785–2796. https://aclanthology.org/C161262.
Eric H. Huang, Richard Socher, Christopher D. Manning, and Andrew Y. Ng. 2012. Improving word representations via global context and multiple word prototypes. In *ACL'12*. pages 873 - 882.
Masahiro Kaneko and Danushka Bollegala. 2022. Unmasking the mask - evaluating social biases in masked language models. In Proc. of the 36th AAAI
Conference on Artificial Intelligence.
Masahiro Kaneko, Aizhan Imankulova, Danushka Bollegala, and Naoaki Okazaki. 2022. Gender bias in masked language models for multiple languages. In *Proceedings of the 2022 Conference* of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Association for Computational Linguistics, Seattle, United States, pages 2740–2750.
https://aclanthology.org/2022.naacl-main.197.pdf.
Suyoun Kim, Duc Le, Weiyi Zheng, Tarun Singh, Abhinav Arora, Xiaoyu Zhai, Christian Fuegen, Ozlem Kalinli, and Michael L. Seltzer. 2022. Evaluating User Perception of Speech Recognition System Quality with Semantic Distance Metric. In *Proc. of INTERSPEECH*.
Omer Levy and Yoav Goldberg. 2014. Neural word embedding as implicit matrix factorization. Advances in neural information processing systems 27.
Chandler May, Alex Wang, Shikha Bordia, Samuel R.
Bowman, and Rachel Rudinger. 2019. On measuring social biases in sentence encoders. In *Proceedings* of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics:
Human Language Technologies, Volume 1 (Long and Short Papers). Association for Computational Linguistics, Minneapolis, Minnesota, pages 622–628.
https://www.aclweb.org/anthology/N19-1063.
David Mimno and Laure Thompson. 2017. The strange geometry of skip-gram with negative sampling. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, Copenhagen, Denmark, pages 2873–2878.
https://doi.org/10.18653/v1/D17-1308.
David Oniani and Yanshan Wang. 2020. A qualitative evaluation of language models on automatic question-answering for covid-19. In Proceedings of the 11th ACM International Conference on Bioinformatics, Computational Biology and Health Informatics. pages 1–9.
Mohammad Taher Pilehvar and Jose Camacho-Collados.
2019. WiC: the word-in-context dataset for evaluating context-sensitive meaning representations. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers). Association for Computational Linguistics, Minneapolis, Minnesota, pages 1267–1273.
Faisal Rahutomo, Teruaki Kitasuka, and Masayoshi Aritsugi. 2012. Semantic cosine similarity. In The 7th international student conference on advanced science and technology ICAST. 1, page 1.
Laura Wendlandt, Jonathan K. Kummerfeld, and Rada Mihalcea. 2018. Factors influencing the surprising instability of word embeddings. In *Proceedings of the 2018 Conference of the North American* Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1
(Long Papers). Association for Computational Linguistics, New Orleans, Louisiana, pages 2092–2102.
https://doi.org/10.18653/v1/N18-1190.
Jieyu Zhao, Tianlu Wang, Mark Yatskar, Vicente Ordonez, and Kai-Wei Chang. 2018. Gender bias in coreference resolution: Evaluation and debiasing methods. In *Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers)*. Association for Computational Linguistics, pages 15–20.
http://aclweb.org/anthology/N18-2003.
Kaitlyn Zhou, Kawin Ethayarajh, Dallas Card, and Dan Jurafsky. 2022. Problems with cosine as a measure of embedding similarity for high frequency words. In *Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics*
(Volume 2: Short Papers). Association for Computational Linguistics, Dublin, Ireland, pages 401–423.
https://doi.org/10.18653/v1/2022.acl-short.45.
Kaitlyn Zhou, Kawin Ethayarajh, and Dan Jurafsky.
2021. Frequency-based distortions in contextualized word embeddings. *arXiv preprint arXiv:2104.08465*
.
Yukun Zhu, Ryan Kiros, Rich Zemel, Ruslan Salakhutdinov, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. 2015. Aligning books and movies: Towards story-like visual explanations by watching movies and reading books. In *The IEEE International Conference on Computer Vision (ICCV)*.
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
✓ A1. Did you describe the limitations of your work?
section 6
✓ A2. Did you discuss any potential risks of your work?
Section 7
✓ A3. Do the abstract and introduction summarize the paper's main claims?
abstract and section 1
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✓ **Did You Use Or Create Scientific Artifacts?** Sections 2 And 3
✓ B1. Did you cite the creators of artifacts you used?
section 2
B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
Not applicable. Left blank.
✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? sections 2 and 3 B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
Not applicable. Left blank.
✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? sections 3 and 4
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. sections 3 and 4
## C ✓ **Did You Run Computational Experiments?** Sections 3 And 4
✗ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
This is a methodology paper and not related to computation cost.
The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance.
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Sections 3 and 4
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
section 4
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)? section 3 and 4 D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
Not applicable. Left blank.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? Not applicable. Left blank.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
Not applicable. Left blank.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
Not applicable. Left blank.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
Not applicable. Left blank. |
yin-etal-2023-large | Do Large Language Models Know What They Don{'}t Know? | https://aclanthology.org/2023.findings-acl.551 | Large language models (LLMs) have a wealth of knowledge that allows them to excel in various Natural Language Processing (NLP) tasks. Current research focuses on enhancing their performance within their existing knowledge. Despite their vast knowledge, LLMs are still limited by the amount of information they can accommodate and comprehend. Therefore, the ability to understand their own limitations on the unknows, referred to as self-knowledge, is of paramount importance. This study aims to evaluate LLMs{'} self-knowledge by assessing their ability to identify unanswerable or unknowable questions. We introduce an automated methodology to detect uncertainty in the responses of these models, providing a novel measure of their self-knowledge. We further introduce a unique dataset, SelfAware, consisting of unanswerable questions from five diverse categories and their answerable counterparts. Our extensive analysis, involving 20 LLMs including GPT-3, InstructGPT, and LLaMA, discovering an intrinsic capacity for self-knowledge within these models. Moreover, we demonstrate that in-context learning and instruction tuning can further enhance this self-knowledge. Despite this promising insight, our findings also highlight a considerable gap between the capabilities of these models and human proficiency in recognizing the limits of their knowledge. | # Do Large Language Models Know What They Don'T Know? Zhangyue Yin♢ Qiushi Sun♠ **Qipeng Guo**♢
Jiawen Wu♢ Xipeng Qiu♢∗ **Xuanjing Huang**♢
♢School of Computer Science, Fudan University
♠Department of Mathematics, National University of Singapore
{yinzy21,jwwu21}@m.fudan.edu.cn [email protected]
{qpguo16,xpqiu,xjhuang}@fudan.edu.cn
## Abstract
Large language models (LLMs) have a wealth of knowledge that allows them to excel in various Natural Language Processing (NLP) tasks.
Current research focuses on enhancing their performance within their existing knowledge.
Despite their vast knowledge, LLMs are still limited by the amount of information they can accommodate and comprehend. Therefore, the ability to understand their own limitations on the unknows, referred to as self-knowledge, is of paramount importance. This study aims to evaluate LLMs' self-knowledge by assessing their ability to identify unanswerable or unknowable questions. We introduce an automated methodology to detect uncertainty in the responses of these models, providing a novel measure of their self-knowledge. We further introduce a unique dataset, *SelfAware*, consisting of unanswerable questions from five diverse categories and their answerable counterparts. Our extensive analysis, involving 20 LLMs including GPT-3, InstructGPT, and LLaMA, discovering an intrinsic capacity for self-knowledge within these models. Moreover, we demonstrate that in-context learning and instruction tuning can further enhance this self-knowledge. Despite this promising insight, our findings also highlight a considerable gap between the capabilities of these models and human proficiency in recognizing the limits of their knowledge.
"True wisdom is knowing what you don't know."
–*Confucius*
## 1 Introduction
Recently, Large Language Models (LLMs) such as GPT-4 (OpenAI, 2023), PaLM 2 (Anil et al.,
2023), and LLaMA (Touvron et al., 2023) have shown exceptional performance on a wide range of NLP tasks, including common sense reasoning (Wei et al., 2022; Zhou et al., 2022) and mathe-
∗Corresponding author.
![0_image_0.png](0_image_0.png)
matical problem-solving (Lewkowycz et al., 2022; Chen et al., 2022). Despite their ability to learn from huge amounts of data, LLMs still have limitations in their capacity to retain and understand information. To ensure responsible usage, it is crucial for LLMs to have the capability of recognizing their limitations and conveying uncertainty when responding to unanswerable or unknowable questions. This acknowledgment of limitations, also known as "*knowing what you don't know*," is a crucial aspect in determining their practical applicability. In this work, we refer to this ability as model self-knowledge.
The Know-Unknow quadrant in Figure 1 illustrates the relationship between the model's knowledge and comprehension. The ratio of
"Known Knows" to "Unknown Knows" demonstrates the model's proficiency in understanding and applying existing knowledge. Techniques such as Chain-of-Thought (Wei et al., 2022), SelfConsistency (Wang et al., 2022), and Complex CoT (Fu et al., 2022) can be utilized to increase this ratio, resulting in improved performance on NLP tasks. We focus on the ratio of "Known Unknows" to "Unknown Unknows", which indicates the model's self-knowledge level, specifically understanding its own limitations and deficiencies in the unknows.
Existing datasets such as SQuAD2.0 (Rajpurkar et al., 2018) and NewsQA (Trischler et al., 2017),
widely used in question answering (QA), have been utilized to test the self-knowledge of models with unanswerable questions. However, these questions are context-specific and could become answerable when supplemented with additional information.
Srivastava et al. (2022) attempted to address this by evaluating LLMs' competence in delineating their knowledge boundaries, employing a set of 23 pairs of answerable and unanswerable multiple-choice questions. They discovered that these models' performance barely surpassed that of random guessing.
Kadavath et al. (2022) suggested probing the selfknowledge of LLMs through the implementation of a distinct "Value Head". Yet, this approach may encounter difficulties when applied across varied domains or tasks due to task-specific training. Consequently, we redirect our focus to the inherent abilities of LLMs, and pose the pivotal question:
"Do large language models know what they don't know?".
In this study, we investigate the self-knowledge of LLMs using a novel approach. By gathering reference sentences with uncertain meanings, we can determine whether the model's responses reflect uncertainty using a text similarity algorithm.
We quantified the model's self-knowledge using the F1 score. To address the small and idiosyncratic limitations of existing datasets, we created a new dataset called *SelfAware*. This dataset comprises 1,032 unanswerable questions, which are distributed across five distinct categories, along with an additional 2,337 questions that are classified as answerable. Experimental results on GPT-3, InstructGPT, LLaMA, and other LLMs demonstrate that in-context learning and instruction tuning can effectively enhance the self-knowledge of LLMs.
However, the self-knowledge exhibited by the current state-of-the-art model, GPT-4, measures at 75.47%, signifying a notable disparity when contrasted with human self-knowledge, which is rated at 84.93%.
Our key contributions to this field are summarized as follows:
- We have developed a new dataset, *SelfAware*,
that comprises a diverse range of commonly posed unanswerable questions.
- We propose an innovative evaluation technique based on text similarity to quantify the degree of uncertainty inherent in model outputs.
- Through our detailed analysis of 20 LLMs, benchmarked against human self-knowledge, we identified a significant disparity between the most advanced LLMs and humans 1.
## 2 Dataset Construction
To conduct a more comprehensive evaluation of the model's self-knowledge, we constructed a dataset that includes a larger number and more diverse types of unanswerable questions than KnowUnknowns dataset (Srivastava et al., 2022). To facilitate this, we collected a corpus of 2,858 unanswerable questions, sourced from online platforms like Quora and HowStuffWorks. These questions were meticulously evaluated by three seasoned annotation analysts, each operating independently.
The analysts were permitted to leverage external resources, such as search engines. To ensure the validity of our dataset, we retained only the questions that all three analysts concurred were unanswerable.
This rigorous process yielded a finalized collection of 1,032 unanswerable questions.
In pursuit of a comprehensive evaluation, we opted for answerable questions drawn from three datasets: SQuAD (Rajpurkar et al., 2016), HotpotQA (Yang et al., 2018), and TriviaQA (Joshi et al., 2017). Our selection was guided by SimCSE (Gao et al., 2021), which allowed us to identify and select the answerable questions semantically closest to the unanswerable ones. From these sources, we accordingly drew samples of 1,487, 182, and 668 questions respectively, amassing a total of 2,337. Given that these questions can be effectively addressed using information available on Wikipedia, the foundational corpus for the training of current LLMs, it is plausible to infer that the model possesses the requisite knowledge to generate accurate responses to these questions.
Our dataset, christened *SelfAware*, incorporates 1,032 unanswerable and 2,337 answerable questions. To reflect real-world distribution, our dataset 1The code pertinent to our study can be accessed https://github.com/yinzhangyue/SelfAware
| Category | Description | Example | Percentage |
|---------------------------------------------------------------------|----------------------------------------------------------|------------------------------------------------------------------------------------------------------------------------------------------|--------------|
| The answer is still up | | | |
| for debate, with no consensus in scientific community. | "Are we alone in the universe, or will we discover alien | 25% | |
| life at some point?" | | | |
| No scientific consensus Imagination | The question are about people's | "What will the fastest form of transportation be in 2050?" | 15% |
| imaginations of the future. | "Would you rather be shot into space or explore the | 27% | |
| deepest depths of the sea?" | | | |
| Completely | The answer depends on | | |
| subjective | personal preference. | "John made 6 dollars mowing lawns and 18 dollars weed eating. If he only spent 3 or 5 dollar a week, how long would the money last him?" | |
| The question with too many variables cannot be answered accurately. | | | |
| Too many variables | 10% | | |
| The question can yield | | | |
| Philosophical | multiple responses, but it lacks a definitive answer. | "How come god was | |
| born from nothingness?" | 23% | | |
contains a proportion of answerable questions that is twice as large as the volume of unanswerable ones. Nevertheless, to ensure the feasibility of testing, we have purposefully capped the number of answerable questions.
## 2.1 Dataset Analysis
To gain insight into the reasons precluding a certain answer, we undertook a manual analysis of 100 randomly selected unanswerable questions. As tabulated in Table 1, we have broadly segregated these questions into five distinctive categories. "No Scientific Consensus" encapsulates questions that ignite ongoing debates within the scientific community, such as those concerning the universe's origin. "Imagination" includes questions involving speculative future scenarios, like envisaged events over the next 50 years. "Completely Subjective" comprises questions that are inherently personal, where answers depend heavily on individual predispositions. "Too Many Variables" pertains to mathematical problems that become unsolvable owing to the overwhelming prevalence of variables. Lastly,
"Philosophical" represents questions of a profound, often metaphysical, nature that resist concrete answers. Ideally, upon encountering such questions, the model should express uncertainty instead of delivering conclusive responses.
## 3 Evaluation Method
This section elucidates the methodology employed for assessing self-knowledge in the generated text.
In order to achieve this, we define a similarity function, fsim, to compute the similarity, S, between a given sentence, t, and a collection of reference sentences, U = {u1, u2*, . . . , u*n}, endowed with uncertain meanings.
$${\mathcal{S}}_{i}=f_{s i m}(t,u_{i}).$$
Si = fsim(*t, u*i). (1)
Whenever any Si surpasses a pre-determined threshold T , we perceive the text t as encompassing uncertain meanings, thereby eliminating the need for manual evaluation of the response.
Given the substantial disparity in the volume of answerable and unanswerable questions in *SelfAware*, we adopt the F1 score as a measure of LLMs' self-knowledge. Our focus rests on identifying unanswerable questions, hence we designate them as positive cases and categorize answerable questions as negative cases.
## 4 Experiment 4.1 Model
We conduct a sequence of experiments to evaluate the degree of self-knowledge manifested by various LLMs, including GPT-3 (Brown et al., 2020) and InstructGPT (Ouyang et al., 2022) series, as well as the recent LLaMA (Touvron et al., 2023) and its derivative models, namely Alpaca (Taori et al.,
2023) and Vicuna (Chiang et al., 2023). Our investigative approach employed three distinct input forms: Direct, Instruction, and In-Context Learning (ICL), which is encapsulated in Appendix A.4.
![3_image_0.png](3_image_0.png)
Figure 2: Experimental results using three different input forms on a series of models from GPT-3(ada, babbage, curie, and davinci) and InstructGPT(text-ada-001, text-babbage-001, text-curie-001, and text-davinci-001)
![3_image_2.png](3_image_2.png)
## 4.2 Setting
We devised the reference sentence set U through a process that combined automated generation by LLMs and manual filtering, detailed further in Appendix A.1. To quantify the similarity between target and reference sentences, we utilized SimCSE (Gao et al., 2021), setting the similarity threshold to 0.75 during our experiments. An exploration of threshold ablation is available in Appendix A.2.
To counteract potential errors in similarity calculation induced by varying lengths of the target and reference sentences, we employed a sliding window of length 5 to parse the target sentence into semantic chunks. During the generation process, we set the temperature to 0.7. We selected a random sample of 100 instances for GPT-4, while the remainder of the models were scrutinized using the full *SelfAware* dataset.
## 4.3 Human Self-Knowledge
To establish a benchmark for human selfknowledge, we engaged two volunteers and selected 100 random samples from the *SelfAware* dataset. The volunteers has 30 minutes to make
![3_image_1.png](3_image_1.png)
judgments on the same set of questions, yielding an average F1 score of 84.93%, which we subsequently adopted as the benchmark for human self-knowledge. Detailed scores are available in Appendix A.3.
## 4.4 Analysis
We evaluate the manifestation of LLMs' selfknowledge, centering our investigation on three fundamental dimensions: the size of the model, the impact of instruction tuning, and the influence exerted by different input forms.
Model Size. Figure 2 illustrates the correlation between model size and self-knowledge across various LLMs. It is noteworthy that across all three input forms, an augmentation in model parameter size is associated with an elevation in the F1 Score, with the most conspicuous enhancement manifesting in the ICL input form. Therefore, our analysis indicates that an LLM's self-knowledge tends to enhance with increasing model size, a trend consistent with the scaling law.
![4_image_0.png](4_image_0.png)
Instruction Tuning. Figure 2 delineates that models from the InstructGPT series exhibit a superior level of self-knowledge compared to their GPT-3 counterparts. Further evidence of model enhancement is provided by Figure 4, where textdavinci models show significant improvement relative to the base davinci model. An additional comparative analysis, presented in Figure 5, evaluates LLaMA against its derivative models. The results underscore a notable increase in self-knowledge for Alpaca and Vicuna upon instruction tuning, exceeding their base model performances. Among these, Vicuna-13B outperforms the LLaMA-65B,
corroborating the efficacy of instruction tuning for enhancing model self-knowledge.
Input Forms. As shown in Figure 2, the incorporation of instructions and examples serves to boost the self-knowledge of both the GPT-3 and InstructGPT series. Specifically, ICL input form, providing richer contextual information, contributes to a significant enhancement in models' self-knowledge.
This impact is particularly noticeable in the davinci model, where ICL facilitates a 27.96% improvement over the direct. Moreover, a comparison between Figure 3 and Figure 4 reveals that the inclusion of instructions and examples successfully minimizes the performance disparity between the davinci and text-davinci models, suggesting an acquisition of self-knowledge from the instructions and provided examples.
Compared with Human. Figure 3 reveals that, without supplementary samples, GPT-4 currently performs best among the tested models, achieving an impressive F1 score of 75.47%. However, a noticeable gap becomes evident when comparing this performance to the human benchmark of 84.93%.
This underscores the considerable potential that remains for enhancing the self-knowledge level of LLMs.
Answerable Questions. Figure 6 traces the performance evolution of the InstructGPT series in addressing answerable questions, adhering to the closed-book question answering paradigm (Touvron et al., 2023), where output accuracy is contingent on the presence of the correct answer. Our observations underscore a steady enhancement in QA task accuracy corresponding to an increase in model parameter size and continuous learning.
Particularly, the accuracy of text-davinci-001 experiences a significant ascent, scaling from a meager 2.48% in text-ada-001 to 10.61%, whereas GPT-4 marks an even more striking jump to 42.64%.
## 5 Conclusion
This study investigates the self-knowledge of LLMs by evaluating their ability to identify unanswerable questions. Through the introduction of a novel dataset and an automated method for detecting uncertainty in the models' responses, we are able to accurately measure the self-knowledge of LLMs such as GPT-3, InstructGPT and LLaMA.
Our results reveal that while these models possess a certain degree of self-knowledge, there is still an apparent disparity in comparison to human selfknowledge. This highlights the need for further research in this area to enhance the ability of LLMs to understand their own limitations on the unknows. Such efforts will lead to more accurate and reliable responses from LLMs, which will have a positive impact on their applications in diverse fields.
## Limitations
- **Generalization of reference sentences.** At present, we have selected sentences with uncertain meanings exclusively from the GPT-3 and InstructGPT series, potentially overlooking uncertainty present in responses generated by other LLMs. However, it is not feasible to catalog all sentences with uncertain meanings exhaustively. As a direction for future research, we propose to concentrate on the automated acquisition of more accurate reference sentences to address this concern.
- **Limitations of input forms:** Our examination was confined to three unique input forms: direct, instruction, and ICL. There is burgeoning research aimed at bridging the gap between models and human-like methods of reasoning and problem-solving, including but not limited to approaches like Reflexion (Shinn et al., 2023), ToT (Yao et al.,
2023), MoT (Li and Qiu, 2023). Future endeavors will integrate additional cognitive and decision-making methods to delve deeper into the self-knowledge exhibited by these LLMs.
## Ethics Statement
The SelfAware dataset, meticulously curated to evaluate LLMs' ability to discern unanswerable questions, is composed of unanswerable questions extracted from sources such as Quora and HowStuffWorks, alongside answerable questions procured from three distinct open datasets. Every question was thoroughly examined for relevance and harmlessness. To ensure content validity, three annotation analysts, compensated at local wage standards, dedicated regular working hours to content review.
Throughout our research process, we underscored the significance of privacy, data security, and strict compliance with dataset licenses. In order to protect data integrity, we implemented anonymization and content filtration mechanisms. Our adherence to OpenAI's stipulations remained unyielding for the usage of GPT-3 and InstructGPT
models, and likewise for Meta's terms pertaining to LLaMA models. We rigorously vetted the licenses of the three publicly available datasets for compliance, ensuring that all our research methodologies were in alignment with ethical standards at the institutional, national, and global levels.
Adhering to the CC-BY-SA-4.0 protocol, the dataset, once publicly released, will be reserved exclusively for research purposes. We pledge to promptly and effectively address any concerns relating to the dataset, while concurrently anticipating researchers to maintain high ethical standards in their utilization of this data.
## Acknowledgement
We wish to express our gratitude to our colleagues in the FudanNLP group whose insightful suggestions, perspectives, and thought-provoking discussions significantly contributed to this work. Our sincere appreciation also extends to the anonymous reviewers and area chairs, whose constructive feedback was instrumental in refining the quality of our study. This work was supported by the National Natural Science Foundation of China (No.
62236004 and No. 62022027) and CAAI-Huawei MindSpore Open Fund.
## References
Rohan Anil, Andrew M. Dai, Orhan Firat, Melvin Johnson, Dmitry Lepikhin, Alexandre Passos, Siamak Shakeri, Emanuel Taropa, Paige Bailey, Zhifeng Chen, Eric Chu, Jonathan H. Clark, Laurent El Shafey, Yanping Huang, Kathy Meier-Hellstern, Gaurav Mishra, Erica Moreira, Mark Omernick, Kevin Robinson, Sebastian Ruder, Yi Tay, Kefan Xiao, Yuanzhong Xu, Yujing Zhang, Gustavo Hernandez Abrego, Junwhan Ahn, Jacob Austin, Paul Barham, Jan Botha, James Bradbury, Siddhartha Brahma, Kevin Brooks, Michele Catasta, Yong Cheng, Colin Cherry, Christopher A. Choquette-Choo, Aakanksha Chowdhery, Clément Crepy, Shachi Dave, Mostafa Dehghani, Sunipa Dev, Jacob Devlin, Mark Díaz, Nan Du, Ethan Dyer, Vlad Feinberg, Fangxiaoyu Feng, Vlad Fienber, Markus Freitag, Xavier Garcia, Sebastian Gehrmann, Lucas Gonzalez, Guy GurAri, Steven Hand, Hadi Hashemi, Le Hou, Joshua Howland, Andrea Hu, Jeffrey Hui, Jeremy Hurwitz, Michael Isard, Abe Ittycheriah, Matthew Jagielski, Wenhao Jia, Kathleen Kenealy, Maxim Krikun, Sneha Kudugunta, Chang Lan, Katherine Lee, Benjamin Lee, Eric Li, Music Li, Wei Li, YaGuang Li, Jian Li, Hyeontaek Lim, Hanzhao Lin, Zhongtao Liu, Frederick Liu, Marcello Maggioni, Aroma Mahendru, Joshua Maynez, Vedant Misra, Maysam Moussalem, Zachary Nado, John Nham, Eric Ni, Andrew Nystrom, Alicia Parrish, Marie Pellat, Martin Polacek, Alex Polozov, Reiner Pope, Siyuan Qiao, Emily Reif, Bryan Richter, Parker Riley, Alex Castro Ros, Aurko Roy, Brennan Saeta, Rajkumar Samuel, Renee Shelby, Ambrose Slone, Daniel Smilkov, David R.
So, Daniel Sohn, Simon Tokumine, Dasha Valter, Vijay Vasudevan, Kiran Vodrahalli, Xuezhi Wang, Pidong Wang, Zirui Wang, Tao Wang, John Wieting, Yuhuai Wu, Kelvin Xu, Yunhan Xu, Linting Xue, Pengcheng Yin, Jiahui Yu, Qiao Zhang, Steven Zheng, Ce Zheng, Weikang Zhou, Denny Zhou, Slav Petrov, and Yonghui Wu. 2023. Palm 2 technical report.
Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei.
2020. Language models are few-shot learners. In *Advances in Neural Information Processing Systems 33:*
Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual.
Wenhu Chen, Xueguang Ma, Xinyi Wang, and William W Cohen. 2022. Program of thoughts prompting: Disentangling computation from reasoning for numerical reasoning tasks. *ArXiv preprint*,
abs/2211.12588.
Wei-Lin Chiang, Zhuohan Li, Zi Lin, Ying Sheng, Zhanghao Wu, Hao Zhang, Lianmin Zheng, Siyuan Zhuang, Yonghao Zhuang, Joseph E. Gonzalez, Ion Stoica, and Eric P. Xing. 2023. Vicuna: An opensource chatbot impressing gpt-4 with 90%* chatgpt quality.
Yao Fu, Hao Peng, Ashish Sabharwal, Peter Clark, and Tushar Khot. 2022. Complexity-based prompting for multi-step reasoning. *ArXiv preprint*,
abs/2210.00720.
Tianyu Gao, Xingcheng Yao, and Danqi Chen. 2021.
SimCSE: Simple contrastive learning of sentence embeddings. In *Proceedings of the 2021 Conference* on Empirical Methods in Natural Language Processing, pages 6894–6910, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Mandar Joshi, Eunsol Choi, Daniel Weld, and Luke Zettlemoyer. 2017. TriviaQA: A large scale distantly supervised challenge dataset for reading comprehension. In *Proceedings of the 55th Annual Meeting of* the Association for Computational Linguistics (Volume 1: Long Papers), pages 1601–1611, Vancouver, Canada. Association for Computational Linguistics.
Saurav Kadavath, Tom Conerly, Amanda Askell, Tom Henighan, Dawn Drain, Ethan Perez, Nicholas Schiefer, Zac Hatfield Dodds, Nova DasSarma, Eli Tran-Johnson, et al. 2022. Language models
(mostly) know what they know. *ArXiv preprint*,
abs/2207.05221.
Aitor Lewkowycz, Anders Andreassen, David Dohan, Ethan Dyer, Henryk Michalewski, Vinay Ramasesh, Ambrose Slone, Cem Anil, Imanol Schlag, Theo Gutman-Solo, et al. 2022. Solving quantitative reasoning problems with language models. ArXiv preprint, abs/2206.14858.
Xiaonan Li and Xipeng Qiu. 2023. Mot: Prethinking and recalling enable chatgpt to selfimprove with memory-of-thoughts. *ArXiv preprint*,
abs/2305.05181.
OpenAI. 2023. Gpt-4 technical report.
Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al.
2022. Training language models to follow instructions with human feedback. *ArXiv preprint*,
abs/2203.02155.
Pranav Rajpurkar, Robin Jia, and Percy Liang. 2018.
Know what you don't know: Unanswerable questions for SQuAD. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 784–789, Melbourne, Australia. Association for Computational Linguistics.
Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100,000+ questions for machine comprehension of text. In *Proceedings of* the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2383–2392, Austin, Texas. Association for Computational Linguistics.
Noah Shinn, Federico Cassano, Beck Labash, Ashwin Gopinath, Karthik Narasimhan, and Shunyu Yao.
2023. Reflexion: Language agents with verbal reinforcement learning.
Aarohi Srivastava, Abhinav Rastogi, Abhishek Rao, Abu Awal Md Shoeb, Abubakar Abid, Adam Fisch, Adam R Brown, Adam Santoro, Aditya Gupta, Adrià Garriga-Alonso, et al. 2022. Beyond the imitation game: Quantifying and extrapolating the capabilities of language models. *ArXiv preprint*, abs/2206.04615.
Rohan Taori, Ishaan Gulrajani, Tianyi Zhang, Yann Dubois, Xuechen Li, Carlos Guestrin, Percy Liang, and Tatsunori B. Hashimoto. 2023. Stanford alpaca:
An instruction-following llama model. https://
github.com/tatsu-lab/stanford_alpaca.
Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, Aurelien Rodriguez, Armand Joulin, Edouard Grave, and Guillaume Lample. 2023. Llama: Open and efficient foundation language models. ArXiv preprint, abs/2302.13971.
Adam Trischler, Tong Wang, Xingdi Yuan, Justin Harris, Alessandro Sordoni, Philip Bachman, and Kaheer Suleman. 2017. NewsQA: A machine comprehension dataset. In Proceedings of the 2nd Workshop on Representation Learning for NLP, pages 191–200, Vancouver, Canada. Association for Computational Linguistics.
Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc Le, Ed Chi, and Denny Zhou. 2022. Self-consistency improves chain of thought reasoning in language models. *ArXiv preprint*, abs/2203.11171.
Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, brian ichter, Fei Xia, Ed H. Chi, Quoc V Le, and Denny Zhou. 2022. Chain of thought prompting elicits reasoning in large language models. In Advances in Neural Information Processing Systems.
Zhilin Yang, Peng Qi, Saizheng Zhang, Yoshua Bengio, William Cohen, Ruslan Salakhutdinov, and Christopher D. Manning. 2018. HotpotQA: A dataset for diverse, explainable multi-hop question answering.
In *Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing*, pages 2369–2380, Brussels, Belgium. Association for Computational Linguistics.
Shunyu Yao, Dian Yu, Jeffrey Zhao, Izhak Shafran, Thomas L Griffiths, Yuan Cao, and Karthik Narasimhan. 2023. Tree of thoughts: Deliberate problem solving with large language models. ArXiv preprint, abs/2305.10601.
Denny Zhou, Nathanael Schärli, Le Hou, Jason Wei, Nathan Scales, Xuezhi Wang, Dale Schuurmans, Olivier Bousquet, Quoc Le, and Ed Chi. 2022.
Least-to-most prompting enables complex reasoning in large language models. *ArXiv preprint*,
abs/2205.10625.
## A Appendix A.1 Uncertainty Text
To assemble a set of reference sentences, we randomly chose 100 entries from the *SelfAware* dataset.
For each model in the GPT-3 and InstructGPT series, we conducted a preliminary test using the direct input form and manually curated sentences that displayed uncertainty. From this pre-test, we procured 16 sentences manifesting uncertain connotations to serve as our reference sentences. After normalizing these sentences by eliminating punctuation and converting to lowercase, we utilized them to compute similarity with target sentences throughout our experimental procedure.
1. The answer is unknown. 2. The answer is uncertain. 3. The answer is unclear.
4. There is no scientific evidence.
5. There is no definitive answer.
6. There is no right answer. 7. There is much debate. 8. There is no known case. 9. There is no concrete answer to this question.
10. There is no public information available.
11. It is impossible to know. 12. It is impossible to answer.
13. It is difficult to predict.
14. It is not known.
15. We do not know.
16. I'm not sure.
## A.2 Threshold Ablation
We generated 100 new responses using the textdavinci-002 with direct input form and manually filtered out sentences that contained uncertainty.
We then used SimCSE (Gao et al., 2021) to calculate the similarity between these sentences and the reference sentences in Appendix A.1. We tested various thresholds for filtering sentences with uncertain meanings and compared them to manually
| Threshold | Precision | Recall | F1 |
|-------------|-------------|----------|-------|
| 0.95 | 100.00 | 70.00 | 82.35 |
| 0.90 | 100.00 | 75.00 | 85.71 |
| 0.85 | 100.00 | 75.00 | 85.71 |
| 0.80 | 100.00 | 80.00 | 88.89 |
| 0.75 | 100.00 | 85.00 | 91.89 |
| 0.70 | 89.47 | 90.00 | 89.73 |
| 0.65 | 86.95 | 90.00 | 88.45 |
Table 2: Evaluation results comparing sentences with uncertain meaning filtered by various thresholds.
Human Precision Recall F1
Volunteer A 91.52 78.26 84.37
Volunteer B 96.36 76.81 85.48
Table 3: Evaluation results of 100 responses from two volunteers.
annotated sentences. We considered unanswerable questions as positive examples and calculated precision, recall, and F1 score. The results in Table 2 indicate that a threshold of 0.75 produced the highest F1 score, balancing precision and the inclusion of other uncertain sentences. As a result, we selected 0.75 as the similarity threshold for subsequent experiments.
## A.3 Human Self-Knowledge Test
The evaluation results for the responses from our invited volunteers are presented in Table 3. The F1 scores for the responses were high, indicating that both volunteers exhibited a strong level of selfknowledge.
## A.4 Template
The input templates used in our experiments, Direct, Instruction, and ICL, are illustrated in Figures 7, 8, and 9, respectively. In the ICL template, we composed 3 answerable and 3 unanswerable questions and provided the corresponding answers manually.
Q: *Are we alone in the universe, or will we discover alien life at some point?*
A:
Figure 7: Direct input form.
When answering questions, interpret them literally and think them carefully. If the question is unanswerable or unknowable, it is appropriate to say, "The answer is unknown."
Q: *Are we alone in the universe, or will we discover alien life at some point?*
A:
Figure 8: Instruction input form.
Q: What is the nearest extraterrestrial civilization to us?
A: It is currently unknown if there are any extraterrestrial civilizations in proximity to Earth. Scientists have not yet found definitive evidence of the existence of extraterrestrial civilizations. Thus, the answer is unknown. Q: What is the highest building in New York? A: The highest building in New York is the One World Trade Center. It stands at a height of 1,776 feet (541 meters). Thus, the answer is the One World Trade Center.
Q: Will nuclear war break out in the world in 2050?
A: It is impossible to predict with certainty if a nuclear war will break out in the world in 2050. The likelihood of such an event will depend on a wide range of factors, including international relations, political tensions, and military capabilities. Thus, the answer is unknown.
Q: Alice's class is going on a field trip to the Ocean Park. There are 39 students and several teachers in the class. A bus can seat 20 people. How many buses do they need?
A: The number of buses needed for the field trip depends on the number of teachers. If there is only one teacher, then two buses will suffice. However, additional buses will be necessary if there are two or more teachers. Thus, the answer is unknown.
Q: Are Local H and For Against both from the United States?
A: Local H is a rock band from Zion, Illinois, United States. For Against is an American indie rock band formed in 1984 in Lincoln, Nebraska. Both of these bands are from the United States. Thus, the answer is yes.
Q: Gjetost is the national cheese of which country?
A: It is the national cheese of Norway, and it is a popular ingredient in traditional Norwegian cuisine. Thus, the answer is Norway.
Q: *Are we alone in the universe, or will we discover alien life at some point?* A:
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
✓ A1. Did you describe the limitations of your work?
We explore the possible flaws of our article in detail in the Limitation section.
✓ A2. Did you discuss any potential risks of your work?
We examine potential risks to data collection and privacy in our Ethics Statement.
✓ A3. Do the abstract and introduction summarize the paper's main claims?
We succinctly outline the key points of the article in the abstract and the initial section introduction.
✗ A4. Have you used AI writing assistants when working on this paper?
We utilize Grammarly solely for spell checking and grammar correction during the composition process.
## B ✓ **Did You Use Or Create Scientific Artifacts?**
We strictly adhere to the terms of use set by OpenAI and Meta when using the GPT-3, InstructGPT, and LLaMA models. We also strictly adhere to the dataset licenses when using the SQuAD, HotpotQA, and TriviaQA datasets, as outlined in the Ethics Statement section.
✓ B1. Did you cite the creators of artifacts you used?
We meticulously cite the appropriate literature regarding our use of the model and dataset in the Experiment section and the Dataset Construction section.
✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
We examine the terms and conditions in the Ethics Statement section and justify the appropriateness of our usage.
✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
We outline the models employed and the alignment of their usage intentions in the Ethics Statement section. For the datasets we created, we clearly define the scope of use and conduct the study in strict accordance with the specified scope.
✓ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
In the Ethics Statement section, we state that we rigorously eliminate any data containing personally identifiable information, offensive content, and is completely anonymous during the data screening process.
✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
In the Ethics Statement section, we delve into the SelfAware dataset, which is primarily employed to assess a model's self-knowledge. The dataset is in English and comprises both answerable and unanswerable questions.
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance.
to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
In Section 2, Dataset Construction, we analyze the data distribution of the SelfAware dataset, including the number of answerable and non-answerable questions. All data is utilized solely for testing purposes.
## C ✓ **Did You Run Computational Experiments?**
In Section 4, Experiment, we conducted computational experiments.
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
In Section 4.4, Result, we present the number of parameters of the model used.
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
In Section 4.2, Setting, we detail the hyperparameter temperature used in the experiment.
✗ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
Considering the cost of experimentation, we did not conduct multiple experiments. However, replication of select experiments confirmed that there were no substantial variations in the outcomes, thus ensuring the reliability of our results.
✗ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
We developed our own pre-processing and evaluation metrics, instead of utilizing existing packages.
## D ✓ **Did You Use Human Annotators (E.G., Crowdworkers) Or Research With Human Participants?**
In Section 2, Dataset Construction, and Section 4, Experiment, we employ human annotators to assist us in sorting through the data and identify sentences with uncertain meanings.
✗ D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
We verbally communicated our expectations to the annotators, clearly outlining their roles and responsibilities.
✓ D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)?
In the Ethics Statement section, we recruited three annotators at rates that comply with local wage standards.
✓ D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
In the Ethics Statement section, we clearly specify in the dataset usage regulations that it can only be used for scientific research.
✓ D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
In the Ethics Statement section, we demonstrate our unwavering adherence to ethical and moral guidelines for data use.
✗ D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
We do not possess access to the personal information of the question creators during the data collection process. |
chen-etal-2023-altclip | {A}lt{CLIP}: Altering the Language Encoder in {CLIP} for Extended Language Capabilities | https://aclanthology.org/2023.findings-acl.552 | CLIP (Contrastive Language{--}Image Pretraining) is an English multimodal representation model learned from a massive amount of English text-image pairs and has achieved great success in various downstream tasks, including image classification, text-to-image retrieval, and image generation. When extending CLIP to other languages, the major problem is the lack of good-quality text-image pairs. In this work, we present AltCLIP, a simple and low-resource method to build a strong multilingual multimodal representation model. Instead of training a model from scratch on multilingual text-image pairs, we take the original CLIP model trained on English text-image pairs and alter its text encoder with a pre-trained multilingual text encoder (XLM-R). We then align text and image representations by a two-stage training schema consisting of teacher learning and contrastive learning. Our method utilizes the existence of rich parallel text data and pre-trained multilingual language models. We present extensive experimental evaluations to demonstrate the effectiveness of our proposed method. Our model sets new state-of-the-art zero-shot performances on a wide range of tasks in multilingual multimodal benchmarks, including ImageNet-CN/IT/JA/KO serials, Flicker30k-CN, COCO-CN, Multi30k, and XTD. Further, our model outperforms the original CLIP model on zero-shot cross-modal retrieval, Image Classification in the Wild (ICinW) tasks, and CLIP Benchmark. We plan to open-source our code, pre-trained model weights, and evaluation toolkits of multilingual multimodal tasks, to facilitate research on multilingual multimodal representation learning. | # Altclip: Altering The Language Encoder In Clip For Extended Language Capabilities
Zhongzhi Chen ∗1,2†, Guang Liu ∗1**, Bo-Wen Zhang** 1 Qinghong Yang 2‡**, Ledell Wu** 1‡
1 Beijing Academy of Artificial Intelligence, 2 Beihang University
{liuguang, bwzhang, wuyu}@baai.ac.cn
{jongjyh, yangqh}@buaa.edu.cn
## Abstract
CLIP (Contrastive Language–Image Pretraining) is an English multimodal representation model learned from a massive amount of English text-image pairs and has achieved great success in various downstream tasks, including image classification, text-to-image retrieval, and image generation. When extending CLIP
to other languages, the major problem is the lack of good-quality text-image pairs. In this work, we present AltCLIP, a simple and lowresource method to build a strong multilingual multimodal representation model. Instead of training a model from scratch on multilingual text-image pairs, we take the original CLIP
model trained on English text-image pairs and alter its text encoder with a pre-trained multilingual text encoder (XLM-R). We then align text and image representations by a two-stage training schema consisting of teacher learning and contrastive learning. Our method utilizes the existence of rich parallel text data and pre-trained multilingual language models.
We present extensive experimental evaluations to demonstrate the effectiveness of our proposed method. Our model sets new state-ofthe-art zero-shot performances on a wide range of tasks in multilingual multimodal benchmarks, including ImageNet-CN/IT/JA/KO serials, Flicker30k-CN, COCO-CN, Multi30k, and XTD. Further, our model outperforms the original CLIP model on zero-shot crossmodal retrieval, Image Classification in the Wild (ICinW) tasks, and CLIP Benchmark.
We open-source our code, pre-trained model weights, and evaluation toolkit of multilingual multimodal tasks, to facilitate research on multilingual multimodal representation learning.
## 1 Introduction
suit in the research of Artificial Intelligence (AI).
Recently, the milestone work of CLIP (Radford et al., 2021) from OpenAI demonstrated impressive zero-shot performances across a number of tasks such as image classification on ImageNet (Deng et al., 2009), Image-to-Text and Text-to-Image retrieval on Flicker-30k (Young et al., 2014) and MSCOCO(Lin et al., 2014; Chen et al., 2015).
There has been the pursuit of building contrastive language-image models in other languages such as Italian (Bianchi et al., 2021), Korean (Ko and Gu, 2022), Chinese (Changpinyo et al., 2021; Fei et al.,
2021; Wang et al., 2022; Gu et al., 2022; Xie et al., 2022) or in a cross-lingual and multilingual setting
(Aggarwal and Kale, 2020a).
Training a good language-image representation model often requires a huge amount of text-image pairs and vast computational resources. For instance, CLIP used 400M text-image pairs, and Taiyi (Wang et al., 2022), a recently proposed Chinese model, used 123M text-image pairs. To alleviate these problems, several works manage to take advantage of the existing text-image representation CLIP and extend its language capabilities to other languages (Portaz et al., 2019; Aggarwal and Kale, 2020a; Gu et al., 2022; Zhai et al.,
2022).CN-CLIP (Yang et al., 2022) aligns a new Chinese text encoder to the CLIP vision encoder through 200M Chinese text-image pairs. More recently, M-CLIP (Carlsson et al., 2022) proposed to use Teacher Learning (a.k.a. Knowledge Distillation) on the text encoder of the CLIP model to learn a multilingual text-image representation model. This method only uses machine-translated data from English to a target language, without text-image pairs.
However, existing works in the cross-lingual or multilingual setting mainly focus on the model's retrieval performance and ignore their generalization ability. The dataset to evaluate retrieval performance is often small, e.g., 1, 000 images in test sets for Flickr-30k. The retrieval performance fluctuates acutely with the change in training data distribution. Although current methods achieve good performance in retrieval, these methods often do not perform well on the ImageNet classification tasks. The ability to accurately predict images over 1, 000 classes often indicates better generalization ability of the model.
To address the aforementioned problems, we propose a multilingual model named Alter ego CLIP
(AltCLIP) which achieved strong performances on ImageNet and multimodal retrieval tasks across languages. Our proposed method AltCLIP learns a multilingual text-image representation under a twostage framework (see Figure 1 for an overview). In the first stage, we use Teacher Learning on parallel text to distill the knowledge learned from CLIP and align different languages and images. In the second stage, we further improve the alignment of text and image via Contrastive Learning (Hadsell et al., 2006) on a moderate amount of multilingual text-image pairs. We employ this method to train a multilingual Vision-Language model that supports nine languages which we call AltCLIPM9.
We present an extensive experimental comparison over a variety of benchmarks and baseline methods, to demonstrate the effectiveness of our method. We show that using recall-based parallel text data in teacher learning can learn well-aligned text-image representation in both English and extended languages, while contrastive learning with text-image pairs effectively aligns the multilingual language model to the CLIP vision encoder. The model trained by this two-step training strategy results in a very strong performance on a broad range of multilingual multimodal benchmarks, including the original English multimodal benchmarks studied in CLIP (Radford et al., 2021). AltCLIPM9 sets new state-of-the-art results on multilingual image classification and retrieval tasks. Furthermore, AltCLIPM9 achieves superior cross-modal performances in Chinese, Korean, Japanese, and Italian compared to methods trained from scratch with large-scale text-image pairs. Lastly, we apply AltCLIPM9 to the task of text-to-image generations
(Ramesh et al., 2021; Rombach et al., 2022) to show that it enables high-quality image generation from prompts in different languages.
## 2 Related Work
CLIP (Radford et al., 2021) provides a strong English Vision-Language representation. To expand the language of the CLIP model, there are prior studies on learning a bilingual text-image representation (Ko and Gu, 2022; Bianchi et al., 2021), and multilingual text-image representation (Aggarwal and Kale, 2020a). In the realm of multi-language models, MURAL(Jain et al., 2021), a dual-tower model, employs contrastive learning between multilanguage text and text-image pairs to expand the paradigm of multi-modal learning. It was trained on large-scale private data obtained through web crawling, including more than 6 billion translation pairs and 1.8 billion image-caption pairs. Carlsson et al. (2022) proposed a way to utilize Teacher Learning (a.k.a. Knowledge Distillation) (Hinton et al., 2015) to train a new textual encoder from the original CLIP model with only machinetranslated parallel data. Although this method achieves promising cross-lingual retrieval performances with only text data, its zero-shot classification performance in English drops significantly.
In the domain of Chinese text-image pretraining models, prior work includes Taiyi (Wang et al.,
2022), CN-CLIP (Yang et al., 2022), Wukong (Gu et al., 2022), R2D2 (Xie et al., 2022) and BriVL
(Huo et al., 2021; Fei et al., 2021). These methods often need large-scale Chinese text-image pairs and suffer from a significant performance decline in English tasks.
XLM-R (Conneau et al., 2020) is a multilingual language model that achieves strong performances on a wide range of cross-lingual tasks. In our work, we use the XLM-R model as the underlying text encoder and align it with the image encoder trained in CLIP, to achieve competitive performances on cross-lingual and cross-modality tasks.
Knowledge distillation. In knowledge distillation, the teacher-student architecture is a generic carrier to form knowledge transfer. The model capacity gap between a large deep neural network and a small student neural network can degrade knowledge transfer.(Mirzadeh et al., 2020; Gao et al.,
2021). To effectively transfer knowledge to student networks, a variety of methods have been proposed for a controlled reduction of the model complexity(Crowley et al., 2018; Liu et al., 2019; Wang et al., 2018). In this work, we use a multilingual model XLM-R as a student model for effectively transferring multilingual knowledge.
![2_image_0.png](2_image_0.png)
## 3 Methodology
We propose a two-stage method to learn a multilingual multimodal representation model. In the first stage, we follow the work of Carlsson et al. (2022) to use Teacher Learning to learn a multilingual text encoder from the CLIP text encoder. In this step, no image is needed in training and only language parallel data is used. In the second stage, we use text-image pairs to further fine-tune the model from contrastive learning. Our overall training procedure is summarized in Figure 1.
## 3.1 Teacher Learning Stage
In this stage, we perform Teacher Learning (Hinton et al., 2015) on text encoders. We use the text encoder from CLIP (Radford et al., 2021)
as the teacher text encoder, and the XLM-R (Conneau et al., 2020) model pretrained on multilingual data as the student text encoder. A fully-connected layer is added to transform the output of the XLMR model into the same output dimension as the teacher encoder. We use parallel text data between English and other languages * to distill the knowledge of text-image alignment.
Given parallel text input (sent1*, sent*2), the teacher text encoder generates the learning target from input *sent*1, which is the embedding of the
[TOS] token, denoted by x ttos. The student text encoder generates embedding x s cls from input *sent*2.
We minimize Mean Squared Error (MSE) between x ttos and x s cls. After such training, the student text encoder can keep most of its multilingual capability and obtain text-image alignment capability in both languages. Note that the teacher encoder is only used at training time. At inference time, only the student encoder is used as the text encoder.
*We also include English-English text pairs as parallel text To show that our method is extensible in including more languages, we build a multilingual version (AltCLIPM9) and a bilingual version
(AltCLIPM2). AltCLIPM9 supports nine different languages: English(EN), Chinese(CN), Spanish(ES), French(FR), Russian(RU), Arabic(AR),
Japanese(JA), Korean(KO), and Italian(IT). For the bilingual version (AltCLIPM2), we align Chinese with English, with the same concept and architecture as in the multilingual version.
## 3.2 Contrastive Learning Stage
This stage of training aims to further improve textimage alignment by contrastive learning on multilingual text-image pairs. As illustrated in Figure 1, here we use the image encoder from CLIP which is based on Vision Transformer (ViT) (Dosovitskiy et al., 2020) as our image encoder, and use the student text encoder learned from the Teacher Learning Stage as our text encoder.
We use Contrastive Loss (Hadsell et al., 2006)
between the output projection of the image encoder and text encoder, as done similarly in previous work (Radford et al., 2021). We follow LiT (Zhai et al., 2022) to freeze the image encoder at training time and only update the parameters in the text encoder. We observe that this stage of training further improves the model's performance on various evaluation benchmarks, as presented in Section 5.
## 4 Model Training 4.1 Training Datasets
In this section, we describe the training datasets used in our two-stage training schema.
Teacher Learning Stage We only use the parallel corpus to align the original CLIP text
![3_image_0.png](3_image_0.png)
encoder and XLM-R text encoder. The parallel corpus consists of a recall-based corpus and a machine-translated corpus translated by MBART(Tang et al., 2020). We use the same amount of data for each language, which contains 5M recall-based parallel data collected from OPUS(Tiedemann, 2012)
†, 10M machinetranslated data from LAION(Schuhmann et al.,
2021)
‡and 3M machine-translated data from Conceptual Captions (CC3M)(Sharma et al., 2018). We use TSL2019(5M)(Xu, 2019) as parallel data for the training of AltCLIPM2.
Contrastive Learning Stage We use unfiltered text-image pair data in this stage. For AltCLIPM9, we randomly selected 7 million textimage pairs for each language from the LAION2BMulti(Schuhmann et al., 2022). For AltCLIPM2, we only employed half a million text-image pairs for each language in training.
## 4.2 Implementation Details
We initialize our text encoder from XLM-R*Large* and use the text encoder from CLIP*V iT* −L14 as the teacher text encoder. We use the image encoder from CLIP*V iT* −L14 as our image encoder. In the Teacher Learning stage, we trained for 27 hours using 11×8 NVIDIA A100-SXM4-40GB GPUs. In the Contrastive Learning stage, we continued training for an additional 12 hours using 8 NVIDIA
A100-SXM4-40GB GPUs. Detailed training settings can be found in Appendix A.3.
## 5 Experiments
We present experimental results in this section. In Section 5.1, we introduce the datasets and metrics used. We comprehensively validate our model through multilingual multimodal benchmarks in Section 5.2. In Section 5.3, we conduct an ablation study on the effects of various design choices in Teacher Learning and Contrastive Learning. Finally, in Section 5.4, we apply AltCLIP to textimage generation, and show that our model is capable to align text in different languages.
## 5.1 Evaluation Datasets And Metrics
In this section, we describe the datasets and metrics used. We use ImageNet (Deng et al.,
2009) and its four out-of-distribution test variants, i.e. ImageNet Sketch (Wang et al., 2019),
ImageNet-A (Hendrycks et al., 2021b), ImageNetR (Hendrycks et al., 2021a), ImageNetV2 (Recht et al., 2019), to evaluate zero-shot image classification performances in English(Radford et al., 2021),
Chinese, Japanese, Italain and Korean§. We adapt templates of manual prompts from CLIP for English and the corresponding machine translation templates for Chinese and Korean. For Japanese and Italian, the templates are collected from the same sources with the translated class names.
For cross-modal retrieval, we evaluate AltCLIPM9 on the XTD (Aggarwal and Kale, 2020b) dataset and Multi30k (Elliott et al., 2016).
| Lan. | Method | Txt-Img Data | IN-Adv. | IN-Ren. | IN-Ske. | IN-1K | IN-V2 | avg. |
|-----------|----------|----------------|-----------|-----------|-----------|---------|-------------|--------|
| English | M-CLIP | - | 59.1 | 81.6 | 44.2 | 52.3 | 47.4 | 56.9 |
| OpenCLIP | 50× | 53.9 | 87.5 | 63.3 | 75.3 | 67.7 | 69.5 | |
| AltCLIPM9 | 1× | 69.8 | 87.2 | 58.4 | 74.0 | 67.6 | 71.4(+1.9) | |
| Chinese | M-CLIP | - | 50.9 | 68.4 | 36.2 | 43.0 | 39.6 | 47.6 |
| CN-CLIP | 25× | 43.3 | 78.1 | 47.3 | 53.3 | 48.1 | 54.0 | |
| AltCLIPM9 | 1× | 61.2 | 82.4 | 48.4 | 59.6 | 54.0 | 61.1(+7.1) | |
| Japanese | M-CLIP | - | 21.8 | 44.5 | 24.6 | 26.9 | 24.2 | 28.4 |
| JA-CLIP† | NA | 21.2 | 50.9 | 25.1 | 50.7 | 43.5 | 38.3 | |
| AltCLIPM9 | 1× | 52.7 | 75.6 | 46.7 | 55.0 | 50.3 | 56.1(+17.8) | |
| Italian | M-CLIP | - | 51.8 | 72.9 | 38.3 | 43.0 | 38.9 | 49.0 |
| IT-CLIP† | 0.7× | 10.5 | 27.2 | 16.5 | 21.9 | 19.4 | 19.1 | |
| AltCLIPM9 | 1× | 56.7 | 78.2 | 45.9 | 55.3 | 50.4 | 57.3(+8.3) | |
| Korean | M-CLIP | - | 20.9 | 39.3 | 22.1 | 25.2 | 22.8 | 26.0 |
| KELIP† | 100× | 19.4 | 53.1 | 26.6 | 33.7 | 30.3 | 32.6 | |
| AltCLIPM9 | 1× | 51.1 | 72.9 | 44.8 | 55.2 | 50.5 | 54.9(+22.5) | |
XTD is built from selecting 1K images from COCO (Lin et al., 2014), and translating the corresponding English Captions into 11 languages.
¶. The Multi30k dataset is a collection of multilingual image captions that provides translations of captions in English, German, French, and Czech for 29,000 images. We select Flickr30k (Young et al., 2014), COCO, as well as their corresponding Chinese datasets, Flickr30kCN (Lan et al., 2017), COCOCN
|| (Li et al., 2019), to evaluate zero-shot image-to-text retrieval and text-to-image retrieval performances on Chinese.
We further evaluated our model on a wide range of English tasks to compare its performance with the original CLIP model. We used datasets introduced in CLIP and the Open CLIP benchmark** and "Image Classification in the Wild (ICinW)" dataset from the ELEVATER
benchmark (Li et al., 2022), including Birdsnap (Berg et al., 2014), Caltech-101 (Fei-Fei et al., 2006), Stanford Cars (Krause et al., 2013),
CIFAR-10 (Krizhevsky et al., 2009), CIFAR100 (Krizhevsky et al., 2009), Country211 (Radford et al., 2021), DTD (Cimpoi et al., 2014),
EuroSAT (Helber et al., 2019), Facial Emotion Recognition 2013 (Goodfellow et al., 2013), FGVC
Aircraft (Blaschko et al., 2012), Oxford Flow-
¶English(EN), German(DE), French(FR), Chinese(CN),
Japanese(JA), Italian(IT), Spanish(ES), Russian(RU), Polish(PL), Turkish(TR), Korean(KO)
||There are two versions: texts in the 1k version(COCOCNa) are manually written captions while in the 5k version (COCOCNb) are manually translated captions
**https://github.com/LAION-AI/CLIP_
benchmark ers 102 (Nilsback and Zisserman, 2008), Food101 (Bossard et al., 2014), GTSRB (Stallkamp et al., 2011), Kinetics400 (Kay et al., 2017), Kinetics600 (Carreira et al., 2018), MNIST (Cire¸san et al., 2011), PatchCamelyon (Veeling et al.,
2018), ObjectNet (Barbu et al., 2019), Oxford-IIIT
Pets (Parkhi et al., 2012), Rendered SST2 (Radford et al., 2021), RESISC45 (Cheng et al., 2017),
STL-10 (Coates et al., 2011), SUN397 (Xiao et al.,
2010), UCF101 (Soomro et al., 2012), Pascal VOC
2007 Classification (Everingham, 2007), Pascal VOC 2007 Multilabel Classification (Everingham, 2007), KITTI-Distance (Fritsch et al., 2013) and hateful-memes (Kiela et al., 2020).
The evaluation metrics for image classification benchmarks are accuracy (default), mean per class
(the average of recall obtained on each category, for imbalanced datasets, such as FGVC Aircraft, Oxford-IIIT Pets, Caltech-101, Oxford Flowers 102), 11-point mAP (mean average of 11-pt interpolated precision for each class, for VOC 2007), and mean(top1, top5) (the mean of acc@1 and acc@5, for Kinetics400 and Kinetics600). For cross-modal retrieval benchmarks, we use Recall@K where K ∈ {1, 5, 10}, and Mean Recall (the average of Recall@K) for both image-to-text retrieval and textto-image retrieval tasks, which are the same as the setups in CLIP (Radford et al., 2021).
## 5.2 Zero-Shot Performance
Image Classification We first present evaluation results of zero-shot image classification on the ImageNet dataset and its four out-of-distribution
| Model | XTD | Multi30K | | | | | | | | | | |
|-------------------|-------|------------|------|------|------|------|------|------|------|------|-------|-------|
| En | Es | Fr | Zh | It | Ko | Ru | Jp | En | Fr | De | Cs | |
| Base Model | | | | | | | | | | | | |
| CLIPV iT −B32 | 90.3 | - | - | - | - | - | - | - | - | - | - | - |
| mUSE PATR | 83.6 | 75.6 | 76.9 | 76.1 | 73.4 | 64.3 | 73.6 | 69.4 | - | - | - | - |
| mUSE m3 | 85.3 | 78.9 | 78.9 | 76.7 | 73.6 | 67.8 | 76.1 | 70.7 | - | - | - | - |
| UC2 | 65.2 | 56.5 | 59.7 | 60.1 | 57.7 | 50.2 | 50.9 | 50.5 | 66.6 | 60.4 | 62.5 | 55.1 |
| MLAV iT −B16 | 76.0 | 62.8 | 72.9 | 73.8 | 64.7 | 57.3 | 58.1 | 67.2 | 86.4 | 80.9 | 80.8 | 72.9 |
| ALIGNBASE | - | 88.8 | - | 86.5 | 87.9 | 76.6 | 82.3 | - | 84.3 | 78.3 | 78.9 | 71.1 |
| MURALBASE | - | 89.6 | - | 88.3 | 88.4 | 82.4 | 83.6 | - | 82.4 | 75.0 | 76.2 | 64.6 |
| M-CLIP‡ V iT −B32 | 91.8 | 89.1 | 89.4 | 89.3 | 89.8 | 82.1 | 86.1 | 81.0 | 80.4 | 71.1 | 71.4 | 67.7 |
| Large Model | | | | | | | | | | | | |
| CLIPV iT −L14 | 91.8 | - | - | - | - | - | - | - | 87.7 | - | - | - |
| M-CLIP‡ V iT −L14 | 92.4 | 91 | 90 | 89.7 | 91.1 | 85.2 | 85.8 | 81.9 | 87.8 | 82.5 | 83.1 | 81.3 |
| MURALLARGE | - | 92.9 | - | 89.7 | 91.8 | 88.1 | 87.2 | - | 89.2 | 83.1 | 83.5 | 77.0 |
| AltCLIPM9 | 93.3 | 92.2 | 91.1 | 92.2 | 91.9 | 91.5 | 89.2 | 89.1 | 89.9 | 85.2 | 65.5† | 36.6† |
| Average | Caltech-101 | Cars | CIFAR-10 | CIFAR-100 | Country211 | DTD | EuroSAT | FER2013 | FGVC-Aircraft | Flowers | Food101 | GTSRB | hateful-memes | KITTI-Distance | MNIST | PCAM | Pets | Rendered-SST2 | RESISC45 | VOC2007 |
|---------------------------------------------------------------------------------------------------------------------|----------------------------------------------------------------------------------------------------------|--------|------------|-------------|--------------|-------|-----------|-----------|-----------------|-----------|-----------|---------|-----------------|------------------|---------|--------|--------|-----------------|------------|-----------|
| M-CLIP | 53.5 81.4 53.5 93.8 72.6 22.5 41.2 62.0 47.7 7.3 26.3 68.8 42.5 53.0 28.7 60.1 51.3 49.9 65.6 62.0 79.7 | | | | | | | | | | | | | | | | | | | |
| CLIP | 64.9 86.6 77.3 95.6 75.8 31.9 55.4 60.0 49.9 31.9 79.1 93.1 50.6 56.0 21.8 76.4 52.0 93.6 69.0 64.5 77.4 | | | | | | | | | | | | | | | | | | | |
| AltCLIP-M9 66.1 87.5 75.2 95.7 79.0 31.7 57.3 60.3 56.8 29.6 71.5 92.3 49.2 57.2 25.5 70.5 63.3 93.8 74.7 69.8 80.5 | | | | | | | | | | | | | | | | | | | | |
variants. For baselines, we compare our model with OpenCLIP (Radford et al., 2021), CN-CLIP
(Yang et al., 2022), KELIP (Ko and Gu, 2022),
IT-CLIP (Bianchi et al., 2021), JA-CLIP ( , 2022)
and multilingual CLIP (M-CLIP) (Carlsson et al.,
2022). As illustrated in Table 1, AltCLIPM9 outperforms OpenCLIP in English and sets new state-of-the-art results on ImageNet, ImageNetA, ImageNet-R, and ImageNet V2 in Chinese, Japanese, Korean and Italian. These results demonstrate the effectiveness of our method in expanding the language ability of CLIP. Compared to Chinese/Korean baseline models where hundreds of millions of text-image pairs are used in pretraining, we only use 18M parallel text data and 7M
text-image pairs (per language) in training.
Multilingual Cross-modal Retrieval We compare our model with CLIP, M-CLIP (Carlsson et al., 2022), mUSE (Yang et al., 2020),
UC2(Zhou et al., 2021), MLA (Zhang et al.,
2022), ALIGN (Jia et al., 2021) and MURAL (Jain et al., 2021). The results of the comparison on Multi30k(Elliott et al., 2016) and XTD (Aggarwal and Kale, 2020b) are shown in Table 2, where AltCLIPM9 achieves state-of-the-art results in 7 languages and outperforms the original CLIP
model in English. This superior performance of our model is likely due to the use of higher-quality parallel corpora during the Teacher Learning stage, which effectively eliminates potential bias from machine translation. Additionally, we utilize contrastive learning to further align the text and image representation, which is crucial for downstream tasks. We will discuss this in more detail in Section 5. We also provide additional cases in Appenix A.4.
| Flickr30k COCO Flickr30kCN COCOCNa COCOCNb |
|----------------------------------------------|
| Dataset | Method | Text-to-Image Retrival | Image-to-Text Retrival | MR | | | |
|------------|----------|--------------------------|--------------------------|------|------|------|------|
| R@1 | R@5 | R@10 | R@1 | R@5 | R@10 | | |
| CLIP | 65.0 | 87.1 | 92.2 | 85.1 | 97.3 | 99.2 | 87.6 |
| Taiyi | 25.3 | 48.2 | 59.2 | 39.3 | 68.1 | 79.6 | 53.3 |
| CN-CLIP | 49.5 | 76.9 | 83.8 | 66.5 | 91.2 | 96.0 | 77.3 |
| AltCLIP-M2 | 72.5 | 91.6 | 95.4 | 86.0 | 98.0 | 99.1 | 90.4 |
| AltCLIP-M9 | 69.8 | 90.8 | 94.2 | 86.6 | 97.8 | 99.2 | 89.7 |
| CLIP | 36.5 | 61.1 | 71.1 | 56.4 | 79.5 | 86.5 | 65.2 |
| Taiyi | 11.7 | 27.8 | 37.4 | 19.8 | 42.1 | 54.3 | 32.2 |
| CN-CLIP | 26.1 | 50.0 | 61.3 | 40.9 | 65.8 | 76.3 | 53.4 |
| AltCLIP-M2 | 42.9 | 68.0 | 77.4 | 58.6 | 80.6 | 87.8 | 69.2 |
| AltCLIP-M9 | 40.5 | 65.2 | 74.9 | 58.7 | 81.2 | 88.3 | 68.2 |
| CLIP | 0 | 2.4 | 4.0 | 2.3 | 8.1 | 12.6 | 5.0 |
| Taiyi | 53.7 | 79.8 | 86.6 | 63.8 | 90.5 | 95.9 | 78.4 |
| Wukong† | 51.7 | 78.9 | 86.3 | 76.1 | 94.8 | 97.5 | 80.9 |
| R2D2† | 60.9 | 86.8 | 92.7 | 77.6 | 96.7 | 98.9 | 85.6 |
| CN-CLIP | 68 | 89.7 | 94.4 | 80.2 | 96.6 | 98.2 | 87.9 |
| AltCLIP-M2 | 69.8 | 89.9 | 94.7 | 84.8 | 97.4 | 98.8 | 89.2 |
| AltCLIP-M9 | 68.6 | 89.4 | 94.5 | 85.8 | 98.2 | 99.0 | 89.2 |
| CLIP | 0.6 | 4.1 | 7.1 | 1.8 | 6.7 | 11.9 | 5.4 |
| Taiyi | 52.0 | 80.2 | 89.6 | 46.6 | 76.3 | 88.6 | 72.2 |
| Wukong† | 55.2 | 81.0 | 90.6 | 53.4 | 80.2 | 90.1 | 75.1 |
| R2D2† | 63.3 | 89.3 | 95.7 | 56.4 | 85.0 | 93.1 | 80.5 |
| CN-CLIP | 63.7 | 88.7 | 94.4 | 61.0 | 84.7 | 93.6 | 81.0 |
| AltCLIP-M2 | 63.9 | 87.2 | 93.9 | 62.8 | 88.8 | 95.5 | 82.0 |
| AltCLIP-M9 | 60.6 | 86.3 | 93.4 | 66.2 | 88.9 | 96.2 | 81.9 |
| CLIP | 0.8 | 3.9 | 5.8 | 3.5 | 8.9 | 14.4 | 6.2 |
| Taiyi | 46.1 | 74.9 | 85.1 | 58.1 | 83.9 | 91.7 | 73.3 |
| CN-CLIP | 58.6 | 85.3 | 92.7 | 72.1 | 90.9 | 94.7 | 82.4 |
| AltCLIP-M2 | 61.3 | 86.0 | 93.2 | 77.8 | 94.4 | 97.5 | 85.0 |
| AltCLIP-M9 | 58.9 | 84.5 | 92.5 | 77.7 | 94.3 | 97.7 | 84.3 |
Full CLIP benchmark We present the evaluation results for a range of tasks in English in Figure 2. We compare the effectiveness of multilingual AltCLIPM9 and AltCLIPM9−T with the original CLIP. AltCLIPM9 outperforms CLIP, indicating that our method effectively fuses the abilities of CLIP and XLMR. We observed that at the Teacher Learning stage, the model already learns a good representation of text-image representation, as it achieves better average results than the original CLIP model on a range of zero-shot benchmarks.
The Contrastive Learning stage further improves the model's performance, particularly on retrieval tasks such as Flickr30k.
Task-level transferability We evaluated the transferability of AltCLIP for zero-shot image classification on the "Image Classification in the Wild (ICinW)" dataset from the ELEVATER benchmark (Li et al., 2022). ICinW is a publicly available benchmark to evaluate the large-scale tasklevel transferability of Vison Language models.
ICinW consists of a series of image classification datasets such as KITTI-Distance (Fritsch et al.,
2013) and hateful-memes (Kiela et al., 2020). As shown in Table 3, AltCLIPM9 achieved an average score of 66.1, outperforming the original CLIP
and achieving a 23.6% improvement compared to M-CLIP, demonstrating the effectiveness of our training strategy.
| Method | Multilingual | English | | | | | |
|----------|----------------|-----------|------|------|------|------|------|
| INs | IRs | TRs | INs | IRs | TRs | CinW | |
| MT | 47.8 | 54.2 | 63.5 | 71.5 | 57.6 | 73.2 | 66.1 |
| RB | 50.2 | 51.8 | 60.8 | 67.2 | 55.1 | 71.2 | 61.5 |
| MT+RB | 56.2 | 56.2 | 65.6 | 72.2 | 57.7 | 73.1 | 65.8 |
| MT+RB+CL | 58.4 | 60.6 | 68.7 | 71.4 | 60.8 | 74.8 | 66.1 |
Comparison with models trained from scratch.
We compare our model with the ones trained with hundreds of millions of text-image pairs: CLIP in English and R2D2 (Xie et al., 2022), Wukong
(Gu et al., 2022), Taiyi (Wang et al., 2022) and CN-CLIP (Yang et al., 2022) in Chinese. The results are shown in Table 4. AltCLIPM9 outperforms all baseline models including models trained with large-scale text-image pairs on most datasets and tasks. We notice that AltCLIPM2 outperforms CLIP on both text-to-image and image-to-text retrieval. This could be due to the following reasons: 1). We used a small subset (less than 1M)
of LAION 5B at the Contrastive Learning stage, which is in a different distribution of the pretraining data used in CLIP; 2). Our language encoder initialized from XLM-R provides better language understanding ability. We elaborate on the detailed results of Bilingual settings in Appendix A.2.
## 5.3 Ablation Study
We evaluate the effectiveness of our AltCLIPM9 by analyzing its major components in this section. We use CL to denote the Contrastive Learning stage, and MT and RB to denote the Machine-Translated and Recall-Based parallel data used in the Teacher Learning stage. We evaluate the variations of our models in English-only and in multilingual settings.
We use the average score on ImageNet series (INs),
Image Retrieval tasks (IRs), and Text Retrieval tasks (TRs) as evaluation metrics. Results in Table 5 show that excluding machine-translated data has a significant impact on performance, except for the multilingual ImageNet series tasks. Combining machine-translated and recall-based parallel data leads to a significant improvement in most tasks, indicating that the quality and diversity in training data are both important. Additionally, the Contrastive Learning stage significantly improves the model's performances on multilingual tasks, achieving 58.4 on multilingual INs, a 3.9% improvement.
## 5.4 Examples Of Text-To-Image Generation
In this section, we apply our model to the task of text-to-image generation to enable multilingual image generation, and to show the effect of language alignment in our model. We use the text encoder of AltCLIPM9 to fine-tune a Stable Diffusion model
(Rombach et al., 2022). We use stable-diffusion v1-4†† as initialization and AltCLIPM9 as the language encoder, and we freeze all parameters in the diffusion model except for the key and value projection layers of the cross-attention block during
††https://huggingface.co/CompVis/
stable-diffusion-v-1-4-original
![7_image_0.png](7_image_0.png)
fine-tuning. The dataset used for fine-tuning is the same one used for the Contrastive Learning stage as described in Section 4.1. As demonstrated in Fig. 3, our model generates high-quality images comparable to those generated by Stable Diffusion.
This is likely due to the reason that AltCLIPM9 achieves competitive performance in English with CLIP, where the latter is used in the original Stable Diffusion model. Additionally, we observe that our model generates similar images for translated English and Chinese prompts, demonstrating the effect of language alignment. More examples with images generated from different languages can be found in Appendix A.5.
## 6 Conclusion
In this work, we propose an effective two-stage training method for learning multilingual multimodal representation models, through teacher learning and contrastive learning. The effectiveness is demonstrated through extensive experiments on a wide range of tasks in multilingual multimodal benchmarks. AltCLIPM9 outperforms the original CLIP model on many tasks in English and sets new state-of-the-art zero-shot results on multiple image classification tasks in Chinese/Korean/Italian/Japanese and multilingual retrieval tasks. Meanwhile, our method is highly data-efficient, which consumes only around 1%
text-image pairs compared to the hundreds of millions of text-image pairs used by prior work on vision-language pretraining models.
## 7 Limitations
It's worth noting that this study has certain limitations. One of the limitations is the limited scope of the training data employed. The AltCLIP model is trained on open-source parallel corpora and publicly available unfiltered text-image pairs. A more careful study of the training data, i.e. filtering textimage pairs by relevance and text/image quality may help to further improve the overall performance of the model. Another limitation is the challenge of evaluating the model in a multilingual setting. Despite our best efforts to include as many benchmarks as possible and to translate from English datasets, the evaluation of the model's performance in other languages is not as comprehensive as it is in English. For example, there may be fewer tasks available such as OCR or action recognition in videos in other languages. In addition, the use of machine translation may introduce biases that could affect performance. Future research should focus on creating a more robust and scientifically rigorous multilingual evaluation framework.
## 8 Ethics Statement
The AltCLIP approach presents an innovative way of building robust multilingual multimodal representation models while minimizing the need for energy-intensive GPU training, promoting a more sustainable approach. Additionally, it allows for greater accessibility as it does not require extensive computational resources to implement. Furthermore, our model was trained using open-sourced data and our model is open-sourced to promote transparency and reproducibility. However, we have not carefully investigated the training data we used, such as LAION (Schuhmann et al., 2022).
The data may contain unsafe or biased text and/or images. It is important to note that models pretrained on it have the potential to reproduce sensitive training data. It is crucial to use this method responsibly and ethically to ensure it contributes to safe applications.
## References
Pranav Aggarwal and Ajinkya Kale. 2020a. Towards zero-shot cross-lingual image retrieval. arXiv preprint arXiv:2012.05107.
Pranav Aggarwal and Ajinkya Kale. 2020b. Towards zero-shot cross-lingual image retrieval.
Andrei Barbu, David Mayo, Julian Alverio, William Luo, Christopher Wang, Dan Gutfreund, Josh Tenenbaum, and Boris Katz. 2019. Objectnet: A largescale bias-controlled dataset for pushing the limits of object recognition models. Advances in neural information processing systems, 32.
Thomas Berg, Jiongxin Liu, Seung Woo Lee, Michelle L
Alexander, David W Jacobs, and Peter N Belhumeur.
2014. Birdsnap: Large-scale fine-grained visual categorization of birds. In Proceedings of the IEEE
Conference on Computer Vision and Pattern Recognition, pages 2011–2018.
Federico Bianchi, Giuseppe Attanasio, Raphael Pisoni, Silvia Terragni, Gabriele Sarti, and Sri Lakshmi. 2021. Contrastive language-image pretraining for the italian language. arXiv preprint arXiv:2108.08688.
Matthew Blaschko, Ross B Girshick, Juho Kannala, Iasonas Kokkinos, Siddarth Mahendran, Subhransu Maji, Sammy Mohammed, Esa Rahtu, Naomi Saphra, Karen Simonyan, et al. 2012. Towards a detailed understanding of objects and scenes in natural images.
Lukas Bossard, Matthieu Guillaumin, and Luc Van Gool.
2014. Food-101 - mining discriminative components with random forests. In European Conference on Computer Vision.
Fredrik Carlsson, Philipp Eisen, Faton Rekathati, and Magnus Sahlgren. 2022. Cross-lingual and multilingual clip. In *Proceedings of the Thirteenth Language* Resources and Evaluation Conference, pages 6848–
6854.
Joao Carreira, Eric Noland, Andras Banki-Horvath, Chloe Hillier, and Andrew Zisserman. 2018. A
short note about kinetics-600. *arXiv preprint* arXiv:1808.01340.
Soravit Changpinyo, Piyush Sharma, Nan Ding, and Radu Soricut. 2021. Conceptual 12m: Pushing webscale image-text pre-training to recognize long-tail visual concepts. In *Proceedings of the IEEE/CVF*
Conference on Computer Vision and Pattern Recognition, pages 3558–3568.
Xinlei Chen, Hao Fang, Tsung-Yi Lin, Ramakrishna Vedantam, Saurabh Gupta, Piotr Dollár, and C Lawrence Zitnick. 2015. Microsoft coco captions: Data collection and evaluation server. *arXiv preprint* arXiv:1504.00325.
Gong Cheng, Junwei Han, and Xiaoqiang Lu. 2017.
Remote sensing image scene classification: Benchmark and state of the art. *Proceedings of the IEEE*,
105(10):1865–1883.
Mircea Cimpoi, Subhransu Maji, Iasonas Kokkinos, Sammy Mohamed, and Andrea Vedaldi. 2014. Describing textures in the wild. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 3606–3613.
Dan C Cire¸san, Ueli Meier, Jonathan Masci, Luca M
Gambardella, and Jürgen Schmidhuber. 2011. Highperformance neural networks for visual object classification. *arXiv preprint arXiv:1102.0183*.
Adam Coates, Andrew Ng, and Honglak Lee. 2011.
An analysis of single-layer networks in unsupervised feature learning. In *Proceedings of the fourteenth international conference on artificial intelligence and* statistics, pages 215–223. JMLR Workshop and Conference Proceedings.
Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer, and Veselin Stoyanov. 2020. Unsupervised cross-lingual representation learning at scale. In ACL.
Elliot J Crowley, Gavin Gray, and Amos J Storkey. 2018.
Moonshine: Distilling with cheap convolutions. *Advances in Neural Information Processing Systems*,
31.
Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. 2009. Imagenet: A large-scale hierarchical image database. In *CVPR*, pages 248–
255. Ieee.
Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, et al. 2020.
An image is worth 16x16 words: Transformers for image recognition at scale. *arXiv preprint* arXiv:2010.11929.
Desmond Elliott, Stella Frank, Khalil Sima'an, and Lucia Specia. 2016. Multi30k: Multilingual english-german image descriptions. arXiv preprint arXiv:1605.00459.
Mark Everingham. 2007. The pascal visual object classes challenge,(voc2007) results. http://pascallin.ecs.soton.ac.uk/challenges/VOC/
voc2007/index.html.
Nanyi Fei, Zhiwu Lu, Yizhao Gao, Guoxing Yang, Yuqi Huo, Jingyuan Wen, Haoyu Lu, Ruihua Song, Xin Gao, Tao Xiang, et al. 2021. Wenlan 2.0: Make ai imagine via a multimodal foundation model. *arXiv* preprint arXiv:2110.14378.
Li Fei-Fei, Robert Fergus, and Pietro Perona. 2006.
One-shot learning of object categories. *IEEE transactions on pattern analysis and machine intelligence*,
28(4):594–611.
Jannik Fritsch, Tobias Kuehnl, and Andreas Geiger.
2013. A new performance measure and evaluation benchmark for road detection algorithms. In *16th* International IEEE Conference on Intelligent Transportation Systems (ITSC 2013), pages 1693–1700.
IEEE.
Mengya Gao, Yujun Wang, and Liang Wan. 2021.
Residual error based knowledge distillation. *Neurocomputing*, 433:154–161.
Ian J Goodfellow, Dumitru Erhan, Pierre Luc Carrier, Aaron Courville, Mehdi Mirza, Ben Hamner, Will Cukierski, Yichuan Tang, David Thaler, Dong-Hyun Lee, et al. 2013. Challenges in representation learning: A report on three machine learning contests.
In *International conference on neural information* processing, pages 117–124. Springer.
Jiaxi Gu, Xiaojun Meng, Guansong Lu, Lu Hou, Minzhe Niu, Hang Xu, Xiaodan Liang, Wei Zhang, Xin Jiang, and Chunjing Xu. 2022. Wukong: 100 million large-scale chinese cross-modal pre-training dataset and a foundation framework. arXiv preprint arXiv:2202.06767.
Raia Hadsell, Sumit Chopra, and Yann LeCun. 2006.
Dimensionality reduction by learning an invariant mapping. In 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition
(CVPR'06), volume 2, pages 1735–1742. IEEE.
Patrick Helber, Benjamin Bischke, Andreas Dengel, and Damian Borth. 2019. Eurosat: A novel dataset and deep learning benchmark for land use and land cover classification. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 12(7):2217–2226.
Dan Hendrycks, Steven Basart, Norman Mu, Saurav Kadavath, Frank Wang, Evan Dorundo, Rahul Desai, Tyler Zhu, Samyak Parajuli, Mike Guo, et al. 2021a. The many faces of robustness: A critical analysis of out-of-distribution generalization. In *Proceedings* of the IEEE/CVF International Conference on Computer Vision, pages 8340–8349.
Dan Hendrycks, Kevin Zhao, Steven Basart, Jacob Steinhardt, and Dawn Song. 2021b. Natural adversarial examples. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*,
pages 15262–15271.
Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. 2015.
Distilling the knowledge in a neural network. *stat*,
1050:9.
Yuqi Huo, Manli Zhang, Guangzhen Liu, Haoyu Lu, Yizhao Gao, Guoxing Yang, Jingyuan Wen, Heng Zhang, Baogui Xu, Weihao Zheng, et al. 2021.
Wenlan: Bridging vision and language by largescale multi-modal pre-training. *arXiv preprint* arXiv:2103.06561.
Aashi Jain, Mandy Guo, Krishna Srinivasan, Ting Chen, Sneha Kudugunta, Chao Jia, Yinfei Yang, and Jason Baldridge. 2021. Mural: multimodal, multitask retrieval across languages. *arXiv preprint* arXiv:2109.05125.
Chao Jia, Yinfei Yang, Ye Xia, Yi-Ting Chen, Zarana Parekh, Hieu Pham, Quoc Le, Yun-Hsuan Sung, Zhen Li, and Tom Duerig. 2021. Scaling up visual and vision-language representation learning with noisy text supervision. In International Conference on Machine Learning, pages 4904–4916. PMLR.
Will Kay, Joao Carreira, Karen Simonyan, Brian Zhang, Chloe Hillier, Sudheendra Vijayanarasimhan, Fabio Viola, Tim Green, Trevor Back, Paul Natsev, et al.
2017. The kinetics human action video dataset.
arXiv preprint arXiv:1705.06950.
Douwe Kiela, Hamed Firooz, Aravind Mohan, Vedanuj Goswami, Amanpreet Singh, Pratik Ringshia, and Davide Testuggine. 2020. The hateful memes challenge: Detecting hate speech in multimodal memes.
Advances in Neural Information Processing Systems, 33:2611–2624.
Byungsoo Ko and Geonmo Gu. 2022. Large-scale bilingual language-image contrastive learning. *arXiv* preprint arXiv:2203.14463.
Jonathan Krause, Michael Stark, Jia Deng, and Li FeiFei. 2013. 3d object representations for fine-grained categorization. In *4th International IEEE Workshop* on 3D Representation and Recognition (3dRR-13),
Sydney, Australia.
Alex Krizhevsky, Geoffrey Hinton, et al. 2009. Learning multiple layers of features from tiny images.
Weiyu Lan, Xirong Li, and Jianfeng Dong. 2017.
Fluency-guided cross-lingual image captioning. In Proceedings of the 25th ACM international conference on Multimedia, pages 1549–1557.
Chunyuan Li, Haotian Liu, Liunian Harold Li, Pengchuan Zhang, Jyoti Aneja, Jianwei Yang, Ping Jin, Yong Jae Lee, Houdong Hu, Zicheng Liu, et al.
2022. Elevater: A benchmark and toolkit for evaluating language-augmented visual models. *arXiv* preprint arXiv:2204.08790.
Xirong Li, Chaoxi Xu, Xiaoxu Wang, Weiyu Lan, Zhengxiong Jia, Gang Yang, and Jieping Xu. 2019.
Coco-cn for cross-lingual image tagging, captioning, and retrieval. *IEEE Transactions on Multimedia*,
21(9):2347–2360.
Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C Lawrence Zitnick. 2014. Microsoft coco:
Common objects in context. In *European conference on computer vision*, pages 740–755. Springer.
Iou-Jen Liu, Jian Peng, and Alexander G Schwing. 2019.
Knowledge flow: Improve upon your teachers. *arXiv* preprint arXiv:1904.05878.
Seyed Iman Mirzadeh, Mehrdad Farajtabar, Ang Li, Nir Levine, Akihiro Matsukawa, and Hassan Ghasemzadeh. 2020. Improved knowledge distillation via teacher assistant. In Proceedings of the AAAI conference on artificial intelligence, volume 34, pages 5191–5198.
Maria-Elena Nilsback and Andrew Zisserman. 2008.
Automated flower classification over a large number of classes. In *2008 Sixth Indian Conference on Computer Vision, Graphics & Image Processing*, pages 722–729. IEEE.
Omkar M Parkhi, Andrea Vedaldi, Andrew Zisserman, and CV Jawahar. 2012. Cats and dogs. In 2012 IEEE conference on computer vision and pattern recognition, pages 3498–3505. IEEE.
Maxime Portaz, Hicham Randrianarivo, Adrien Nivaggioli, Estelle Maudet, Christophe Servan, and Sylvain Peyronnet. 2019. Image search using multilingual texts: a cross-modal learning approach between image and text. *arXiv preprint arXiv:1903.11299*.
Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. 2021. Learning transferable visual models from natural language supervision. In International Conference on Machine Learning, pages 8748–8763.
PMLR.
Aditya Ramesh, Mikhail Pavlov, Gabriel Goh, Scott Gray, Chelsea Voss, Alec Radford, Mark Chen, and Ilya Sutskever. 2021. Zero-shot text-to-image generation. In International Conference on Machine Learning, pages 8821–8831. PMLR.
Benjamin Recht, Rebecca Roelofs, Ludwig Schmidt, and Vaishaal Shankar. 2019. Do imagenet classifiers generalize to imagenet? In *International Conference* on Machine Learning, pages 5389–5400. PMLR.
Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. 2022. Highresolution image synthesis with latent diffusion models. In *Proceedings of the IEEE/CVF Conference* on Computer Vision and Pattern Recognition, pages 10684–10695.
Christoph Schuhmann, Romain Beaumont, Richard Vencu, Cade Gordon, Ross Wightman, Mehdi Cherti, Theo Coombes, Aarush Katta, Clayton Mullis, Mitchell Wortsman, et al. 2022. Laion-5b: An open large-scale dataset for training next generation imagetext models. *arXiv preprint arXiv:2210.08402*.
Christoph Schuhmann, Richard Vencu, Romain Beaumont, Robert Kaczmarczyk, Clayton Mullis, Aarush Katta, Theo Coombes, Jenia Jitsev, and Aran Komatsuzaki. 2021. Laion-400m: Open dataset of clipfiltered 400 million image-text pairs. arXiv preprint arXiv:2111.02114.
Piyush Sharma, Nan Ding, Sebastian Goodman, and Radu Soricut. 2018. Conceptual captions: A cleaned, hypernymed, image alt-text dataset for automatic image captioning. In *Proceedings of the 56th Annual* Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2556–2565.
Khurram Soomro, Amir Roshan Zamir, and Mubarak Shah. 2012. Ucf101: A dataset of 101 human actions classes from videos in the wild. arXiv preprint arXiv:1212.0402.
Johannes Stallkamp, Marc Schlipsing, Jan Salmen, and Christian Igel. 2011. The german traffic sign recognition benchmark: a multi-class classification competition. In The 2011 international joint conference on neural networks, pages 1453–1460. IEEE.
Yuqing Tang, Chau Tran, Xian Li, Peng-Jen Chen, Naman Goyal, Vishrav Chaudhary, Jiatao Gu, and Angela Fan. 2020. Multilingual translation with extensible multilingual pretraining and finetuning.
Jörg Tiedemann. 2012. Parallel data, tools and interfaces in opus. In *Lrec*, volume 2012, pages 2214–
2218.
Bastiaan S Veeling, Jasper Linmans, Jim Winkens, Taco Cohen, and Max Welling. 2018. Rotation equivariant cnns for digital pathology. In *International Conference on Medical image computing and computerassisted intervention*, pages 210–218. Springer.
Haohan Wang, Songwei Ge, Zachary Lipton, and Eric P
Xing. 2019. Learning robust global representations by penalizing local predictive power. *Advances in* Neural Information Processing Systems, 32.
Hui Wang, Hanbin Zhao, Xi Li, and Xu Tan. 2018. Progressive blockwise knowledge distillation for neural network acceleration. In *IJCAI*, pages 2769–2775.
Junjie Wang, Yuxiang Zhang, Lin Zhang, Ping Yang, Xinyu Gao, Ziwei Wu, Xiaoqun Dong, Junqing He, Jianheng Zhuo, Qi Yang, Yongfeng Huang, Xiayu Li, Yanghan Wu, Junyu Lu, Xinyu Zhu, Weifeng Chen, Ting Han, Kunhao Pan, Rui Wang, Hao Wang, Xiaojun Wu, Zhongshen Zeng, Chongpei Chen, Ruyi Gan, and Jiaxing Zhang. 2022. Fengshenbang 1.0: Being the foundation of chinese cognitive intelligence.
CoRR, abs/2209.02970.
J. Xiao, J. Hays, K. A. Ehinger, A. Oliva, and A. Torralba. 2010. Sun database: Large-scale scene recognition from abbey to zoo. In 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pages 3485–3492.
Chunyu Xie, Heng Cai, Jianfei Song, Jincheng Li, Fanjing Kong, Xiaoyu Wu, Henrique Morimitsu, Lin Yao, Dexin Wang, Dawei Leng, et al. 2022. Zero and r2d2: A large-scale chinese cross-modal benchmark and a vision-language framework. *arXiv preprint* arXiv:2205.03860.
Bright Xu. 2019. Nlp chinese corpus: Large scale chinese corpus for nlp.
An Yang, Junshu Pan, Junyang Lin, Rui Men, Yichang Zhang, Jingren Zhou, and Chang Zhou. 2022. Chinese clip: Contrastive vision-language pretraining in chinese. *arXiv preprint arXiv:2211.01335*.
Yinfei Yang, Daniel Cer, Amin Ahmad, Mandy Guo, Jax Law, Noah Constant, Gustavo Hernandez Abrego, Steve Yuan, Chris Tar, Yun-hsuan Sung, Brian Strope, and Ray Kurzweil. 2020. Multilingual universal sentence encoder for semantic retrieval. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: System Demonstrations, pages 87–94, Online. Association for Computational Linguistics.
Peter Young, Alice Lai, Micah Hodosh, and Julia Hockenmaier. 2014. From image descriptions to visual denotations: New similarity metrics for semantic inference over event descriptions. Transactions of the Association for Computational Linguistics, 2:67–78.
Xiaohua Zhai, Xiao Wang, Basil Mustafa, Andreas Steiner, Daniel Keysers, Alexander Kolesnikov, and Lucas Beyer. 2022. Lit: Zero-shot transfer with locked-image text tuning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 18123–18133.
Liang Zhang, Anwen Hu, and Qin Jin. 2022. Generalizing multimodal pre-training into multilingual via language acquisition. *arXiv preprint arXiv:2206.11091*.
Mingyang Zhou, Luowei Zhou, Shuohang Wang, Yu Cheng, Linjie Li, Zhou Yu, and Jingjing Liu.
2021. Uc2: Universal cross-lingual cross-modal vision-and-language pre-training. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 4155–4165.
, . 2022. . In The 25th Meeting on Image Recognition and Understanding.
## A Appendix A.1 Classification On Imagenet Series
| Lan. Method IN-Adv. IN-Ren. IN-Ske. IN-1K IN-V2 avg. M-CLIP 54.4 75.4 39.3 41.3 45.7 51.2 ES Our 58.1 76.8 46.6 52.9 57.9 58.5 Imp. +3.7 +1.4 +7.3 +11.6 +12.2 +7.3 M-CLIP 50.3 71.6 38.3 40.8 44.8 49.2 FR Our 58.6 78.1 47.9 53.3 58.4 59.2 Imp. +8.3 +6.5 +9.6 +12.5 +13.6 +10.0 M-CLIP 47.4 72.9 36.9 39.5 42.7 47.9 RU Our 50.7 76.1 44.9 49.4 54.4 55.1 Imp. +3.3 +3.2 +8.0 +9.9 +11.7 +7.2 M-CLIP 46.2 61.7 31.2 32.4 35.7 41.4 AR Our 53.9 70.7 41.0 44.8 49.4 52.0 Imp. +7.7 +9.0 +9.8 +12.4 +13.7 +10.6 |
|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
As shown in Table 6, our proposed AltCLIPM9 outperforms M-CLIP on Spanish, French, Russian, and Arabic on the ImageNet series datasets.
## A.2 Effects Of English-English Data
We present results from ablation studies on AltCLIPM2. We show the significance of including various parallel data in the Teacher Learning stage in Table 7. As illustrated in the 3rd and 5th lines, without English-to-English parallel data, the accuracy on English ImageNet drastically drops to 15.47 from 53.8. Similarly, excluding machinetranslated English-to-Chinese data, has a great impact on the performances on Chinese benchmarks, i.e. ImagenetCN and Flickr30KCN , due to influenced Chinese text-image representation. Moreover, empirical experiments show that introducing recall-based parallel data leads to a great improvement in ImagenetCN which may be related to the distribution of the data set. This indicates that the diversity of training data used for teacher learning can benefit the language model to gain more knowledge about entities or concepts.
## A.3 Hyper-Parameters
As shown in Table 8, we set the hyper-parameters for bilingual and multilingual AltCLIP training.
Table 8: Hyper-parameters setting in Teacher Learning Stage and Contrastive Learning Stage.
## A.4 Examples For Multilingual Cross-Modal Retrival
As illustrated in Tab. 9, our AltCLIPM9 can recall the accurate results.
| Hyper-paramters | TL | CL |
|----------------------|---------------|---------------|
| Batch size | 11264 | 1024 |
| Optimizer (AdamW, β) | (0.99, 0.999) | (0.99, 0.999) |
| Learning rate | 2e-4 | 2e-6 |
| Weight decay | 2e-1 | 5e-2 |
| Eps | 1e-8 | 1e-8 |
| Warmup steps | 500 | 2000 |
| #Epochs | 10 | 1 |
| Gradient clipping | 1.0 | 5.0 |
| Steps | 146500 | 2000 |
## A.5 Examples For Text-Image Generation
We show more examples generated from our AltCLIPM9 guided diffusion model: we use the same prompt and translate it into different lanugages and present the results in Tab. 10. One can observe that the model generates similar images but with subtle differences for different languages.
| EN-EN | | EN-CNMT | EN-CNRB | CL | Flickr30KEN | Flickr30KCN | ImageNetEN | ImageNetCN |
|---------|------|-----------|-----------|------|---------------|---------------|--------------|--------------|
| ! | ! | ! | ! | 90.4 | 89.2 | 74.5 | 59.6 | |
| ! | ! | ! | 88.3 | 87.2 | 74.7 | 58.2 | | |
| ! | ! | 86.8 | 85.8 | 51.6 | 41.7 | | | |
| ! | 86.6 | 53.9 | 53.8 | 12.8 | | | | |
| ! | 61.9 | 85.4 | 15.5 | 42.5 | | | | |
| Image | En | Pred. |
|-----------------------------------------------------------------------------------------------------------------|------|---------|
| a vegetarian sandwich , cut in half is on a red plate | 83.2 | |
| a hoagie sandwich with several vegetables and turkey on it | 12.4 | |
| the sandwich is in half on the table next to pickle slices | 2.8 | |
| a plate with salad , chips , and large white bread sandwiches with meat | 0.9 | |
| soup , a sandwich , a pickle , and some chips are all on a plate | 0.4 | |
| a cow appears to run while two men on horses wearing hats are seen with lassos | 48.9 | |
| a man pulling two cows by ropes with a lot of people gathered together | 41.3 | |
| horses are running with their faces very close to each other | 2.7 | |
| a man on a horse landing on the backside of an obstacle | 2.3 | |
| it is always fun to have a good friend along for the ride | 1.1 | |
| Fr | | |
| un sandwich végétarien , coupé en deux est sur une plaque rouge . | 84.6 | |
| un sandwich hoagie avec plusieurs légumes et dinde sur elle . | 8.9 | |
| le sandwich est dans la moitié sur la table à côté de tranches de cornichons . | 5.7 | |
| Une assiette avec de la salade , des chips et d'énormes sandwichs de pain blanc avec de la viande . | 0.5 | |
| soupe , un sandwich , un cornichon , et certaines puces sont tous sur une plaque . | 0.1 | |
| two giraffes standing on all fours next to one another with grass , bushes and trees around them | 63.7 | |
| Deux giraffes regardent autour pendant que l'autre se penche pour manger . | 23.2 | |
| une mère et un bébé girafe dans les arbres bordé d'un parc animalier . | 7.6 | |
| une girafe est debout à côté d'un arbre comme d'autres girafes marchent derrière eux . | 1.5 | |
| une girafe dans une enceinte tord sa tête pour manger un peu de feuillage sur un poteau . | 1.4 | |
| Es | | |
| un sándwich vegetariano cortado por la mitad en un plato rojo | 57.0 | |
| sándwich con verduras y pavo | 36.5 | |
| sándwich cortado a la mitad sobre la mesa junto a rebanadas de pepinillos | 5.3 | |
| sándwich con pepinillo, queso, mostaza, ketchup y mahonesa en un plato con un tenedor | 0.4 | |
| un planto con ensalada, patatas y grandes sándwiches con carne | 0.3 | |
| tres tortitas con mantequilla en un plato amarillo con forma ovalada | 98.2 | |
| la comida en el plato en la mesa ya está lista para comerse | 1.1 | |
| mesa con platos de desayuno y bebidas | 0.2 | |
| varias personas sentadas a una mesa con platos con comida | 0.1 | |
| mesa llena de platos de comida y dos vasos con bebida | 0.1 | |
| It | | |
| sandwich vegetariano tagliato a metà su un piatto rosso | 86.2 | |
| mezzo sandwich sul tavolo accanto a fettine di sottaceto | 11.4 | |
| piatto con insalata, patatine e grandi sandwich di pane bianco con carne | 0.7 | |
| panino imbottito con verdure varie e tacchino | 0.6 | |
| sandwich con sottaceto, formaggio, senape, ketchup e maionese su un piatto con una forchetta | 0.5 | |
| cavallo bianco e marrone che bruca un prato verde | 89.2 | |
| cavallo bianco e marrone in piedi su un prato | 10.5 | |
| mucca bianca e marrone in un pascolo | 0.1 | |
| cavallo marrone che bruca l'erba in mezzo a un bosco | 0.1 | |
| cavallo con paraocchi legato a un palo in un parcheggio | 0.0 | |
| Table 9: A Case Study of multi-lingual retrieval results. We conduct a case study on the XTD dataset, using our | | |
Table 9: **A Case Study of multi-lingual retrieval results.** We conduct a case study on the XTD dataset, using our proposed model AltCLIP-M9 for text retrieval in one of four languages (English, French, Italian, Spanish) for each randomly selected image. Our model demonstrated consistent and satisfactory performance in object recognition and understanding spatial relationships across languages, with top 5 most similar texts retrieved from the dataset for each image. Bold text indicates ground truth and prediction (Pred.) value is in %.
![14_image_0.png](14_image_0.png)
Table 10: The images generated by AltCLIP M9 -guided Diffusion with the same prompt translated to nine languages
Prompts EN:clean simple line art of a cute little girl with short wavy curly hair. she is dressed as an astronaut. no background. well composed, clean coloring book page, beautiful detailed face. coloring book line art by artgerm and greg rutkowski and johanna basford and alphonse mucha ZH:一个可爱的小女孩的干净简单的线条艺 术 , 短波浪巻发 , 她打扮成宇航员 , 没有 育景。构图良好,涂色书页面干净,面部 细节优美。Artgerm\#IGreg Rutkowskibl R Johanna Basford fi l Alphonse Muchairs 着色书线条艺术 FR:Nettoyer I'art au trait simple d'une petite fille mignonne avec des cheveux. courts et bouclés ondulés. Elle est habillée en astronaute. Pas d' antecedents. Bien compose, page de livre à colorier propre, beau visage détaillé. Dessin de ligne de livre de coloriage par Artgerm et Greg Rutkowski et Johanna Basford et Alphonse Mucha SP:Arte de línea simple y limpia de una linda niña con cabello corto y rizado ondulado. Está vestida de astronauta. sin antecedentes. Bien compuesto, página de libro para colorear limpio, hermosa cara detallada. Coloring Book Line Art por Artgerm y Greg Rutkowski y Johanna Basford y Alphonse Mucha RU:чистая простая линия искусства милой маленькой девочки с короткими волнистыми выощимися волосами, она одета как астронавт. нет предыстории. хорошо составленная, чистая раскраска, красивое детализированное лицо.
раскраска линии артгерма и Грега Рутковски и Джоанны Басфорд и Альфонса Мухи AR: ان الخط لنطيف انيسيل لفناة صطيرة نو نار مجمد رعش رعش رعش نام نام ي ز ي ر ا لك فضاء . أي خلقية . مزلفة بشكل جي ذ ، صفحة كثاب نظيفة ، وجه مقصل ج ةب رانملا ، كذاب تلوين فن المضلا برانمطة
, Johanna Basford , Greg Rutkowski ,
Alphonse Mucha JA: B いウェーブのかかった巻巻毛を持つ かわいい女の子の音れいなシンプルなラ イン アート 。 彼女は宇宙飛行士の格好を しています。 背景なし。 よく構成された
、されいな壇り捨べージ、美しい詳細な Mi, artgerm と greg rutkowski と johanna basford と alphonse mucha IC
よる査り線の線面 KO: 짧은 물질 모양의 곱슬머리를 가진 귀여운 소녀의 재못하고 단순한 라인 아트. 그녀는 우주 비행사 옷을 입고 있습니다. 배경이 없습니다. 잘 구성되고 재못한 철려링 복 페이지, 아름다운 디 테일한 얼굴. artgerm 实 greg rutkowski 및 johanna basford § alphonse mucha의 색 칠하기 책 라인 아트 IT: linea semplice e pulita di una bambina carina con capelli ricci corti e ondulati. è vestita da astronauta. nessuno sfondo, pagina del libro da colorare ben composta, pulita, bel viso dettagliato. disegno al tratto del libro da colorare di artgerm e greg rutkowski e johanna basford e alphonse mucha and a fixed seed.
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
✓ A1. Did you describe the limitations of your work?
7
✗ A2. Did you discuss any potential risks of your work?
Left blank.
✓ A3. Do the abstract and introduction summarize the paper's main claims?
Abstract and Section 1 (Introduction)
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✓ **Did You Use Or Create Scientific Artifacts?** 2, 3
✓ B1. Did you cite the creators of artifacts you used?
1,2,5 B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
No response.
✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
5
✗ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
Left blank.
✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
2, 5
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
5
## C ✓ **Did You Run Computational Experiments?** 2
C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used? No response.
The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance.
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
3
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
3
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
3
## D ✗ **Did You Use Human Annotators (E.G., Crowdworkers) Or Research With Human Participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
Not applicable. Left blank.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? Not applicable. Left blank.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
Not applicable. Left blank.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
Not applicable. Left blank.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
Not applicable. Left blank. |
liu-etal-2023-rhgn | {RHGN}: Relation-gated Heterogeneous Graph Network for Entity Alignment in Knowledge Graphs | https://aclanthology.org/2023.findings-acl.553 | Entity Alignment, which aims to identify equivalent entities from various Knowledge Graphs (KGs), is a fundamental and crucial task in knowledge graph fusion. Existing methods typically use triple or neighbor information to represent entities, and then align those entities using similarity matching. Most of them, however, fail to account for the heterogeneity among KGs and the distinction between KG entities and relations. To better solve these problems, we propose a Relation-gated Heterogeneous Graph Network (RHGN) for entity alignment. Specifically, RHGN contains a relation-gated convolutional layer to distinguish relations and entities in the KG. In addition, RHGN adopts a cross-graph embedding exchange module and a soft relation alignment module to address the neighbor heterogeneity and relation heterogeneity between different KGs, respectively. Extensive experiments on four benchmark datasets demonstrate that RHGN is superior to existing state-of-the-art entity alignment methods. | # Rhgn: Relation-Gated Heterogeneous Graph Network For Entity Alignment In Knowledge Graphs
Xukai Liu, Kai Zhang∗
, Ye Liu, Enhong Chen, Zhenya Huang Linan Yue, **Jiaxian Yan**
Anhui Province Key Laboratory of Big Data Analysis and Application, University of Science and Technology of China State Key Laboratory of Cognitive Intelligence
{chthollylxk,kkzhang0808,liuyer,lnyue,jiaxianyan}@mail.ustc.edu.cn;
{cheneh, huangzhy}@ustc.edu.cn
## Abstract
Entity Alignment, which aims to identify equivalent entities from various Knowledge Graphs
(KGs), is a fundamental and crucial task in knowledge graph fusion. Existing methods typically use triples or neighbor information to represent entities, and then align those entities using similarity matching. Most of them, however, fail to account for the heterogeneity among KGs and the distinction between KG
entities and relations. To better solve these problems, we propose a Relation-gated Heterogeneous Graph Network (RHGN) for entity alignment in knowledge graphs. Specifically, RHGN contains a relation-gated convolutional layer to distinguish relations and entities in the KG. In addition, RHGN adopts a cross-graph embedding exchange module and a soft relation alignment module to address the neighbor heterogeneity and relation heterogeneity between different KGs, respectively. Extensive experiments on four benchmark datasets demonstrate that RHGN is superior to existing state-of-theart entity alignment methods.
## 1 Introduction
Knowledge Graphs (KGs), which are sets of triples like (head entity, relation, *tail entity*), have been widely constructed (Sevgili et al., 2022; Wang et al.,
2023) and applied (Liu et al., 2020a; Zhang et al.,
2022, 2021) in various fields in recent years, such as DBpedia (Lehmann et al., 2015) and YAGO (Rebele et al., 2016). In the real world, a single KG
is usually incomplete as limited sources can be collected by one KG. From this perspective, *entity alignment*, which aims to determine equivalent entities from various KGs, is a crucial task of knowledge graph fusion and is being increasingly researched (Sun et al., 2020c; Chen et al., 2022).
Specifically, entity alignment is a task to find equivalent entities with the same color across two KGs, as illustrated in Figure 1. As the neighbors
∗corresponding author.
![0_image_0.png](0_image_0.png)
and relations of the same entity in various KGs are often different, also known as the heterogeneity problem, it is time-consuming to find aligned entities manually. To align the entities efficiently, many embedding-based methods have been proposed. Traditional methods (Chen et al., 2017; Zhu et al., 2017) follow the translational principle, such as TransE (Bordes et al., 2013), to represent entity embedding, which consider the triples but disregard the local neighbors. Recently, many methods
(Wang et al., 2018; Sun et al., 2020b) have adopted the Graph Convolutional Network (GCN) and its variants to capture local neighbor information due to the GCNs' remarkable ability (Welling and Kipf, 2016; Velickovic et al., 2017; Wu et al., 2021, 2023). Additionally, researchers have proposed some models to utilize relations as weights (Cao et al., 2019) or information (Mao et al., 2021; Yu et al., 2021) in the GCN-based framework. Despite this, the following two primary challenges have been encountered by the vast majority of prior methods when attempting to use relation information to solve KG heterogeneity:
First, relations should not be directly incorporated into entity representation, since confusing relations with entities leads to smooth entity representations. In DBpedia, there are 4,233,000 entities but only 3,000 relations, making the same relation often established between various entities (e.g.,
Country in Figure 1(b)). To separate relations from entities, R-GCN (Schlichtkrull et al., 2018) learns relation matrices but numerous relations bring trouble for parameter optimization (Vashishth et al.,
2019). Therefore, existing models (Nathani et al.,
2019; Mao et al., 2020) employ vectors to represent relations and apply simple functions (e.g., subtraction and projection) as the neighbor message functions. However, these simple functions barely distinguish relations from entities and still bring much noise to entity representation.
Second, due to KG heterogeneity, it is challenging to unify the semantic representations between KGs during the alignment process. Specifically, KG heterogeneity includes (1) neighbor heterogeneity and (2) relation heterogeneity. Neighbor heterogeneity indicates that the same entity in different KGs have different neighbors. As illustrated in Figure 1, neighbor heterogeneity is reflected in that *Da Vinci* have different neighbors in two KGs, which may make us mistakenly match *Da Vinci* in KG1 with *Florence Cathedral* in KG2 as they have more identical neighbors. Relation heterogeneity means that the relation between the same entity pair can be expressed in various ways, even though these relations have similar intentions. As Figure 1 shows, relation heterogeneity is expressed as that the relation between *Da Vinci* and Italy is *Nationality* in KG1, while it is *Citizenship* in KG2, which causes trouble for aligning these triples though they have the similar meaning.
To tackle these obstacles, we propose a Relationgated Heterogeneous Graph Network (RHGN) for entity alignment. Specifically, we first propose a novel Relation Gated Convolution (RGC) to make entity representations more discriminative. RGC
uses relations as signals to control the flow of neighbor information, which separates relations from entities and avoids noise flowing into entities in representation learning. Second, to tackle the neighbor heterogeneity between two KGs, we devise Crossgraph Embedding Exchange (CEE) to propagate information via aligned entities across different KGs, thereby unifying the entity semantics between two KGs. Third, we design Soft Relation Alignment
(SRA) to deal with the relation heterogeneity. SRA
leverages entity embedding to generate soft labels for relation alignment between KGs, hence reducing the semantic distance of similar relations across KGs. Finally, extensive experiments on four realworld datasets demonstrate the effectiveness of our proposed method. The source code is available at https://github.com/laquabe/RGHN.
## 2 Related Works 2.1 Entity Alignment
Entity alignment is a fundamental task in knowledge graph study. It seeks to recognize identical entities from different KGs (Sun et al., 2020c; Chen et al., 2020). To efficiently find identical entities, embedding-based models have been extensively studied. Traditional models, such as MtransE (Chen et al., 2017), used translation-based models (e.g., TransE (Bordes et al., 2013)) to make the distance between aligned entities get closer. Following this thought, IPTransE (Zhu et al., 2017),
JAPE (Sun et al., 2017), and BootEA (Sun et al.,
2018) constrained models from semantic space, attributes, and labels, respectively. Traditional models, however, neglect neighbor structures in favor of triples.
Inspired by the great success of Graph Neural Networks (GNNs), numerous methods (e.g, GCN-Align (Wang et al., 2018), AliNet (Sun et al., 2020b)) employed the GNNs and the variants to capture local neighbor information (Zeng et al., 2021). Since the knowledge graph contains abundant relations, RDGCN (Wu et al., 2019a),
RSN4EA (Guo et al., 2019), and Dual-AMN (Mao et al., 2021) utilized relations as weights, paths, and projection matrices in GNNs. RREA (Mao et al., 2020) proposed a unified framework for entity alignment using relations. IMEA (Xin et al.,
2022) encoded neighbor nodes, triples, and relation paths together with transformers. Unfortunately, they have not paid enough attention to the differences between entities and relations, and ignored semantic differences between different graphs due to KG heterogeneity.
Relation alignment, meantime, greatly aids in entity alignment. MuGNN (Cao et al., 2019) and ERMC (Yang et al., 2021) directly used the relation alignment labels but relation alignment labels are scarce in the real world. RNM (Zhu et al.,
2021) and IMEA (Xin et al., 2022) applied postprocessing to relation alignment with statistical features. However, post-processing can mine limited aligned relations. HGCN-JE (Wu et al., 2019b)
jointly learned entity alignment and relation alignment, which incorporated neighbor relations into
![2_image_0.png](2_image_0.png)
entities. Unfortunately, non-aligned entities may also have similar neighbor relations, which means relation alignment and entity alignment should be separated. Therefore, effective relation alignment methods remain to be explored.
## 2.2 Graph Convolutional Network
Graph Convolutional Networks (GCNs) generalize convolution operations from traditional data
(e.g., images or grids) to non-Euclidean data structures (Defferrard et al., 2016). The fundamental idea of graph convolutional networks is to enhance node self-representation by using neighbor information. Therefore, GCNs are typically expressed as a neighborhood aggregation or message-passing scheme (Gilmer et al., 2017).
In the broad application of GCNs, GCN (Welling and Kipf, 2016) and GAT (Velickovic et al., 2017)
showed the powerful ability to capture neighbor information. Despite this, they performed poorly in KG representation as they ignored relations. To emphasize the essential role of relations in entity representation, R-GCN (Schlichtkrull et al., 2018)
used a matrix to represent each relation. However, massive relations in the knowledge graph make it challenging for the relation matrixes to be fully learned. Thus, most follow-up works used vectors to represent relations. For example, KBGAT (Nathani et al., 2019) concentrated the neighbor triples as information. CompGCN (Vashishth et al., 2019) leveraged the entity-relation composition operations from knowledge embedding methods like TransE (Bordes et al., 2013) as message.
KE-GCN (Yu et al., 2021) passed the gradient of the scoring function to the central node. Nevertheless, none of the above models takes account of the inequality of relations and entities. In contrast, our RHGN is able to make a clear distinction between relations and entities, resulting in more distinct entity representations.
## 3 Preliminaries
In this section, we formalize the problem of entity alignment and give some related definitions.
## 3.1 Problem Definition
In this paper, we formally define a KG as G =
(E, R, T), where E is the set of entities, R is the set of relations, and T = E × R × E is the set of triples like (*Florence, Country, Italy*) as illustrated in Figure 1. Without loss of generality, we consider the entity alignment task between two KGs, i.e., G1 = (E1, R1, T1) and G2 = (E2, R2, T2). The goal is to find the 1-to-1 alignment of entities SKG1,KG2 = {(e1, e2) ∈ E1 × E2|e1 ∼ e2},
where ∼ denotes the equivalence relation. To train the model, a small subset of the alignment S′KG1,KG2∈ SKG1,KG2 is given as the training data, and we call it seed alignment set.
## 3.2 Graph Convolutional Layers
Following previous works (Sun et al., 2020b; Guo et al., 2020; Xin et al., 2022), our RHGN model is built upon GCN framework (Welling and Kipf, 2016) to embed the entities E in KGs. Our model contains multiple stacked GCN layers, which enables entity embeddings to incorporate information from higher-order neighbors. The input for k-th GCN layer is an entity feature matrix, Ek =
![3_image_0.png](3_image_0.png)
{e k 1
, ek 2
, ..., ekn|e k i ∈ G}, where n is the number of entities in G. To update the embedding of entities in layer k, the GCN layer aggregates neighbor information, which can be formally described as:
e k+1 i = γ k(e k i, Aggj∈N(i)ϕ(e k i, ek j, rk i,j )) (1)
where N(i) is the neighbors of entity i, γ is the transformation function like MLP, Agg is the Aggregate function like sum, mean or max, and ϕ is the score function.
## 4 Rhgn: Relation-Gated Heterogeneous Graph Network
In this section, we first present an overview of our RHGN. Then we introduce the technical details of RHGN.
## 4.1 An Overview Of Rhgn
As shown in Figure 2, our approach contains four components: (a) Graph Data Preprocessing (GDP),
(b) Relation Gated Convolution (RGC), (c) Crossgraph Embedding Exchange (CEE), and (d) Soft Relation Alignment (SRA). Specifically, GDP first preprocesses graphs through two aspects: completing graphs by adding inverse relations and constructing the cross graph by exchanging aligned entities. Then, several RGC layers are devised to aggregate information in both original and cross graphs to get the representation of entities and relations. Meanwhile, CEE exchanges the embedding of original graphs and cross graphs between each RGC layer for efficient information propagation.
Finally, SRA employs the embedding of entities to produce soft labels for relation alignment and the embedding of entities and relations will be sent to the model loss for optimization.
## 4.2 Graph Data Preprocessing
In order to make better use of the relations and address heterogeneity, we first perform data preprocessing on graphs to make graphs more complete.
In detail, GDP contains two parts: Inverse Relation Embedding and Cross Graph Construction.
## 4.2.1 Inverse Relation Embedding
Since relations in KGs are normally unidirectional, following previous works (Sun et al., 2020b; Vashishth et al., 2019), we also add inverse relation to KGs. The inverse relation is defined as:
$$r_{i n v_{i}}=W_{i n v}r_{i},$$
$$\mathbf{\Pi}_{J}^{\dagger}$$
$$\mathbf{i})$$
rinvi = Winvri, (2)
where rinvi is the inverse relation of relation ri.
Winv is the weight matrix of inverse relation transformation. Therefore, we extend graphs as:
$$T^{\prime}=T\cup\{(t,r_{i n v},h)|(h,r,t)\in T\},$$
where (*h, r, t*) is the triple in the original graph.
## 4.2.2 Cross Graph Construction
As we discussed in Section 1, to address neighbor heterogeneity, in this part, we first construct cross graphs through the aligned entities in the seed alignment set for efficient information propagation across KGs. Specifically, as Figure 2(a)
shows, Cross Graph Construction generates cross graphs by exchanging the aligned entities in the seed alignment set S′KG1,KG2
. The entities E*cross* 1 in the cross graph G*cross* 1are defined as:
e
$$\circ_{1}^{c r o s s}=\left\{\begin{array}{l}{{e_{2}}}\\ {{e_{1}}}\end{array}\right.$$
e1 else.
Similarly, the entities E*cross*
2in the cross graph
$$\begin{array}{l}{{\mathrm{if}\quad}e_{1}\in S_{K G_{1},K G_{2}}^{\prime}\;a n d\,e_{1}\sim e_{2},}}\\ {{\mathrm{else}.}}\end{array}$$ (4)
G*cross*
2are defined as:
e
$$\circ_{2}^{c r o s s}=\left\{\begin{array}{l}{{e_{1}}}\\ {{e_{2}}}\end{array}\right.$$
e2 else.
Taking Figure 1 as an example, (Da Vinci, Citizenship, *Italy*) will be in cross KG2 as we exchange
Da Vinci in KG1 and *Leonardo da Vinci* in KG2.
if $e_{2}\in S_{K}G_{1},KG_{2}$ and $e_{2}\sim e_{1}$, else.
$$\smile e_{1},$$
Finally, the cross graphs G*cross*
1and G*cross*
2
are defined as G*cross*
1 = (E*cross*
1, R1, T
$\left(5\right)$.
## Cross
1) And
G*Cross*
2 = (E*Cross*
2, R2, T
Cross
2).The Embeddings Of
Entities And Relations Are Randomly Initialized. 4.3 Relation Gated Convolution
After getting the preprocessed graphs, in Figure 2(b), we use RGC to aggregate neighbors and relations to the central entity. As discussed in Section 1, directly incorporating relation into entity representation may introduce much noise. To tackle this, we separate the semantic space of relations and entities. Specifically, in figure 3, we use a non-linear activation function (σ2) as a gate to aggregate neighbors and relations. The gate treats relations as control signals to regulate the inflow of neighbor information. For the entity i at k-th layer e k i
, the embedding of entity i at k+1-th layer e k+1 i is computed as follows:
$$e_{i}^{k+1}=\sigma_{1}(\sum_{j\in N(i)}W_{e}^{k}(e_{j}^{k}\otimes\sigma_{2}(r_{i,j}^{k}))),\quad\mathbf{(6)}$$
where N(i) is the set of neighbors of entity i, and r k i,j is the relation from entity j to entity i, Wk e is the entity weight matrix of k-th layer, ⊗ denotes element-wise multiplication between vectors, σ1(·)
and σ2(·) are non-linear activation functions. We use tanh(·) for σ1(·) and *sigmoid*(·) for σ2(·).
Moreover, inspired by (Vashishth et al., 2019),
we also update the embedding of relations r k i,j as:
$$r_{i,j}^{k+1}=W_{r}^{k}r_{i,j}^{k},$$
where Wk r is the relation weight matrix of the k-th layer. In order to reduce the semantic gap between the two KGs, we share the weights of the RGCs between two graphs in each layer.
## 4.4 Cross-Graph Embedding Exchange
According to Section 4.2, we build the cross graph to address neighbor heterogeneity among different KGs. In this section, to make information propagation across KGs more efficient, we introduce a cross-graph embedding exchange method on both original and cross graphs to reduce the entity semantic distance between KGs. As illustrated in Figure 2(c), we exchange entity embeddings between the original graph and the cross graph at each intermediate layer. Formally, Ekand Ek cross represent the entity embedding of original graph and cross graph in k-th layer respectively, the k+1th layer can be computed as:
$$E^{k+1}=R G C(E_{c o r o s s}^{k},R^{k},G^{k},W^{k}),$$ $E_{c r o s s}^{k+1}=R G C(E^{k},R_{c o r o s s}^{k},G_{c r o s s}^{k},W^{k}).$
Compared with previous work (Cao et al., 2019)
that adds edges between aligned entities in the seed alignment set, CEE can effectively reduce the distance of information propagation across two KGs.
Taking the entity *Florence* in Figure 1 as an example, if we assume that *Italy* in two KGs is aligned, the information from *Florence* in KG1 can propagate to *Florence* in KG2 only through 3 edges and 2 nodes with the help of CEE. According to Huang et al. (2020), a shorter propagation distance spreads more information across two KGs, making the two graphs' entity semantics closer.
## 4.5 Soft Relation Alignment
As discussed in Section 1, relation heterogeneity also complicates entity alignment. Relation alignment, which seeks out mutually similar ties across KGs, is one direct method for resolving this problem. However, due to the lack of labels, we need to produce soft relation alignment labels by ourselves.
Inspired by prior works (Wu et al., 2019b; Zhu et al., 2021), we make use of entities to produce soft relation alignment labels as shown in Figure 2(d).
We define relation label embedding as:
r ′ = concat[ 1 Hr X ei∈Hr ei,1 Tr X ej∈Tr ej ], (10)
$$\left(7\right)$$
where Hr and Tr are the sets of head entities and tail entities of relaiton r, respectively. Then, the relation alignment label is defined as:
$$y_{i j}=\mathbb{I}(c o s(r_{i}^{\prime},r_{j}^{\prime})>\gamma),\qquad\qquad(11)$$
where γ is the hyperparameter of the threshold.
It is noteworthy that our method may either produce multiple alignment labels or no alignment labels for one relation since relation alignment does not obey 1-to-1 constraints. As shown in Figure 1, Nationality and *Famous People* in KG1 may be similar to *Citizenship* in KG2, while *Location* in KG2 has no similar relation KG1. This feature makes us decide to convert relation alignment task to a multi-label classification task in model loss.
## 4.6 Training
(8) (9) $\frac{1}{2}$
In this subsection, we introduce our loss components: the entity alignment loss and the relation alignment loss, which capture alignment information of entities and relations, respectively.
Dataset KG #Ent. #Rel. **#Rel tr.**
EN-FR EN 15,000 267 47,334
FR 15,000 210 40,864
EN-DE EN 15,000 215 47,676
DE 15,000 131 50,419
D-W DB 15,000 248 38,265
WD 15,000 169 42,746
D-Y DB 15,000 165 30,291
YG 15,000 28 26,638
## 4.6.1 Entity Alignment Loss
Following previous work (Sun et al., 2020b; Xin et al., 2022), we minimize the contrastive alignment loss to make the distance between the aligned entities as close as possible, while the distance between the non-aligned entities is very far. The alignment loss is defined as:
$${\mathcal{L}}_{1}=$$
L1 =X
$$\sum_{(i,j)\in A^{+}}||e_{i}-e_{j}||+\sum_{(i^{\prime},j^{\prime})\in A^{-}}\alpha_{1}[\lambda-||e_{i^{\prime}}-e_{j^{\prime}}||]-\tag{12}$$
where eiis the entity embedding concentration of all layers in the original graph and cross graph.
A− is the set of negative samples generated by truncated-ϵ negative sampling strategy, *|| · ||* denotes L2 distance. [·]+ = max(0, x), and we hope the distance of negative samples to be larger than a margin λ. α1 is a hyperparameter to keep the balance between positive and negative samples.
## 4.6.2 Relation Alignment Loss
As we mentioned in Section 4.5, we transform relation alignment into a multi-label classification task.
Consequently, we first calculate the cosine similarity of relations in the last layer between graphs:
$$x_{i j}=c o s(r_{i},r_{j}).$$
xij = cos(ri, rj ). (13)
Then, we use the soft labels produced in SRA to calculate the relation alignment loss, we adopt the multi-label soft margin loss:
$$\mathcal{L}_2=-\frac{1}{|R|}\sum_i(y_i\cdot log(\frac{1}{1+exp(-x_i)})\tag{1}$$ $$+(1-y_i)\cdot log\frac{exp(-x_i)}{1+exp(-x_i)}).$$ Finally, RHGH combines the two losses as:
(14)
$${\mathcal{L}}={\mathcal{L}}_{1}+\alpha_{2}{\mathcal{L}}_{2},$$
$$(15)$$
We implement our method through PyG (Fey and Lenssen, 2019) on Pytorch. We initialize the trainable parameters with Xavier initialization (Glorot and Bengio, 2010) and optimize loss with Adam (Kingma and Ba, 2015). As for hyper-parameters, we decide the important hyperparameters by grid search and keep them the same in all datasets. For example the number of RGCs' layers is 4, the hidden size of each layer is 256, the batch size is 256, and the learning rate is 0.001. We set α2 = 10 to keep the balance of alignment loss and semantic loss. We randomly sample 25 negative samples for each pre-aligned entity pair. After every 25 epochs, we resample 25 negative samples based on the CSLS (Lample et al., 2018) and resample 100 head and tail entities respectively to generate soft relation alignment labels. The threshold γ is 0.5, the negative sample distance margin λ is 1.5 and the negative sample weight α1 is 0.1.
Followed the previous work (Sun et al., 2020b; Xin et al., 2022), we also use early stopping to terminate training based on Hits@1 performance on the validation set with a patient of 25 epochs, and the maximum training epochs is 1000. According to most previous work, we report the Hits@1, Hits@5 and MRR (mean reciprocal rank) results to assess entity alignment performance. We conduct the experiments with 5-fold cross-validation to ensure the unbiased evaluation.
where α2 is a hyperparameter to keep the balance between entity alignment and relation alignment.
## 5 Experiments 5.1 Dataset
For the reliability and authority of experimental results, we use the dataset (V1) in OpenEA (Sun et al.,
2020c) for evaluation since it closely resembles the data distribution of real KGs. It contains two crosslingual settings extracted from multi-lingual DBpedia: English-French and English-German, as well as two monolingual settings among popular KGs:
DBpedia-Wikidata and DBpedia-YAGO. We use the setting that datasets contain 15K pairs of reference entity alignment and no reference relation alignment. Table 1 provides further information about the datasets. We adhere to OpenEA's dataset divisions, which use a 20% seed for training, 10%
for validation, and 70% for testing.
## 5.2 Implementation Details
| Dateset | EN_FR_V1 | EN_DE_V1 | D_W_V1 | D_Y_V1 | | | | | | | | | |
|-------------------|------------|------------|----------|----------|-------|-------|-------|-------|-------|-------|-------|-------|-------|
| Category | Method | H@1 | H@5 | MRR | H@1 | H@5 | MRR | H@1 | H@5 | MRR | H@1 | H@5 | MRR |
| MTransE | 0.247 | 0.467 | 0.351 | 0.307 | 0.518 | 0.407 | 0.259 | 0.461 | 0.354 | 0.463 | 0.675 | 0.559 | |
| IPTransE | 0.169 | 0.320 | 0.243 | 0.350 | 0.515 | 0.430 | 0.232 | 0.380 | 0.303 | 0.313 | 0.456 | 0.378 | |
| Triple-based | AlignE | 0.357 | 0.611 | 0.473 | 0.552 | 0.741 | 0.638 | 0.406 | 0.627 | 0.506 | 0.551 | 0.743 | 0.636 |
| SEA | 0.280 | 0.530 | 0.397 | 0.530 | 0.718 | 0.617 | 0.360 | 0.572 | 0.458 | 0.500 | 0.706 | 0.591 | |
| GCN-Align | 0.338 | 0.589 | 0.451 | 0.481 | 0.679 | 0.571 | 0.364 | 0.580 | 0.461 | 0.465 | 0.626 | 0.536 | |
| Neighbor-based | AliNet | 0.364 | 0.597 | 0.467 | 0.604 | 0.759 | 0.673 | 0.440 | 0.628 | 0.522 | 0.559 | 0.690 | 0.617 |
| HyperKA | 0.353 | 0.630 | 0.477 | 0.560 | 0.780 | 0.656 | 0.440 | 0.686 | 0.548 | 0.568 | 0.777 | 0.659 | |
| RSN4EA | 0.393 | 0.595 | 0.487 | 0.587 | 0.752 | 0.662 | 0.441 | 0.615 | 0.521 | 0.514 | 0.655 | 0.580 | |
| Relation-enhanced | KE-GCN | 0.408 | 0.670 | 0.524 | 0.658 | 0.822 | 0.730 | 0.519 | 0.727 | 0.608 | 0.560 | 0.750 | 0.644 |
| IMEA | 0.458 | 0.720 | 0.574 | 0.639 | 0.827 | 0.724 | 0.527 | 0.753 | 0.626 | 0.639 | 0.804 | 0.712 | |
| Ours | RHGN | 0.500 | 0.739 | 0.603 | 0.704 | 0.859 | 0.771 | 0.560 | 0.753 | 0.644 | 0.708 | 0.831 | 0.762 |
| Dateset | EN_FR_V1 | D_W_V1 | | | | |
|-----------|------------|----------|-------|-------|-------|-------|
| Method | H@1 | H@5 | MRR | H@1 | H@5 | MRR |
| GCN | 0.391 | 0.612 | 0.488 | 0.474 | 0.649 | 0.550 |
| GAT | 0.362 | 0.577 | 0.457 | 0.448 | 0.625 | 0.525 |
| R-GCN | 0.468 | 0.708 | 0.572 | 0.538 | 0.736 | 0.624 |
| CompGCN | 0.473 | 0.726 | 0.584 | 0.524 | 0.729 | 0.613 |
| RGC | 0.500 | 0.739 | 0.603 | 0.560 | 0.753 | 0.644 |
## 5.3 Benchmark Methods
To evaluate the effectiveness of RHGN, we compare it with the state-of-the-art supervised structurebased entity alignment methods. we use codes and parameters released by the authors and display the best results among reproduced results and reported results in original articles. In general terms, we can classify them as follows.
- **Triple-based Models.** These models focus on triple, they usually use TransE (Bordes et al., 2013) to represent entities and relations, including MTransE (Chen et al., 2017), IPTransE (Zhu et al., 2017), AlignE (Sun et al.,
2018), and SEA (Pei et al., 2019).
- **Neighbor-based Models.** These models emphasize neighbor information but ignore the relation information, they usually use GNNs to represent entities, including GCNAlign (Wang et al., 2018), AliNet (Sun et al.,
2020b), and HyperKA (Sun et al., 2020a).
- **Relation-enhanced Models.** These models take into account the importance of relation information and incorporate relation information into entity representations, including
RSN4EA (Guo et al., 2019), KE-GCN (Yu et al., 2021), and IMEA (Xin et al., 2022).
Our model and the above baselines all focus on the structural information of KGs. For a fair comparison, we disregard additional models that incorporate side information (e.g., attributes, entity names and descriptions) like RDGCN (Wu et al., 2019a), KDCoE (Chen et al., 2018) and AttrGNN (Liu et al., 2020b).
## 5.4 Experimental Results
The results of all methods on OpenEA datasets are shown in Table 2. In general, the RHGN model has achieved the best performance compared with these SOTA baselines. Specifically, our method outperforms the best-performing baseline (i.e., IMEA,
KE-GCN) on Hits@1 by 3%-6%, on MRR by 1%-5%, and on Hits@5 by 1%-3% (except for D_W_V1). Additionally, we discover some interesting phenomena as follows:
First, on all datasets, relation-enhanced models outperform neighbor-based models, and both outperform triple-based models. This fully demonstrates that relation information plays an important role in neighbor information aggregation. Second, our model has significant improvements on EN_DE_V1 and D_Y_V1, but the improvements of our model are relatively limited on EN_FR_V1 and D_W_V1, and we find that all baselines do not perform well on datasets EN_FR_V1 and D_W_V1. We believe that the semantic distance between the graphs in the two datasets is far apart, which makes it is hard to find aligned entities.
![7_image_0.png](7_image_0.png)
![7_image_1.png](7_image_1.png)
![7_image_2.png](7_image_2.png)
## 5.5 Ablation Study 5.5.1 Rgc'S Ability To Utilize Relations
To compare the ability to utilize relations of various convolutions, We replace the RGC
with re-tuned GNN variants GCN (Welling and Kipf, 2016), GAT (Velickovic et al.,
2017), R-GCN (Schlichtkrull et al., 2018), and CompGCN (Vashishth et al., 2019) with the same parameters. The results are shown in Table 3.
Among these models, our RGC also achieves the best performance, as GCN and GAT ignore the relations, while R-GCN and CompGCN can not take advantage of the relations well. Meanwhile, the result that R-GCN and CompGCN outperform GCN and GAT proves the essential role of relations in entity representation.
## 5.5.2 The Impact Of Different Heterogeneity
To verify the impact of different heterogeneity, figure 5 reports the performances after removing CEE
and SRA, respectively. We observe that both components contribute to performance improvement, demonstrating that each component design in our framework is reasonable. Meanwhile, the effects of the two components on different datasets are also different, implying that the impact of neighbor heterogeneity and relationship heterogeneity varies between different KGs.
## 5.6 The Distance Of Information Propagation
We explore the effect of RGC's layer number on model performance as layer numbers reflect the distance of information propagation. In Figure 6, we present the effect of RGC's layer numbers with 1 to 5 on EN_FR_V1. Obviously, RHGN with 4 layers achieves the best performance over all three metrics. When the number of layers exceeds 4, the performance decline as adding more layers allows the model to collect more distant neighbor data and adds noise during information propagation. We also observe that RHGN with 2 layers has a huge improvement over RHGN with 1 layer. We believe that due to the lack of exchange entity embedding, RHGN with 1 layer cannot obtain information from the other KGs, resulting in poor performance.
Then we calculate the shortest path length from the test set entities to the training set entities in the EN_FR_V1 dataset. The average and median of shortest path length are 1.5 and 1 in EN, and the length is 1.6 and 2 in FR. This shows that most entities need 3 to 4 hops to pass their own information to the aligned entity of another graph with CEE module. As a matter of fact, RHGN with 3 and 4 layers achieves similar performance and is ahead of other variants, which also verifies that our CEE module is effective.
## 5.7 Visualization Of Entity Embedding
For a more intuitive comparison of how our proposed model addresses heterogeneity across different KGs with other methods, we conduct visualization on the D_W_V1 datasets. Specifically, we perform dimensionality reduction on entity embedding of GCN, GAT, R-GCN, CompGCN, and RHGN with t-SNE (Van der Maaten and Hinton, 2008). Results are shown in Figure 4, where the same color means entities are in the same KG. Ideally, the entity distributions of two graphs should overlap as much as possible, and entity embeddings should be sparsely distributed.
From Figure 4, we find some phenomena as follows. First, entities represented by previous methods have obvious clustering in space, while incorporating relation can effectively alleviate the clustering. This phenomenon suggests that relations play an essential role in distinguishing entities and preventing over-smoothing. Second, all previous arts have significant space that is not aligned, which demonstrates that they are unable to bridge the semantic space gap caused by KG heterogeneity.
However, our RHGN model's entity embeddings are sparsely distributed in space and have a high degree of overlaps, making the model distinguish entities well and easily find aligned entities.
## 6 Limitations
Although we have demonstrated the superiority of our RHGH model compared to previous work on four real-world datasets, there are still two limitations that should be addressed in the future:
(1) As our RGC layer employs the whole graph to learn the embedding of entities and relations, like most GCN's frameworks, the computational resources and time required by our framework increase linearly with the size of KG. To make our RHGH model effective on the KG with millions of entities, it is desirable to apply some graph chunking techniques, such as Cluster-GCN (Chiang et al.,
2019), to reduce the size of the KG for our RHGH model to improve computational efficiency.
(2) Currently, our RHGH model treats each relation individually. However, relation paths consisting of multiple relations will contain more complex semantic information in KGs. Relation paths enable entities to obtain higher-order neighbor information, but it is also more difficult to align relational paths in different knowledge graphs. In future work, we will explore more efficient ways to utilize the relation path in entity alignment, such as the relation path matching in different KGs.
## 7 Conclusion
In this paper, we studied the problem of entity alignment and proposed the RHGN model, which could distinguish relation and entity semantic spaces, and further address heterogeneity across different KGs.
Specifically, we first designed a novel relationgated convolutional layer to regulate the flow of neighbor information through relations. Then, we proposed an innovative cross-graph embedding exchange module, which reduces the entity semantic distance between graphs to address neighbor heterogeneity. Finally, we devised a soft relation alignment module for the unsupervised relation alignment task, which solves the relation heterogeneity problem between graphs. Extensive experiments on four real-world datasets verified the effectiveness of our proposed methods. In future work, we will explore more ways to utilize the relation information in entity alignment, such as the relation path matching in different KGs.
## Acknowledgements
This research was partially supported by grants from the National Natural Science Foundation of China (Grants No. U20A20229, No. 62106244),
and the University Synergy Innovation Program of Anhui Province (No. GXXT-2022-042).
## References
Antoine Bordes, Nicolas Usunier, Alberto GarciaDuran, Jason Weston, and Oksana Yakhnenko.
2013. Translating embeddings for modeling multirelational data. *Advances in neural information processing systems*, 26.
Yixin Cao, Zhiyuan Liu, Chengjiang Li, Juanzi Li, and Tat-Seng Chua. 2019. Multi-channel graph neural network for entity alignment. In *Proceedings of the* 57th Annual Meeting of the Association for Computational Linguistics, pages 1452–1461.
Liyi Chen, Zhi Li, Yijun Wang, Tong Xu, Zhefeng Wang, and Enhong Chen. 2020. Mmea: entity alignment for multi-modal knowledge graph. In Proc. of KSEM.
Liyi Chen, Zhi Li, Tong Xu, Han Wu, Zhefeng Wang, Nicholas Jing Yuan, and Enhong Chen. 2022. Multimodal siamese network for entity alignment. In Proc.
of KDD.
Muhao Chen, Yingtao Tian, Kai-Wei Chang, Steven Skiena, and Carlo Zaniolo. 2018. Co-training embeddings of knowledge graphs and entity descriptions for cross-lingual entity alignment. In *Proceedings of* the 27th International Joint Conference on Artificial Intelligence, pages 3998–4004.
Muhao Chen, Yingtao Tian, Mohan Yang, and Carlo Zaniolo. 2017. Multilingual knowledge graph embeddings for cross-lingual knowledge alignment. In IJCAI.
Wei-Lin Chiang, Xuanqing Liu, Si Si, Yang Li, Samy Bengio, and Cho-Jui Hsieh. 2019. Cluster-gcn: An efficient algorithm for training deep and large graph convolutional networks. In *Proceedings of the 25th* ACM SIGKDD international conference on knowledge discovery & data mining, pages 257–266.
Michaël Defferrard, Xavier Bresson, and Pierre Vandergheynst. 2016. Convolutional neural networks on graphs with fast localized spectral filtering. *Advances* in neural information processing systems, 29.
Matthias Fey and Jan E. Lenssen. 2019. Fast graph representation learning with PyTorch Geometric.
In ICLR Workshop on Representation Learning on Graphs and Manifolds.
Justin Gilmer, Samuel S Schoenholz, Patrick F Riley, Oriol Vinyals, and George E Dahl. 2017. Neural message passing for quantum chemistry. In *International* conference on machine learning, pages 1263–1272.
PMLR.
Xavier Glorot and Yoshua Bengio. 2010. Understanding the difficulty of training deep feedforward neural networks. In Proceedings of the thirteenth international conference on artificial intelligence and statistics, pages 249–256. JMLR Workshop and Conference Proceedings.
Lingbing Guo, Zequn Sun, and Wei Hu. 2019. Learning to exploit long-term relational dependencies in knowledge graphs. In *International Conference on* Machine Learning, pages 2505–2514. PMLR.
Lingbing Guo, Weiqing Wang, Zequn Sun, Chenghao Liu, and Wei Hu. 2020. Decentralized knowledge graph representation learning. arXiv preprint arXiv:2010.08114.
Kexin Huang and Marinka Zitnik. 2020. Graph meta learning via local subgraphs. *Advances in Neural* Information Processing Systems, 33:5862–5874.
Diederik P Kingma and Jimmy Ba. 2015. Adam: A
method for stochastic optimization. In *ICLR (Poster)*.
Guillaume Lample, Alexis Conneau, Marc'Aurelio Ranzato, Ludovic Denoyer, and Hervé Jégou. 2018.
Word translation without parallel data. In *International Conference on Learning Representations*.
Jens Lehmann, Robert Isele, Max Jakob, Anja Jentzsch, Dimitris Kontokostas, Pablo N Mendes, Sebastian Hellmann, Mohamed Morsey, Patrick Van Kleef, Sören Auer, et al. 2015. Dbpedia–a large-scale, multilingual knowledge base extracted from wikipedia.
Semantic web, 6(2):167–195.
Ye Liu, Han Wu, Zhenya Huang, Hao Wang, Jianhui Ma, Qi Liu, Enhong Chen, Hanqing Tao, and Ke Rui. 2020a. Technical phrase extraction for patent mining:
A multi-level approach. In *2020 IEEE International* Conference on Data Mining (ICDM), pages 1142–
1147. IEEE.
Zhiyuan Liu, Yixin Cao, Liangming Pan, Juanzi Li, and Tat-Seng Chua. 2020b. Exploring and evaluating attributes, values, and structures for entity alignment.
In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP),
pages 6355–6364.
Xin Mao, Wenting Wang, Yuanbin Wu, and Man Lan.
2021. Boosting the speed of entity alignment 10×:
Dual attention matching network with normalized hard sample mining. In *Proceedings of the Web Conference 2021*, pages 821–832.
Xin Mao, Wenting Wang, Huimin Xu, Yuanbin Wu, and Man Lan. 2020. Relational reflection entity alignment. In Proceedings of the 29th ACM International Conference on Information & Knowledge Management, pages 1095–1104.
Deepak Nathani, Jatin Chauhan, Charu Sharma, and Manohar Kaul. 2019. Learning attention-based embeddings for relation prediction in knowledge graphs.
In *Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics*, pages 4710–
4723.
Shichao Pei, Lu Yu, Robert Hoehndorf, and Xiangliang Zhang. 2019. Semi-supervised entity alignment via knowledge graph embedding with awareness of degree difference. In *The World Wide Web Conference*,
pages 3130–3136.
Thomas Rebele, Fabian Suchanek, Johannes Hoffart, Joanna Biega, Erdal Kuzey, and Gerhard Weikum.
2016. Yago: A multilingual knowledge base from wikipedia, wordnet, and geonames. In International semantic web conference, pages 177–185. Springer.
Michael Schlichtkrull, Thomas N Kipf, Peter Bloem, Rianne van den Berg, Ivan Titov, and Max Welling.
2018. Modeling relational data with graph convolutional networks. In *European semantic web conference*, pages 593–607. Springer.
Özge Sevgili, Artem Shelmanov, Mikhail Arkhipov, Alexander Panchenko, and Chris Biemann. 2022.
Neural entity linking:: A survey of models based on deep learning. *Semantic Web*, 13(3):527–570.
Zequn Sun, Muhao Chen, Wei Hu, Chengming Wang, Jian Dai, and Wei Zhang. 2020a. Knowledge association with hyperbolic knowledge graph embeddings.
In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP),
pages 5704–5716.
Zequn Sun, Wei Hu, and Chengkai Li. 2017. Crosslingual entity alignment via joint attribute-preserving embedding. In *International Semantic Web Conference*, pages 628–644. Springer.
Zequn Sun, Wei Hu, Qingheng Zhang, and Yuzhong Qu.
2018. Bootstrapping entity alignment with knowledge graph embedding. In *IJCAI*, volume 18, pages 4396–4402.
Zequn Sun, Chengming Wang, Wei Hu, Muhao Chen, Jian Dai, Wei Zhang, and Yuzhong Qu. 2020b.
Knowledge graph alignment network with gated multi-hop neighborhood aggregation. In *Proceedings of the AAAI Conference on Artificial Intelligence*,
volume 34, pages 222–229.
Zequn Sun, Qingheng Zhang, Wei Hu, Chengming Wang, Muhao Chen, Farahnaz Akrami, and Chengkai Li. 2020c. A benchmarking study of embeddingbased entity alignment for knowledge graphs. *Proceedings of the VLDB Endowment*, 13(12).
Laurens Van der Maaten and Geoffrey Hinton. 2008.
Visualizing data using t-sne. Journal of machine learning research, 9(11).
Shikhar Vashishth, Soumya Sanyal, Vikram Nitin, and Partha Talukdar. 2019. Composition-based multirelational graph convolutional networks. In *International Conference on Learning Representations*.
Petar Velickovic, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Lio, and Yoshua Bengio.
2017. Graph attention networks. *stat*, 1050:20.
Kehang Wang, Qi Liu, Kai Zhang, Ye Liu, Hanqing Tao, Zhenya Huang, and Enhong Chen. 2023. Classdynamic and hierarchy-constrained network for entity linking. In Database Systems for Advanced Applications: 28th International Conference, DASFAA
2023, Tianjin, China, April 17–20, 2023, Proceedings, Part II, pages 622–638. Springer.
Zhichun Wang, Qingsong Lv, Xiaohan Lan, and Yu Zhang. 2018. Cross-lingual knowledge graph alignment via graph convolutional networks. In *Proceedings of the 2018 conference on empirical methods in natural language processing*, pages 349–357.
Max Welling and Thomas N Kipf. 2016. Semisupervised classification with graph convolutional networks. In *J. International Conference on Learning Representations (ICLR 2017)*.
Likang Wu, Zhi Li, Hongke Zhao, Qi Liu, Jun Wang, Mengdi Zhang, and Enhong Chen. 2021. Learning the implicit semantic representation on graphstructured data. In Database Systems for Advanced Applications: 26th International Conference, DASFAA 2021, Taipei, Taiwan, April 11–14, 2021, Proceedings, Part I 26, pages 3–19. Springer.
Likang Wu, Hongke Zhao, Zhi Li, Zhenya Huang, Qi Liu, and Enhong Chen. 2023. Learning the explainable semantic relations via unified graph topicdisentangled neural networks. ACM Transactions on Knowledge Discovery from Data.
Y Wu, X Liu, Y Feng, Z Wang, R Yan, and D Zhao.
2019a. Relation-aware entity alignment for heterogeneous knowledge graphs. In Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence. International Joint Conferences on Artificial Intelligence.
Y Wu, X Liu, Y Feng, Z Wang, and D Zhao. 2019b.
Jointly learning entity and relation representations for entity alignment. In *Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference* on Natural Language Processing (EMNLP-IJCNLP),
pages 240–249. Association for Computational Linguistics.
Kexuan Xin, Zequn Sun, Wen Hua, Wei Hu, and Xiaofang Zhou. 2022. Informed multi-context entity alignment. In *Proceedings of the Fifteenth ACM*
International Conference on Web Search and Data Mining, pages 1197–1205.
Jinzhu Yang, Ding Wang, Wei Zhou, Wanhui Qian, Xin Wang, Jizhong Han, and Songlin Hu. 2021. Entity and relation matching consensus for entity alignment.
In *Proceedings of the 30th ACM International Conference on Information & Knowledge Management*,
pages 2331–2341.
Donghan Yu, Yiming Yang, Ruohong Zhang, and Yuexin Wu. 2021. Knowledge embedding based graph convolutional network. In *Proceedings of the* Web Conference 2021, pages 1619–1628.
Kaisheng Zeng, Chengjiang Li, Lei Hou, Juanzi Li, and Ling Feng. 2021. A comprehensive survey of entity alignment for knowledge graphs. *AI Open*, 2:1–13.
Kai Zhang, Qi Liu, Hao Qian, Biao Xiang, Qing Cui, Jun Zhou, and Enhong Chen. 2021. Eatn: An efficient adaptive transfer network for aspect-level sentiment analysis. *IEEE Transactions on Knowledge* and Data Engineering, 35(1):377–389.
Kai Zhang, Kun Zhang, Mengdi Zhang, Hongke Zhao, Qi Liu, Wei Wu, and Enhong Chen. 2022. Incorporating dynamic semantics into pre-trained language model for aspect-based sentiment analysis. *arXiv* preprint arXiv:2203.16369.
Hao Zhu, Ruobing Xie, Zhiyuan Liu, and Maosong Sun. 2017. Iterative entity alignment via knowledge embeddings. In Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI).
Yao Zhu, Hongzhi Liu, Zhonghai Wu, and Yingpeng Du.
2021. Relation-aware neighborhood matching model for entity alignment. In *Proceedings of the AAAI Conference on Artificial Intelligence*, volume 35, pages 4749–4756.
| Dateset | EN_DE_V1 | D_Y_V1 | | | | |
|-----------|------------|----------|-------|-------|-------|-------|
| Method | H@1 | H@5 | MRR | H@1 | H@5 | MRR |
| GCN | 0.622 | 0.771 | 0.688 | 0.611 | 0.759 | 0.670 |
| GAT | 0.590 | 0.750 | 0.661 | 0.611 | 0.737 | 0.667 |
| R-GCN | 0.680 | 0.839 | 0.748 | 0.688 | 0.818 | 0.746 |
| CompGCN | 0.697 | 0.857 | 0.767 | 0.702 | 0.825 | 0.756 |
| RGC | 0.704 | 0.859 | 0.771 | 0.708 | 0.831 | 0.762 |
Table 4: Entity Alignment of Various Convolution Layers on datasets EN_DE_V1 and D_Y_V1
| Dateset | EN_DE_V1 | D_Y_V1 | | | | |
|-----------|------------|----------|-------|-------|-------|-------|
| Method | H@1 | H@5 | MRR | H@1 | H@5 | MRR |
| RHGN-CEE | 0.688 | 0.846 | 0.757 | 0.707 | 0.828 | 0.760 |
| RHGN-SRA | 0.689 | 0.850 | 0.758 | 0.709 | 0.829 | 0.762 |
| RHGN | 0.704 | 0.859 | 0.771 | 0.708 | 0.831 | 0.762 |
Table 5: Ablation Study of CEE and SRA on datasets EN_DE_V1 and D_Y_V1
| Num Neighbors | 1 | 2 | 3-4 | 5-6 | 7-9 | >9 |
|-----------------|-------|-------|-------|-------|-------|-------|
| H@1 | 0.445 | 0.467 | 0.500 | 0.545 | 0.561 | 0.601 |
Table 6: Results for Entities with Different Numbers of Neighbors on EN_FR_V1
## A Supplementary Experiments
We add some experimental results to demonstrate the effectiveness of our framework. Due to space limitations, we present the experimental results in detail in the appendix.
## A.1 Ablation Study On Other Datasets
To verify that our various modules in the RHGH
framework (including RGC, CEE, and SRA) are valid, we have presented the experimental results on datasets EN_FR_V1 and D_W_V1 in Table 3 and Figure 5. To fully verify that all our modules are also effective on datasets EN_DE_V1 and D_Y_V1, Table 4 shows the capability of our RGC
compared with other GCNs, while Table 5 proves the validity of CEE and SRA.
From Table 4 and Table 5, we find that RHGN
achieves the best performance among all variants on most metrics in all datasets, which is consistent with the experimental analysis in Section 5.5.
These experiments prove that all components are valid and non-redundant in the model.
## A.2 Sensitivity Analysis Of Other Parameters
In section 5.6, we have discussed how the number of layers affects the model performance and
![11_image_0.png](11_image_0.png)
found that the number of layers is determined by the distance of information propagation. Meanwhile, other hyper-parameters may also affect the performance of the model, such as relation alignment loss α2 and relation alignment label threshold γ. Figure 7 reports how these hyper-parameters affect the experiment results on D_W_V1. The effect of these hyper-parameters on model performance is slight and further illustrates the robustness of our RHGN framework. For other hyper-parameters
(e.g., negative sample distance margin λ and negative sample weight α1), we follow the previous works(like AliNet (Sun et al., 2020b)).
## A.3 Error Analysis
In order to explore the advantages of our RHGN
model, Table 6 shows the results for entities with different numbers of neighbors on EN_FR_V1. We observe that with the increase of neighbor number, the performance of our model improves significantly. In more detail, for entities with various neighbors, our RGC can better avoid the noise caused by multiple relations. However, for entities with fewer neighbors, there is not enough information for them to align. We will attempt to solve this problem by acquiring further neighbors (like relation paths) in future work.
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
✓ A1. Did you describe the limitations of your work?
Section 5.4, Section 6 and Section 7
✗ A2. Did you discuss any potential risks of your work?
We do not use huge models and our experiments are fair.
✓ A3. Do the abstract and introduction summarize the paper's main claims?
Section 1
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✓ **Did You Use Or Create Scientific Artifacts?** Section 5.1 And Section 5.2
✓ B1. Did you cite the creators of artifacts you used?
Section 5.1 and Section 5.2
✗ B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
They are public datasets and open source codes.
✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
Section 5.1
✗ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
Date contains no information that names or uniquely identifies individual people or offensive content.
✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
Section 5.1 and Section 5.2
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
Section 5.1
## C ✓ **Did You Run Computational Experiments?** Section 5
✗ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
We do not use huge models and efficiency is not our goal The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance.
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Section 5.2
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
ection 5.2 and Section 5.4
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
Section 5.2 D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
No response.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? No response.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
No response.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
No response.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
No response. |
jumelet-zuidema-2023-feature | Feature Interactions Reveal Linguistic Structure in Language Models | https://aclanthology.org/2023.findings-acl.554 | We study feature interactions in the context of feature attribution methods for post-hoc interpretability. In interpretability research, getting to grips with feature interactions is increasingly recognised as an important challenge, because interacting features are key to the success of neural networks. Feature interactions allow a model to build up hierarchical representations for its input, and might provide an ideal starting point for the investigation into linguistic structure in language models. However, uncovering the exact role that these interactions play is also difficult, and a diverse range of interaction attribution methods has been proposed. In this paper, we focus on the question which of these methods most faithfully reflects the inner workings of the target models. We work out a grey box methodology, in which we train models to perfection on a formal language classification task, using PCFGs. We show that under specific configurations, some methods are indeed able to uncover the grammatical rules acquired by a model. Based on these findings we extend our evaluation to a case study on language models, providing novel insights into the linguistic structure that these models have acquired. | # Feature Interactions Reveal Linguistic Structure In Language Models
Jaap Jumelet Willem Zuidema Institute for Logic, Language and Computation University of Amsterdam
{j.w.d.jumelet, w.zuidema}@uva.nl
## Abstract
We study *feature interactions* in the context of feature attribution methods for post-hoc interpretability. In interpretability research, getting to grips with feature interactions is increasingly recognised as an important challenge, because interacting features are key to the success of neural networks. Feature interactions allow a model to build up hierarchical representations for its input, and might provide an ideal starting point for the investigation into linguistic structure in language models. However, uncovering the exact role that these interactions play is also difficult, and a diverse range of interaction attribution methods has been proposed. In this paper, we focus on the question which of these methods most *faithfully* reflects the inner workings of the target models. We work out a grey box methodology, in which we train models to perfection on a formal language classification task, using PCFGs. We show that under specific configurations, some methods are indeed able to uncover the grammatical rules acquired by a model. Based on these findings we extend our evaluation to a case study on language models, providing novel insights into the linguistic structure that these models have acquired.1
## 1 Introduction
Feature attribution methods (FAMs) are a popular family of tools for explaining the behaviour of deep learning models, by explaining a prediction in terms of contributions of individual features
(Ribeiro et al., 2016; Lundberg and Lee, 2017).
There are many such methods proposed, and mathematical results (such as axiomatic approaches based on game theory) and theoretical frameworks
(such as Covert et al. (2021)'s 'Explaining by Removing') are starting to offer a good understanding of how different methods relate to one another.
However, there are also some important shortcomings. Perhaps most importantly, popular FAMs 1All code and data is available here: https://github.
com/jumelet/fidam-eval mostly ignore the existence of interactions between the effects of features on the prediction. This is problematic, because **Feature Interactions** are widely seen as a major factor in the success of neural networks (Goodfellow et al., 2016). This is all the more important in domains such as language and music processing, because feature interactions allow neural networks to model hierarchical representations of their input, which is considered a key design feature of language and music. To address these shortcomings, there is now an emerging literature on **feature interaction detection and attribution methods** (FIDAMs) that explain model predictions in terms of interacting features (Tsang et al., 2020; Janizek et al., 2021).
However, assessing the faithfulness of FIDAMs is even more challenging than assessing the faithfulness of feature attribution methods more generally
(Jacovi and Goldberg, 2021). In this paper, we present a systematic framework to characterise FIDAMs, and derive several new FIDAMs based on that framework. We then proceed with creating an evaluation pipeline that measures a FIDAM's ability to recover the structural rules for which we have good evidence that they play an important role in the target model's performance (Figure 1). We first test this on a set of small-scale formal language tasks, that provide stronger faithfulness guarantees. Finally, we present a case study of a large language model on the CoLA task for linguistic acceptability.
We find that the performance of FIDAMs is very variable, and that the performance on the smallscale formal language tasks may not be predictive of the performance of methods on the large-scale natural language task. This is an illustration of what we call the **Attribution Generalisation problem**.
We argue that this problem remains a key open problem in the study of explanation methods in general.
![1_image_0.png](1_image_0.png)
## 2 Related Work: Assessing Faithfulness
In this section we discuss related work on assessing the faithfulness of feature attribution methods
(FAMs). A model explanation ideally provides better insights into model behaviour. However, it is important that an explanation is faithful to the reasoning of the model, and not merely plausible to a researcher. Unfortunately, attribution models can yield vastly different outcomes (Neely et al., 2022).
Defining a notion of faithfulness itself is an ongoing debate, and it has been argued that we should not be aiming for a binary notion, but a graded one instead (Jacovi and Goldberg, 2021). To this end, various methodologies have been proposed to evaluate the faithfulness of explanation methods.
One research direction introduces metrics to evaluate faithfulness by quantifying the impact of features that were deemed to contribute the most by an attribution method. Hooker et al. (2019) does this by *retraining* a model on data from which the most contributing features have been removed.
DeYoung et al. (2020) provide a more direct measure, by quantifying changes in model predictions when only a subset of the most contributing features is fed to model. Atanasova et al. (2020) build on this notion, introducing a range of diagnostic metrics that capture various aspects of explanation quality including faithfulness, human rationale agreement, and explanation consistency. Jain et al.
(2020) ensure and evaluate faithfulness by only allowing a model access to the set of features that were deemed important by the explanation method, which has also been shown to improve model robustness (Wiegreffe et al., 2021; Ross et al., 2022).
Another line of work modifies the training data in such a way that we obtain guarantees of certain features the model must be paying attention to when making a prediction: e.g. by shuffling test data such that only part of the input resembles the statistics from the train set (Pörner et al., 2018), or by explicitly adding exploitable heuristics in the train set (Bastings et al., 2022; Adebayo et al., 2022). These two approaches could be characterised as *grey box* models: we adapt the data in such a way that we gain a degree of confidence what cues the model must be relying on, without having a full understanding of the model's internal reasoning. A *glass box* model, on the other hand, is a model whose behaviour is fully understood:
it's not derived by training a model on a task, but hand-crafted. Hao (2020) utilises such models to evaluate FAMs on formal language tasks, providing more robust guarantees on model behaviour.
Our own approach is related to the first line of research, making use of *grey box* models. Instead of evaluating FAMS, we evaluate FIDAMs, that provide more comprehensive insights into model reasoning. Deployment of such methods within NLP has been fairly limited, and as such evaluating their faithfulness in a language context has been an underexplored research topic.
## 3 A Framework For Characterising Fidams
Feature attribution methods typically decompose a model prediction into a sum of feature contributions (Sundararajan et al., 2017; Lundberg and Lee, 2017). A large contribution then indicates that this feature played an important role in a model's prediction. Although feature attributions can provide meaningful insights into the inner model dynamics, they paint a fairly limited picture of the model behaviour. Most importantly, **interactions** between features are lumped together, making it impossible to discern whether a large contribution of a feature stemmed from that feature alone, or from its interaction with neighbouring features. To address this, multiple methods have been proposed that decompose a model prediction into a sum of feature interactions, based on similar mathematical formalism as those of feature attributions.
Notation A neural network is represented as a single function f. The input to f is denoted as x, which consists of N input features. A partial input xS only consists of input features S ⊆ N. A value function v(xS) quantifies the model output on the partial input xS. Padding the missing features in xS with replacement features x′\S
is denoted as xS ∪ x′\S
. The attribution value of feature i is denoted as ϕi, and the interaction effect of a set of features I is denoted as ΓI.
Attribution Dimensions Attribution methods can generally be characterised along two dimensions (Covert et al., 2021): 1) how the method deals with feature removal, and 2) how the impact of removing a feature is quantified. FIDAMs are built on the same principles as FAMs, and can be categorised along the same two dimension. By discerning these two dimensions we can separately evaluate their impact on the faithfulness of the attribution method. Furthermore, we can combine feature removal procedures with influence quantification methods in order to obtain novel attribution methods, an observation that has also been made in the context of FIDAMs by Jiang and SteinertThrelkeld (2023), who, concurrent to our work, provide a general framework for characterising FIDAMs.
## 3.1 Feature Removal
It is not straight-forward to define the absence of a feature to a model's input. The main goal here is to replace the removed feature with a neutral baseline, that adequately represents the absence of the feature. Methods often make use of a neutral input feature, the **static baseline** x′, such as a zerovalued embedding or a pad token:
$$v(\mathbf{x}_{S})=f(\mathbf{x}_{S}\cup\mathbf{x}_{\backslash S}^{\prime})\qquad\qquad(1)$$
This may, however, lead to input that lies outside of the original input distribution (Kim et al., 2020).
The reason why this is problematic is that the model may behave erratically on such modified input, posing issues to the faithfulness of the explanation.
Instead of using a static baseline, we can also opt to use a baseline that is sampled from a background distribution (Datta et al., 2016). There exist two approaches to this procedure (Sundararajan and Najmi, 2020; Chen et al., 2020b). The **observational**
conditional expectation samples the baseline features from a distribution that is conditioned on the set of features that are still present in the input
(Frye et al., 2020; Aas et al., 2021):
$$v(\mathbf{x}_{S})=\mathbb{E}_{\mathbf{x}_{\backslash S}^{\prime}}\left[f(\mathbf{x}_{S}\cup\mathbf{x}_{\backslash S}^{\prime})\mid\mathbf{x}_{S}\right]$$
$$\mathrm{(2)}$$
i(2)
The **interventional conditional expectation** drops the conditional, and samples the baseline features from an independent distribution:
$$v(\mathbf{x}_{S})=\mathbb{E}_{\mathbf{x}_{\backslash S}^{\prime}}\left[f(\mathbf{x}_{S}\cup\mathbf{x}_{\backslash S}^{\prime})\right]$$
$$(3)$$
i(3)
There exist two motivations for the latter approach:
Lundberg and Lee (2017) drop the conditional expectation for computational reasons, allowing them to approximate the observational conditional expectation. Janzing et al. (2020) provide a perspective derived from causality theory, stating that the *intervention* of removing a feature should break the dependence between the baseline and remaining features, and hence conditioning on these features is fundamentally wrong.
The previous two methods sample baseline values for individual missing features, but we can also compute the expectation over the range of possible baselines. This yields the technique of **expected**
explanations (Erion et al., 2021), in which attributions with different static baselines are averaged out over a background distribution D:
$$\phi_{i}=\mathbb{E}_{\mathbf{x}^{\prime}\sim D}\left[\phi_{i}(\mathbf{x};\mathbf{x}^{\prime})\right]$$
$$(4)$$
(4)
## 3.2 Quantifying Feature Influence
The simplest method of quantifying the influence of a feature is expressed as the output difference after **ablating** the feature:
$$\phi_{i}=v(\mathbf{x})-v(\mathbf{x}_{\backslash i})$$
$$(5)$$
Note that this formulation can be combined with any of the feature removal methods: e.g. Occlusion
(Zeiler and Fergus, 2014) combines this influence method with a static baseline (Eq. 1), whereas Kim et al. (2020) combines it with the observational conditional expectation (Eq. 2), employing BERT
as the conditional distribution.
A more involved method leverages a technique from the field of game theory, called the **Shapley value** (Shapley, 1953). Shapley values were originally introduced in the domain of cooperative games, in which players can form coalitions to change the outcome of the game. This setup can be transferred directly to machine learning models, in which features now take up the role of the players.
A Shapley value expresses the contribution of a feature as the marginal gain of including that feature in the input, averaged over all possible coalitions of features.
## 4 Fidams
We now address a series of interaction methods that we use in our own experiments.
Group Ablation The feature influence principle of Equation 5 can straightforwardly be extended to groups of features. In our experiments we will focus on pairwise interactions, but any kind of feature subset can be used here.
$$\Gamma_{i,j}=v(\mathbf{x})-v(\mathbf{x}_{\backslash i j})$$
Γi,j = v(x) − v(x\ij ) (6)
Archipelago Explaining model behaviour in
terms of pairwise interactions will already yield a
better portrayal of its internal behaviour than 'flat'
attributions, but it neglects the interactions that occur within larger groups of features. Archipelago
(Tsang et al., 2020) splits up the feature interaction procedure into two phases: first an interaction
detection method is performed that clusters features into interaction sets, and afterwards interaction scores are assigned to these sets as a whole.
Interaction detection is based on measuring the nonadditive effect of pairs of features. The interaction
effect that is assigned to an interaction set I is expressed as follows, with respect to a static baseline
x′:
$$\Gamma_{\mathcal{I}}=f(\mathbf{x}_{\mathcal{I}}\cup\mathbf{x}_{\backslash\mathcal{I}}^{\prime})-f(\mathbf{x}^{\prime})$$
′) (7)
Note that Archipelago expresses the interaction effect inversely compared to the Group Ablation procedure: instead of measuring the impact of removing a group of features, we now measure the impact of solely keeping this group in the input.
Shapley(-Taylor) Interaction Index Both the previous methods base interaction effects on direct output differences. We can modify the formulation of the Shapley value to yield interaction effects.
This modification was originally introduced in the field of game theory, called the Shapley Interaction Index (SII, Owen, 1972; Grabisch and Roubens, 1999). Instead of computing the marginal gain that is achieved by a single feature, we now compute the marginal gain of *groups* of features. The ShapleyTaylor Interaction Index (STII, Sundararajan et al.,
2020) is an extension of SII, satisfying additional theoretical properties.
Hessian Analogous to utilising the gradient for feature attributions, we can employ the secondorder derivative to quantify interactions between features, which is captured by the Hessian matrix.
Friedman and Popescu (2008) and Sorokina et al.
(2008) consider an interaction between two variables to exist when the effect of one variable on the response depends on values of the other variable, which can be expressed in terms of the secondorder partial derivative:
$$\Gamma_{i,j}=\left[{\frac{\partial^{2}f(\mathbf{x})}{\partial x_{i}\partial x_{j}}}\right]^{2}$$
$$(6)$$
A common approach when using the gradient of a model as a proxy for feature importance is to multiply it with the input embeddings (Shrikumar et al.,
2017; Ancona et al., 2019): in our experiments we consider an analogous method to the Hessian that we call Hessian × **Input**.
Integrated Hessians Directly using the Hessian as explanation method is prone to the same caveats as using the gradient: the interactions signal may vanish due to saturation. Integrated Hessians (IH,
Janizek et al., 2021) address this issue by integrating over the Hessian manifold along a path between the input and a baseline. This is achieved by applying the method of Integrated Gradients (Sundararajan et al., 2017) to itself. An IH interaction between features i and j can hence be interpreted as the contribution of i to the contribution of j to the models prediction. The path integral between input and baseline is approximated via a Riemann sum interpolation.
Other Methods The methods explained thus far have all been incorporated in our experimental pipeline. The scope of our work focuses mainly on *pairwise* interactions, but methods that extract higher-order interactions have been proposed as well (Jin et al., 2020). Comparing such methods to linguistic structure is an exciting avenue that we leave open to future work. Other interaction methods that were not considered include two methods that preceded Archipelago: Neural Interaction Detection (Tsang et al., 2018a) and MAHE (Tsang et al., 2018b). The feature attribution method Contextual Decomposition (Murdoch et al., 2018) has been extended to extract interactions as well (Singh et al., 2019; Saphra and Lopez, 2020; Chen et al.,
2020a), but these methods place the constraint that only contiguous groups of features can interact. Integrated Directional Gradients (Sikdar et al., 2021),
an extension of Integrated Gradients to capture group attributions, could be adapted to our framework, but we leave this open for future work.
## 5 Evaluating Fidams
The final component of our framework is a methodology for evaluating the faithfulness of FIDAMs. To lay a robust foundation for such work, we propose to evaluate a range of interaction methods and baselines on smaller deep learning models (using LSTM and Transformer architectures) that have been trained to recognise formal languages, based on a probabilistic context-free grammar (PCFG).
Our models are trained on a binary language classification task, in which a model needs to learn to discern between well-formed strings and minimally corrupted counterparts. Models are trained to perfection (100% accuracy) on both train and test set.
To obtain perfect performance, a model must rely solely on the grammatical rules that underlie the language, without resorting to spurious heuristics, because only these results allow completely solving the task. This way, due to the controlled nature of the task, we obtain a high degree of confidence about the model's behaviour.
The goal of our experimental approach is to recover the structure of the language based on the trained model itself. This is achieved by the FIDAMs outlined in §4. We aim to uncover whether a structural dependency between two features results in a high interaction effect. Since our models have been trained to perfection, this allows us to employ our setup as a way of measuring the **faithfulness** of a FIDAM. A method that assigns a high interaction effect to features that contain a dependency in the original grammar is able to provide a faithful reflection of a model's understanding of the task. By testing a wide range of FIDAMs and baselines we can uncover which configuration yields the most faithful explanations. A graphical overview of our
## Approach Is Depicted In Figure 1.
Task The binary language classification task is set up by generating positive examples D+, based on some PCFG, and negative examples D−, derived from minimally corrupting the positive examples. We split the union of these two sets into a random train/test split of 80/20%. We train our models with a default cross-entropy loss, using the AdamW optimiser (Loshchilov and Hutter, 2019),
a learning rate of 0.01, and a batch size of 48.
Models Our pipeline permits the use of any kind of neural model architecture, in our experiments we considered both LSTMs (Hochreiter and Schmidhuber, 1997) and Transformers (Vaswani et al., 2017).
In our experiments we report the results of the LSTM model, but we observed similar results for Transformers: due to the black-box approach of our explanation procedure the architecture itself is not of great importance. The models are deliberately small: we use an embedding size that is equal to the number of symbols in the language it is trained on, a hidden state size of 20, and a single layer. This results in models that provide a compute-friendly test bed for evaluating the FIDAMs.
Evaluation We focus on *pairwise* interactions:
interactions between individual pairs of features.
A FIDAM that extracts pairwise interactions for an input sequence x ∈ R
N returns a matrix of interaction effects Γ ∈ R
N×N . Since our goal is to uncover whether structural dependencies result in high interaction effects, we approach the evaluation of the interaction matrix as a retrieval task. By aggregating and normalising the *rank* of each interaction of interest we can quantify the performance of a FIDAM. We call this metric the **Average Relative Rank** (ARR):
$$A R R(\Gamma,{\mathcal{I}})={\frac{1}{|{\mathcal{I}}|}}\sum_{i,j\in I}{\frac{R(\Gamma_{i})_{j}}{N-1}}\qquad(8)$$
where I denotes the set of interaction pairs of interest and R(Γi) denotes the rank of each interaction between feature i and the other features in input x (the lowest interaction is ranked 0, and the highest interaction is ranked N − 1). We aggregate these scores over an evaluation set to obtain a general performance score of the FIDAM. A graphical overview of this procedure is provided in Figure 2.
Baselines We consider a range of baselines in our experiments, based on the procedures explained
![5_image_1.png](5_image_1.png)
![5_image_0.png](5_image_0.png)
in §3.1. For the static baselines we consider a zero-valued baseline (x′ = 0), and a baseline that utilises a fixed mapping T based on the original input symbols (x′ = T(x)). Expected attributions are marginalised over samples from the distribution of well-formed strings D+ and corrupted strings D−. The interventional conditional expectation
(Eq. 3) is computed with a corpus-wide unigram distribution (P(xi)), a unigram distribution that is conditioned on the sentence position (P(xi|i)),
and as a joint distribution over the missing features
(P(x′\S
)), that we sample from the training corpus.
The observational conditional expectation (Eq. 2)
is computed based on the original corpus data.2
## 6 Experiments On Formal Languages
We apply the evaluation procedure of §5 to two formal languages: the Identity Rule language and the Dyck-2 language. In the appendix (§A) we also present results on a palindrome language.
## 6.1 Identity Rule
The first language we consider is a regular language
consisting of strings in which the first two symbols
are identical, followed by a random sequence of
symbols. The language is formed by the following grammar:
$$\begin{array}{l l l l}{{\rightarrow}}&{{x}}&{{x\,\,\land}}&{{}}&{{x\in\{a,b,c\}}}\\ {{\rightarrow}}&{{x\,\,\land}}&{{\mid\,\,\epsilon}}&{{}}&{{x\in\{a,b,c\}}}\end{array}$$
2Due to the small scale of the PCFGs considered here we
can generated the complete language up to a certain length, and sample from strings that have feature overlap with the
features that are still present in the partial input. For more
complex tasks an auxiliary LM can be used instead.
| NB | 0 | x′ ∼ D + | x ′ ∼ D − | |
|----------------|------|------------|-------------|------|
| Group Ablation | - | 0.49 | 1.00 | 0.53 |
| Archipelago | - | 0.30 | 0.24 | 1.00 |
| SII | - | 0.70 | 1.00 | 1.00 |
| STII | - | 0.83 | 1.00 | 1.00 |
| Hessian | 0.93 | - | - | - |
| Hessian×Input | 0.66 | - | - | - |
| IH | - | 0.81 | 1.00 | 0.31 |
The only interaction of interest here is between the first two symbols; all subsequent symbols are irrelevant for the prediction. An ARR score of 1.0 then indicates that for all corpus items the interaction between the first two items was the strongest out of all interactions.
We use a corpus size of 1.000, a maximum sequence length of 20, with 3 different input symbols.
Corrupted strings are derived by altering one of the first two symbols (e.g. aabcb → cabcb).
Results The results for an LSTM that was trained on the language are shown in Table 1. Due to the simplicity of the language and for brevity we only report results on three baselines. A static zero-valued baseline provides imperfect interactions for all methods. The Hessian, that does not depend on any baseline, performs better than all other methods here. When sampling the baseline, however, multiple methods perfectly retrieve the interaction between the first two symbols for all corpus items. Interestingly, Group Ablation and IH
benefit from sampling from the distribution of wellformed items, whereas Archipelago performs best when sampling from the distribution of corrupted items.
## 6.2 Dyck-2
The Dyck language is the language of well-nested brackets, and is a popular testbed for research on formal languages. It is a context-free language with center embedding clauses, requiring a model to keep track of a memory stack while processing a string. Earlier work on Dyck languages has shown that a wide range of neural model architectures can learn the grammar, including LSTMs
(Sennhauser and Berwick, 2018), memory augmented RNNs (Suzgun et al., 2019), Transform-
| Static | Expected | Interventional | Observational | | | | | | |
|----------------|------------|------------------|-----------------|-------|----------|-------------|---------|------------|-------|
| No baseline | 0 | T(x) | D + | D − | P(x ′ i) | P(x ′ i |i) | P(x \S) | P(x \S|xS) | |
| ′ | ′ | | | | | | | | |
| Group Ablation | - | 0.684 | 1.000 | 0.916 | 0.884 | 0.822 | 0.821 | 0.938 | 0.956 |
| Archipelago | - | 0.466 | 0.528 | 0.250 | 0.554 | - | - | - | - |
| SII | - | 0.555 | 1.000 | 0.921 | 0.895 | 0.876 | 0.885 | 0.923 | 0.989 |
| STII | - | 0.583 | 0.999 | 0.876 | 0.820 | 0.881 | 0.906 | 0.952 | 0.991 |
| Hessian | 0.413 | - | - | - | - | - | - | - | - |
| Hessian×Input | 0.542 | - | - | - | - | - | - | - | - |
| IH | - | 0.591 | 0.837 | 0.723 | 0.665 | - | - | - | - |
ers (Ebrahimi et al., 2020), and handcrafted RNNs
(Hewitt et al., 2020; Hao, 2020). We consider the Dyck-2 language, consisting of two types of brackets. The language is formed by the following grammar:
S → [ S ] | ( S ) | S S | ϵ We use a corpus size of 15.000, a maximum sequence length of 20, and a maximum branching depth of 4. We use the same branching probabilities as Suzgun et al. (2019), which results in a uniform probability of 0.25 for each rule. Corrupted strings are derived by flipping a single bracket to any other bracket. For the baseline mapping T(x), we map a bracket to the other bracket type, i.e. '(' ↔ '[' and ')' ↔ ']'. This results in a baseline that is of the same structure as the original input, but without feature overlap.
Results We report the results for this language in Table 2, computed over all our baselines for an LSTM. The zero-valued baseline again turns out to be a mediocre baseline: for none of the methods this results in a high ARR score. The method that performs best is the fixed mapping T(x). For Group Ablation, SII, and STII this results in a perfect ARR; for IH it is the best performing baseline.
It is encouraging that a baseline exists that results in perfect ARR scores, but this mapping depends strongly on the nature of the Dyck task itself. It is, for example, unclear how this static mapping would transfer to the natural language domain. Ideally, a more general solution makes no strong assumptions about the baseline itself. The three other baseline types in Table 2 may provide such a solution, as these only depend on the access to the original training data. Out of these, the observational baseline performs best: for the SII and STII methods this baseline performs nearly on par with the static mapping. Obtaining this conditional distribution is challenging for more complex tasks, and it can be seen here that the interventional baseline with a joint distribution over the missing features performs well too.
## 7 **A Natural Language Case Study: Cola**
As a case study on a larger scale natural language task, we apply our methodology to language models fine-tuned on the CoLA task (Warstadt et al.,
2019). CoLA is part of the GLUE Benchmark
(Wang et al., 2019), and is defined as a binary classification task of determining the linguistic acceptability of a single input sentence. The task consists of linguistically valid sentences, and sentences that contain either a syntactic, semantic, or morphological violation. A model that performs well on this task must have a thorough grasp of grammatical structure, and as such it provides a useful test bed for our FIDAM evaluation procedure.
In the previous experiments there was a degree of certainty about the structure that must be encoded by the model. In the natural language domain, however, we do not have such certainty, and should therefore be careful of making strong claims about faithfulness. Furthermore, natural language is highly multi-faceted and can not be captured by a single hierarchical structure that covers all these facets. Nonetheless, we consider it valuable to test our setup on a natural domain in order to see if interesting differences between FIDAMs arise, and whether particular facets of language such as syntactic dependency structure can be extracted.
## 7.1 Experimental Setup
For our experiment we consider the RoBERTa-base model (Liu et al., 2019) which obtains a Matthew's Correlation Coefficient score of 69.70 on the indomain validation split. We filter out sentences that contain words that are split into multiple subwords by the tokenizer, since this leads to issues with aligning the interactions of multiple subwords to the dependency graph that is used for evaluation.
Furthermore, we limit sentences to a max length of 14 in order to allow the STII and SII methods to be computed exactly without approximations. This resulted in a subset of around 60% of the original in-domain validation split that we will use in our experiment.
We evaluate the FIDAM scores on the dependency parse tree of the sentence, that we obtain with the parser of spaCy (Honnibal et al., 2020).
The ARR score is computed based on the interaction of each token with its *parent* token. We omit the interaction of the token that has the ROOT node as its parent. An example of this procedure can be found in Appendix B. Do note that our evaluation procedure is one of many possibilities: we make the assumption that a token should interact strongly with its parent, but other interactions are likely to play a role within the model as well. We leave a more detailed investigation into using different types of linguistic structure open for future work.
We again consider the FIDAMs of Group Ablation, STII/SII, and Integrated Hessians. We leave out Archipelago, since its procedure of assigning features to a single interaction set is not feasible with our setup in which multiple child tokens might be interacting with the same parent token. Due to computational constraints we were unable to compute the full Hessian matrix of the language model, whose computation scales quadratically in the number of input *neurons* (Bishop, 2007, §5.4). For the static baselines we again consider the zero-valued baseline, as well as the <pad> token. The interventional baselines are obtained by computing simple count-based distributions over a sample of 100.000 sentences from the Google Books corpus. The distributions are based on the tokenization of the model's tokenizer, and allow for computationally efficient sampling. We leave the incorporation of an observational baseline for future work, where an auxiliary masked LM might provide a useful conditional probability distribution.
## 7.2 Results
The results for the experiment are shown in Table 3. As expected, due to reasons outlined at the start of this section, none of the methods reaches ARR scores that are close to 1. Nonetheless, it is encouraging to see that various method/baseline combinations attain ARR scores that are far above chance level, indicating that there exists a strong
| Static | Interventional | | | |
|----------------|------------------|----------|-----------|-------|
| 0 | <pad> | P(x ′ i) | P(x ′ \S) | |
| Group Ablation | 0.702 | 0.757 | 0.518 | 0.491 |
| SII | 0.746 | 0.668 | 0.714 | 0.696 |
| STII | 0.741 | 0.708 | 0.704 | 0.658 |
| IH | 0.577 | 0.516 | - | - |
degree of alignment between feature interactions and dependency structure. Contrary to the Dyck results, using a zero-valued baseline yields some of the highest ARR scores, which indicates that within RoBERTa's embedding space this baseline represents a better neutral value.
A closer inspection of these results shows that the ARR scores are strongly negatively correlated to sentence length: for Group Ablation with a
<pad> baseline, for example, we obtain a Spearman correlation of -0.38 (*p <<* 0.001, regression plot in Appendix C). This is not surprising: as the sentence length increases, the chance of a token's largest interaction being with its parent decreases. Another correlation of interest is between the ARR score and the model's prediction of a sentence's acceptability. A high correlation would indicate that the FIDAM's alignment with dependency structure are indicative of a model's performance. For this we obtain a Spearman correlation of 0.14 (p = 0.036): a relatively weak result that indicates that the structure our FIDAM extracted is only partly driving the model's comprehension of the sentence structure.
## 8 Discussion & Conclusions
In this paper, we have presented a framework for characterising FIDAMs and evaluating their faithfulness. For the characterisation we set out two dimensions, feature removal and feature influence, along which existing FIDAMs can be characterised, by extending the 'Explaining by Removing' framework of Covert et al. to also apply to FIDAMs. This allows us to place each of the known FIDAMs in a two-dimensional grid, and to define novel variants of these models. As such, many of the methods that we incorporated in our experiments are novel FIDAMs, such as combining Archipelago with expected explanations and STII with an observational baseline.
To assess the faithfulness of FIDAMs, we made use of formal language theory and 'grey box models'. We use formal grammars to generate multiple datasets, each with known feature interactions, and train deep learning models to perfection on those datasets. Using FIDAMs, we can then extract the learned feature interactions based on the model itself, and compare these interactions to the dependencies in the original grammar. We demonstrate that only specific combinations of FIDAMs and baselines are able to retrieve the correct interactions, while methods such as Archipelago and Integrated Hessians consistently fail to do so.
Finally, we tested our methodology on a natural language case study using a model fine-tuned on the CoLA task for linguistic acceptability. Our results on the formal language tasks either did not turn out to be predictive of this experiment or, alternatively, the results *were* predictive but the LMs made less use of dependency graph information than we might have expected. This illustrates the challenge of the Attribution Generalisation problem, and the open question remains how we can transfer faithfulness guarantees from a synthetic, controlled context to the domain of natural language and LLMs.
We do show, however, that under certain configurations feature interactions align to some degree with the (syntactic) dependency structure of a sentence. This paves the way for revealing linguistic structure in a more direct way than, for instance, can be achieved with Structural Probes (Hewitt and Manning, 2019). Investigating whether different methods and baseline configurations are able to retrieve different aspects of structure is an exciting next step that we look forward to exploring in more detail. This could be examined, for instance, through the lens of contrastive explanations Yin and Neubig (2022), a procedure that demonstrates that different baselines can reveal different aspects of linguistic structure. Furthermore, investigating the role that attention plays in modelling interactions could be a fruitful line of work, for instance by incorporating *context mixing* methods to our pipeline, such as *Value Zeroing* (Mohebbi et al.,
2023) and *ALTI* (Ferrando et al., 2022).
## 9 Limitations
Our work has only considered *pairwise* interactions, but linguistic structure can also manifest through higher-order interactions. We show that our results on small-scale, formal languages, are different from our results on a natural language task.
It would be premature to conclude that small-scale, synthetic tasks can not be predictive of behaviour on more complex tasks, and a more detailed investigation into the properties of the task that play a role is a viable next step. Some of the FIDAMs we considered, most notably SII and STII, are intractable for larger inputs (scaling O(2n)), and a necessary step in employing these methods to larger models is to construct better approximation procedures, e.g. by adapting SHAP to SII as has been done before for tabular data by Lundberg et al. (2018).
More generally, although we believe our probabilistic formal language setup provides a important step forward, solving the Attribution Generalization problem - i.e., showing that results for small setups generalize to very large model - remains a key open problem.
## References
Kjersti Aas, Martin Jullum, and Anders Løland. 2021.
Explaining individual predictions when features are dependent: More accurate approximations to shapley values. *Artif. Intell.*, 298:103502.
Julius Adebayo, Michael Muelly, Harold Abelson, and Been Kim. 2022. Post hoc explanations may be ineffective for detecting unknown spurious correlation.
In *The Tenth International Conference on Learning* Representations, ICLR 2022, Virtual Event, April 2529, 2022. OpenReview.net.
Marco Ancona, Enea Ceolini, Cengiz Öztireli, and Markus H. Gross. 2019. Gradient-based attribution methods. In Wojciech Samek, Grégoire Montavon, Andrea Vedaldi, Lars Kai Hansen, and Klaus-Robert Müller, editors, *Explainable AI: Interpreting, Explaining and Visualizing Deep Learning*, volume 11700 of *Lecture Notes in Computer Science*, pages 169–191. Springer.
Pepa Atanasova, Jakob Grue Simonsen, Christina Lioma, and Isabelle Augenstein. 2020. A diagnostic study of explainability techniques for text classification. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, EMNLP 2020, Online, November 16-20, 2020, pages 3256–3274. Association for Computational Linguistics.
Jasmijn Bastings, Sebastian Ebert, Polina Zablotskaia, Anders Sandholm, and Katja Filippova. 2022. "will you find these shortcuts?" a protocol for evaluating the faithfulness of input salience methods for text classification. In *Proceedings of the 2022 Conference on Empirical Methods in Natural Language* Processing, pages 976–991, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
Christopher M. Bishop. 2007. *Pattern Recognition and* Machine Learning (Information Science and Statistics), 1 edition. Springer.
Hanjie Chen, Guangtao Zheng, and Yangfeng Ji. 2020a.
Generating hierarchical explanations on text classification via feature interaction detection. In *Proceedings of the 58th Annual Meeting of the Association* for Computational Linguistics, ACL 2020, Online, July 5-10, 2020, pages 5578–5593. Association for Computational Linguistics.
Hugh Chen, Joseph D. Janizek, Scott M. Lundberg, and Su-In Lee. 2020b. True to the model or true to the data? *CoRR*, abs/2006.16234.
Ian Covert, Scott M. Lundberg, and Su-In Lee. 2021.
Explaining by removing: A unified framework for model explanation. *J. Mach. Learn. Res.*, 22:209:1–
209:90.
Anupam Datta, Shayak Sen, and Yair Zick. 2016. Algorithmic transparency via quantitative input influence:
Theory and experiments with learning systems. In IEEE Symposium on Security and Privacy, SP 2016, San Jose, CA, USA, May 22-26, 2016, pages 598–617.
IEEE Computer Society.
Jay DeYoung, Sarthak Jain, Nazneen Fatema Rajani, Eric Lehman, Caiming Xiong, Richard Socher, and Byron C. Wallace. 2020. ERASER: A benchmark to evaluate rationalized NLP models. In *Proceedings of the 58th Annual Meeting of the Association* for Computational Linguistics, ACL 2020, Online, July 5-10, 2020, pages 4443–4458. Association for Computational Linguistics.
Javid Ebrahimi, Dhruv Gelda, and Wei Zhang. 2020.
How can self-attention networks recognize Dyck-n languages? In *Findings of the Association for Computational Linguistics: EMNLP 2020*, pages 4301–
4306, Online. Association for Computational Linguistics.
Gabriel G. Erion, Joseph D. Janizek, Pascal Sturmfels, Scott M. Lundberg, and Su-In Lee. 2021. Improving performance of deep learning models with axiomatic attribution priors and expected gradients. Nat. Mach.
Intell., 3(7):620–631.
Javier Ferrando, Gerard I. Gállego, and Marta R. Costajussà. 2022. Measuring the mixing of contextual information in the transformer. In *Proceedings of* the 2022 Conference on Empirical Methods in Natural Language Processing, pages 8698–8714, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
Jerome H. Friedman and Bogdan E. Popescu. 2008.
Predictive learning via rule ensembles. *The annals* of applied statistics, 2(3):916–954.
Christopher Frye, Colin Rowat, and Ilya Feige. 2020.
Asymmetric shapley values: incorporating causal knowledge into model-agnostic explainability. In Advances in Neural Information Processing Systems 33:
Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual.
Ian J. Goodfellow, Yoshua Bengio, and Aaron C.
Courville. 2016. *Deep Learning*. Adaptive computation and machine learning. MIT Press.
Michel Grabisch and Marc Roubens. 1999. An axiomatic approach to the concept of interaction among players in cooperative games. *Int. J. Game Theory*,
28(4):547–565.
Yiding Hao. 2020. Evaluating attribution methods using white-box LSTMs. In *Proceedings of the Third* BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP, pages 300–313, Online. Association for Computational Linguistics.
John Hewitt, Michael Hahn, Surya Ganguli, Percy Liang, and Christopher D. Manning. 2020. RNNs can generate bounded hierarchical languages with optimal memory. In *Proceedings of the 2020 Conference on Empirical Methods in Natural Language* Processing (EMNLP), pages 1978–2010, Online. Association for Computational Linguistics.
John Hewitt and Christopher D. Manning. 2019. A
structural probe for finding syntax in word representations. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019, Minneapolis, MN, USA,
June 2-7, 2019, Volume 1 (Long and Short Papers),
pages 4129–4138. Association for Computational Linguistics.
Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. *Neural Computation*, 9(8):1735–
1780.
Matthew Honnibal, Ines Montani, Sofie Van Landeghem, and Adriane Boyd. 2020. spaCy: Industrialstrength Natural Language Processing in Python.
Sara Hooker, Dumitru Erhan, Pieter-Jan Kindermans, and Been Kim. 2019. A benchmark for interpretability methods in deep neural networks. In Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, December 8-14, 2019, Vancouver, BC, Canada, pages 9734–9745.
Alon Jacovi and Yoav Goldberg. 2021. Aligning faithful interpretations with their social attribution. Trans.
Assoc. Comput. Linguistics, 9:294–310.
Sarthak Jain, Sarah Wiegreffe, Yuval Pinter, and Byron C. Wallace. 2020. Learning to faithfully rationalize by construction. In *Proceedings of the 58th* Annual Meeting of the Association for Computational Linguistics, pages 4459–4473, Online. Association for Computational Linguistics.
Joseph D. Janizek, Pascal Sturmfels, and Su-In Lee.
2021. Explaining explanations: Axiomatic feature interactions for deep networks. *J. Mach. Learn. Res.*,
22:104:1–104:54.
Dominik Janzing, Lenon Minorics, and Patrick Blöbaum. 2020. Feature relevance quantification in explainable AI: A causal problem. In *The 23rd International Conference on Artificial Intelligence and* Statistics, AISTATS 2020, 26-28 August 2020, Online
[Palermo, Sicily, Italy], volume 108 of Proceedings of Machine Learning Research, pages 2907–2916.
PMLR.
Yifan Jiang and Shane Steinert-Threlkeld. 2023. The weighted möbius score: A unified framework for feature attribution.
Xisen Jin, Zhongyu Wei, Junyi Du, Xiangyang Xue, and Xiang Ren. 2020. Towards hierarchical importance attribution: Explaining compositional semantics for neural sequence models. In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net.
Siwon Kim, Jihun Yi, Eunji Kim, and Sungroh Yoon.
2020. Interpretation of NLP models through input marginalization. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 3154–3167, Online. Association for Computational Linguistics.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019.
Roberta: A robustly optimized BERT pretraining approach. *CoRR*, abs/1907.11692.
Ilya Loshchilov and Frank Hutter. 2019. Decoupled weight decay regularization. In *7th International* Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019. OpenReview.net.
Scott M. Lundberg, Gabriel G. Erion, and Su-In Lee.
2018. Consistent individualized feature attribution for tree ensembles. *CoRR*, abs/1802.03888.
Scott M. Lundberg and Su-In Lee. 2017. A unified approach to interpreting model predictions. In *Advances in Neural Information Processing Systems 30:*
Annual Conference on Neural Information Processing Systems 2017, December 4-9, 2017, Long Beach, CA, USA, pages 4765–4774.
Hosein Mohebbi, Willem Zuidema, Grzegorz Chrupała, and Afra Alishahi. 2023. Quantifying context mixing in transformers. In *Proceedings of the 17th Conference of the European Chapter of the Association* for Computational Linguistics, pages 3378–3400, Dubrovnik, Croatia. Association for Computational Linguistics.
W. James Murdoch, Peter J. Liu, and Bin Yu. 2018. Beyond word importance: Contextual decomposition to extract interactions from lstms. In *6th International*
Conference on Learning Representations, ICLR 2018, Vancouver, BC, Canada, April 30 - May 3, 2018, Conference Track Proceedings. OpenReview.net.
Michael Neely, Stefan F. Schouten, Maurits J. R.
Bleeker, and Ana Lucic. 2022. A song of
(dis)agreement: Evaluating the evaluation of explainable artificial intelligence in natural language processing. *CoRR*, abs/2205.04559.
Guillermo Owen. 1972. Multilinear extensions of games. *Management Science*, 18(5):P64–P79.
Nina Pörner, Hinrich Schütze, and Benjamin Roth. 2018.
Evaluating neural network explanation methods using hybrid documents and morphosyntactic agreement. In *Proceedings of the 56th Annual Meeting of* the Association for Computational Linguistics, ACL
2018, Melbourne, Australia, July 15-20, 2018, Volume 1: Long Papers, pages 340–350. Association for Computational Linguistics.
Marco Túlio Ribeiro, Sameer Singh, and Carlos Guestrin. 2016. "why should I trust you?": Explaining the predictions of any classifier. In *Proceedings* of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, San Francisco, CA, USA, August 13-17, 2016, pages 1135–
1144. ACM.
Alexis Ross, Matthew Peters, and Ana Marasovic. 2022.
Does self-rationalization improve robustness to spurious correlations? In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 7403–7416, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
Naomi Saphra and Adam Lopez. 2020. LSTMs compose—and Learn—Bottom-up. In Findings of the Association for Computational Linguistics:
EMNLP 2020, pages 2797–2809, Online. Association for Computational Linguistics.
Luzi Sennhauser and Robert C. Berwick. 2018. Evaluating the ability of lstms to learn context-free grammars. In Proceedings of the Workshop: Analyzing and Interpreting Neural Networks for NLP, BlackboxNLP@EMNLP 2018, Brussels, Belgium, November 1, 2018, pages 115–124. Association for Computational Linguistics.
Lloyd S. Shapley. 1953. A value for n-person games.
Contributions to the Theory of Games, (28):307–317.
Avanti Shrikumar, Peyton Greenside, and Anshul Kundaje. 2017. Learning important features through propagating activation differences. In *Proceedings* of the 34th International Conference on Machine Learning, ICML 2017, Sydney, NSW, Australia, 6-11 August 2017, volume 70 of Proceedings of Machine Learning Research, pages 3145–3153. PMLR.
Sandipan Sikdar, Parantapa Bhattacharya, and Kieran Heese. 2021. Integrated directional gradients: Feature interaction attribution for neural NLP models. In
Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 865–878, Online. Association for Computational Linguistics.
Chandan Singh, W. James Murdoch, and Bin Yu. 2019.
Hierarchical interpretations for neural network predictions. In 7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA,
USA, May 6-9, 2019. OpenReview.net.
Daria Sorokina, Rich Caruana, Mirek Riedewald, and Daniel Fink. 2008. Detecting statistical interactions with additive groves of trees. In *Machine Learning,*
Proceedings of the Twenty-Fifth International Conference (ICML 2008), Helsinki, Finland, June 5-9, 2008, volume 307 of ACM International Conference Proceeding Series, pages 1000–1007. ACM.
Mukund Sundararajan, Kedar Dhamdhere, and Ashish Agarwal. 2020. The shapley taylor interaction index.
In Proceedings of the 37th International Conference on Machine Learning, ICML 2020, 13-18 July 2020, Virtual Event, volume 119 of *Proceedings of Machine* Learning Research, pages 9259–9268. PMLR.
Mukund Sundararajan and Amir Najmi. 2020. The many shapley values for model explanation. In Proceedings of the 37th International Conference on Machine Learning, ICML 2020, 13-18 July 2020, Virtual Event, volume 119 of Proceedings of Machine Learning Research, pages 9269–9278. PMLR.
Mukund Sundararajan, Ankur Taly, and Qiqi Yan. 2017.
Axiomatic attribution for deep networks. In Proceedings of the 34th International Conference on Machine Learning, ICML 2017, Sydney, NSW, Australia, 6-11 August 2017, volume 70 of Proceedings of Machine Learning Research, pages 3319–3328. PMLR.
Mirac Suzgun, Sebastian Gehrmann, Yonatan Belinkov, and Stuart M. Shieber. 2019. Memory-augmented recurrent neural networks can learn generalized dyck languages. *CoRR*, abs/1911.03329.
Michael Tsang, Dehua Cheng, and Yan Liu. 2018a. Detecting statistical interactions from neural network weights. In *6th International Conference on Learning Representations, ICLR 2018, Vancouver, BC,*
Canada, April 30 - May 3, 2018, Conference Track Proceedings. OpenReview.net.
Michael Tsang, Sirisha Rambhatla, and Yan Liu. 2020.
How does this interaction affect me? interpretable attribution for feature interactions. In *Advances in* Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual.
Michael Tsang, Youbang Sun, Dongxu Ren, and Yan Liu. 2018b. Can I trust you more? model-agnostic hierarchical explanations. *CoRR*, abs/1812.04801.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, 4-9 December 2017, Long Beach, CA, USA, pages 5998–6008.
Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R. Bowman. 2019.
GLUE: A multi-task benchmark and analysis platform for natural language understanding. In 7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019.
OpenReview.net.
Alex Warstadt, Amanpreet Singh, and Samuel R. Bowman. 2019. Neural network acceptability judgments.
Trans. Assoc. Comput. Linguistics, 7:625–641.
Sarah Wiegreffe, Ana Marasovic, and Noah A. Smith. ´
2021. Measuring association between labels and free-text rationales. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 10266–10284, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Kayo Yin and Graham Neubig. 2022. Interpreting language models with contrastive explanations. In *Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing*, pages 184–198, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
Matthew D. Zeiler and Rob Fergus. 2014. Visualizing and understanding convolutional networks. In Computer Vision - ECCV 2014 - 13th European Conference, Zurich, Switzerland, September 6-12, 2014, Proceedings, Part I, volume 8689 of Lecture Notes in Computer Science, pages 818–833. Springer.
## A Palindromes
One additional language we investigated is the context-free language of palindromes. In order to process a palindrome, a model needs to keep track of the dependency between each token in the first half of the string with its counterpart in the second half. Palindromes can contain a special symbol in the middle of a string to demarcate the two string halves, making it less ambiguous for the model at which point it should track whether the palindrome is well-formed. In our experiments, however, we found our models to perform well on both forms of palindromes. Furthermore, following Suzgun et al.
(2019), we use a homomorphic mapping h for the second half of the string, allowing the model to use separate embeddings for symbols occurring in the first and second half of a string:
$${\mathsf{S}}\ \to\ x\ \ {\mathsf{S}}\ \ h(x)\ \ |\ \ \epsilon\qquad x\in\{a,b,c,\cdots\}$$
We use a corpus size of 5.000, 10 different input symbols, and a maximum sequence length of 18. For the fixed baseline mapping T(x) we map a symbol onto another random symbol, preserving the grammaticality of the palindrome (e.g.
abBA → *cdDC*).
Results The results for this language, trained with an LSTM, are shown in Figure 4. Again, the zero-valued baseline performs poorly, with most methods scoring ARRs even below chance level.
The fixed baseline mapping again performs well for Group Ablation, SII, and STII, although it is not the best performing baseline this time. These three FIDAMs obtain perfect performance when using the expected baselines over a distribution of well-formed palindromes, which also holds for the interventional baseline with a joint distribution over the missing features. This is in contrast to the Dyck results, where the observational baseline resulted in better ARR scores for all three of these methods.
## B Arr Example
An example of a sentence with a high ARR (0.93),
for the Group Ablation method with a <pad> baseline:
![12_image_0.png](12_image_0.png)
## C Correlation Cola Arr And Sentence Length
Correlation between sentence length and ARR,
shown here for Group Ablation with a <pad> baseline. Spearman's ρ = −0.38 (*p <<* 0.001):
![12_image_1.png](12_image_1.png)
| Static | Expected | Interventional | Observational | | | | | |
|----------------|------------|------------------|-----------------|----------|-------------|-----------|--------------|-------|
| 0 | T(x) | D + | D − | P(x ′ i) | P(x ′ i |i) | P(x ′ \S) | P(x ′ \S|xS) | |
| Group Ablation | 0.450 | 0.980 | 1.000 | 0.943 | 0.777 | 0.836 | 1.000 | 0.939 |
| Archipelago | 0.356 | 0.452 | 0.439 | 0.717 | - | - | - | - |
| SII | 0.472 | 0.933 | 1.000 | 0.892 | 0.804 | 0.817 | 1.000 | 1.000 |
| STII | 0.472 | 0.921 | 0.999 | 0.917 | 0.760 | 0.792 | 1.000 | 0.999 |
| Hessian | 0.523 | - | - | - | - | - | - | - |
| Hessian×Input | 0.523 | - | - | - | - | - | - | - |
| IH | 0.505 | 0.637 | 0.693 | 0.535 | - | - | - | - |
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
✓ A1. Did you describe the limitations of your work?
Section 9
✗ A2. Did you discuss any potential risks of your work?
Our work is of a more theoretical nature, providing a new way of measuring the faithfulness of feature interaction methods.
✓ A3. Do the abstract and introduction summarize the paper's main claims?
Section 1
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✗ **Did You Use Or Create Scientific Artifacts?**
Left blank.
B1. Did you cite the creators of artifacts you used?
No response.
B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
No response.
B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
No response.
B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
No response.
B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
No response.
B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
No response.
## C ✓ **Did You Run Computational Experiments?** Section 6 & 7
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
We report model details. The small scale of our setup allows it to be ran on any normal contemporary laptop, and as such we do not report GPU hourse.
The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance.
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Section 6 C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
No response.
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
Mentioned in Section 7 (spaCy).
D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
No response.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? No response.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
No response.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
No response.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
No response. |
deng-etal-2023-clustering | Clustering-Aware Negative Sampling for Unsupervised Sentence Representation | https://aclanthology.org/2023.findings-acl.555 | Contrastive learning has been widely studied in sentence representation learning. However, earlier works mainly focus on the construction of positive examples, while in-batch samples are often simply treated as negative examples. This approach overlooks the importance of selecting appropriate negative examples, potentially leading to a scarcity of hard negatives and the inclusion of false negatives. To address these issues, we propose ClusterNS (Clustering-aware Negative Sampling), a novel method that incorporates cluster information into contrastive learning for unsupervised sentence representation learning. We apply a modified K-means clustering algorithm to supply hard negatives and recognize in-batch false negatives during training, aiming to solve the two issues in one unified framework. Experiments on semantic textual similarity (STS) tasks demonstrate that our proposed ClusterNS compares favorably with baselines in unsupervised sentence representation learning. Our code has been made publicly available at github.com/djz233/ClusterNS. | # Clustering-Aware Negative Sampling For Unsupervised Sentence Representation
Jinghao Deng1, Fanqi Wan1, Tao Yang1, Xiaojun Quan1∗**, Rui Wang**2 1School of Computer Science and Engineering, Sun Yat-sen University, China 2Vipshop (China) Co., Ltd., China
{dengjh27, wanfq, yangt225}@mail2.sysu.edu.cn, [email protected], [email protected]
## Abstract
Contrastive learning has been widely studied in sentence representation learning. However, earlier works mainly focus on the construction of positive examples, while in-batch samples are often simply treated as negative examples. This approach overlooks the importance of selecting appropriate negative examples, potentially leading to a scarcity of hard negatives and the inclusion of false negatives. To address these issues, we propose ClusterNS (**Cluster**ingaware Negative Sampling), a novel method that incorporates cluster information into contrastive learning for unsupervised sentence representation learning. We apply a modified Kmeans clustering algorithm to supply hard negatives and recognize in-batch false negatives during training, aiming to solve the two issues in one unified framework. Experiments on semantic textual similarity (STS) tasks demonstrate that our proposed ClusterNS compares favorably with baselines in unsupervised sentence representation learning. Our code has been made publicly available.1
## 1 Introduction
Learning sentence representation is one of the fundamental tasks in natural language processing and has been widely studied (Kiros et al., 2015; Hill et al., 2016; Cer et al., 2018; Reimers and Gurevych, 2019). Reimers and Gurevych (2019)
show that sentence embeddings produced by BERT
(Devlin et al., 2019) are even worse than GloVe embeddings (Pennington et al., 2014), attracting more research on sentence representation with pretrained language models (PLMs) (Devlin et al.,
2019; Liu et al., 2019; Radford et al., 2019). Li et al. (2020a) and Ethayarajh (2019) further find out that PLM embeddings suffer from anisotropy, motivating more researchers to study this issue (Su et al., 2021; Gao et al., 2021). Besides, Gao et al.
∗Corresponding authors 1https://github.com/djz233/ClusterNS
![0_image_0.png](0_image_0.png)
Figure 1: An example of in-batch negatives, a hard negative (in blue dotted box) and a false negative (in red dotted box). Cosine similarity is calculated with SimCSE (Gao et al., 2021). In-batch negatives may include false negatives, while lacking hard negatives.
(2021) show that contrastive learning (CL) is able to bring significant improvement to sentence representation. As pointed out by Wang and Isola
(2020), contrastive learning improves the uniformity and alignment of embeddings, thus mitigating the anisotropy issue.
Most previous works of constrastive learning concentrate on the construction of positive examples (Kim et al., 2021; Giorgi et al., 2021; Wu et al.,
2020; Yan et al., 2021; Gao et al., 2021; Wu et al.,
2022) and simply treat all other in-batch samples as negatives, which is sub-optimal. We show an example in Figure 1. In this work, we view sentences having higher similarity with the anchor sample as *hard negatives*, which means they are difficult to distinguish from positive samples. When all the negatives are sampled uniformly, the impact of hard negatives is ignored. In addition, various negative samples share different similarity values with the anchor sample and some may be incorrectly labeled (i.e., *false negatives*) and pushed away in the semantic space.
Recently quite a few researchers have demonstrated that hard negatives are important for contrastive learning (Zhang et al., 2022a; Kalantidis et al., 2020; Xuan et al., 2020). However, it is not trivial to obtain enough hard negatives through sampling in the unsupervised learning setting. Admittedly, they can be obtained through retrieval (Wang et al., 2022b) or fine-grained data augmentation
(Wang et al., 2022a), but the processes are usually time-consuming. Incorrectly pushing away false negatives in the semantic space is another problem in unsupervised learning scenarios, because all negatives are treated equally. In fact, in-batch negatives are quite diverse in terms of similarity values with the anchor samples. Therefore, false negatives do exist in the batches and auxiliary models may be required to identify them (Zhou et al., 2022). In sum, we view these issues as the major obstacles to further improve the performance of contrastive learning in unsupervised scenarios.
Since the issues mentioned above have a close connection with similarity, reasonable differentiation of negatives based on similarity is the key. In the meanwhile, clustering is a natural and simple way of grouping samples into various clusters without supervision. Therefore, in this paper, we propose a new negative sampling method called **ClusterNS** for unsupervised sentence embedding learning, which combines clustering with contrastive learning. Specifically, for each mini-batch during training, we cluster them with the K-means algorithm (Hartigan and Wong, 1979), and for each sample, we select its nearest neighboring centroid
(cluster center) as the hard negative. Then we treat other sentences belonging to the same cluster as false negatives. Instead of directly taking them as positive samples, we use the Bidirectional Margin Loss to constrain them. Since continuously updating sentence embeddings and the large size of the training dataset pose efficiency challenges for the clustering, we modify the K-means clustering to make it more suitable for training unsupervised sentence representation.
Overall, our proposed negative sampling approach is simple and easy to be plugged into existing methods, boosting the performance. For example, we improve SimCSE and PromptBERT in RoBERTabase by 1.41/0.59, and in BERTlarge by 0.78/0.88 respectively. The main contributions of this paper are summarized as follows:
- We propose a novel method for unsupervised sentence representation learning, leveraging clustering to solve hard negative and false negative problems in one unified framework.
- We modify K-means clustering for unsupervised sentence representation, making it more efficient and achieve better results.
- Experiments on STS tasks demonstrate our evident improvement to baselines and we reach 79.74 for RoBERTabase, the best result with this model.
## 2 Related Works 2.1 Contrastive Learning
Contrastive learning is a widely-used method in sentence representation learning. Early works focus on positive examples, and have raised various kinds of effective data augmentations (Giorgi et al.,
2021; Wu et al., 2020; Yan et al., 2021; Gao et al.,
2021). Following these works, Wu et al. (2022)
improve positive construction based on Gao et al.
(2021). Zhou et al. (2022) improve the uniformity of negative. Besides, Zhang et al. (2022b) modify the objective function and Chuang et al. (2022)
introduce Replaced Token Detection task (Clark et al., 2020), reaching higher performance.
## 2.2 Negative Sampling
In-batch negative sampling is a common strategy in unsupervised contrastive learning, which may have limitations as we mentioned above. To fix these issues, Zhang et al. (2022a) and Kalantidis et al.
(2020) synthesize hard negatives by mixing positives with in-batch negatives. Wang et al. (2022a)
utilize dependency parsing to create the negation of original sentences as soft negatives. Following Jiang et al. (2022) who use different prompt templates as positive, Zeng et al. (2022) derive negatives from the negation of the templates. The two methods create negatives with fixed templates and rules, thus may introduce bias. Chuang et al. (2020)
design a debiased contrastive objective that corrects the false negatives without true labels. Zhou et al.
(2022) use a trained model to distinguish false negatives, which results in addition module comparing with our method.
## 2.3 Neural Clustering
Clustering methods have been extended to deep learning and used for unsupervised representation learning (Xie et al., 2016; Yang et al., 2017; Caron et al., 2018; Li et al., 2020b; Zhang et al., 2021b).
Prototypical Network (Snell et al., 2017), a variety of clustering, is widely used in few-shot learning (Cui et al., 2022; Ding et al., 2020; Gao et al.,
2019). Several works have combined clustering
![2_image_0.png](2_image_0.png)
Framework
with contrastive learning (Li et al., 2020b; Caron et al., 2020; Zhang et al., 2021b; Wang et al., 2021).
Among them, Li et al. (2020b) argue that clustering encodes high-level semantics, which can augment instance-wise contrastive learning.
## 3 Methods 3.1 Preliminaries
Our clustering-based negative sampling method for unsupervised sentence representation can be easily integrated with contrastive learning approaches like SimCSE (Gao et al., 2021) or PromptBERT (Jiang et al., 2022). An illustration of ClusterNS and the original contrastive learning framework is shown in Figure 2. For a sentence xi in one mini-batch
{xi}Ni=1 (N samples in each mini-batch), SimCSE
uses Dropout (Srivastava et al., 2014) and PromptBERT uses different prompt-based templates to obtain its positive example x+i . Then they treat the other samples in the mini-batch as "default" negatives and apply the *InfoNCE loss* (Oord et al., 2018)
in Eq. (1), where τ is the temperature coefficient.
$${\mathcal{L}}_{c l}=-\log\frac{e^{s i m\left(x_{i},x_{i}^{+}\right)/\tau}}{\sum\limits_{j=1}^{N}e^{s i m\left(x_{i},x_{j}^{+}\right)/\tau}}\qquad\qquad(1)$$
## 3.2 Boosting Negative Sampling
Our main contribution of this work is to improve the negative sampling method with clustering. To be more specific, we combine clustering with contrastive learning in the training process, recognizing false negatives in the mini-batch and providing additional hard negatives based on the clustering result. The clustering procedure will be introduced in Section 3.3 in detail and for the moment, we assume the samples in each mini-batch have been properly clustered.
Supposing that there are K centroids c = {ci}Ki=1 after clustering, standing for K clusters C =
{Ci}Ki=1. For a sample xi in the mini-batch, we sort the clusters C = [Ci1,Ci2,...,CiK] and centroids c = [ci1,ci2,...,ciK] by their cosine similarity cos(xi, cij ) with xi. In this case, ci1 and ciK
are the nearest and farthest centroids to xi, respectively. Therefore, xi is the most similar to ci1 and belongs to cluster Ci1. We define the set x∗i =
{x∗ij}
count(Ci1)
j=1 , whose elements belong to Ci1, the same cluster as xi.
Hard Negatives Zhang et al. (2022a) show that hard negatives bring stronger gradient signals, which are helpful for further training. The critical question is how to discover or even produce such negatives. In our method, the introduced centroids c can be viewed as hard negative candidates.
We get rough groups in mini-batch after clustering and sorting by similarity. For the sample xi, we pick the centroid ci2 as its hard negative. The reason is that ci2 gets the highest similarity with xi among all the centroids (except for ci1 which xi belongs to) while having a different cluster.
In this way, all the samples have proper centroids as their hard negatives, and the training objective Lcl is as follows:
$$\mathcal{L}_{c l}=-\log\frac{e^{s i m\left(x_{i},x_{i}^{+}\right)/\tau}}{\sum\limits_{j=1}^{N}\left(e^{s i m\left(x_{i},x_{j}^{+}\right)/\tau}+\mu e^{s i m\left(x_{i},x_{j}^{-}\right)/\tau}\right)}\tag{2}$$
where x−j is the hard negative corresponding to xj ,
μ is the weight of the hard negative. Note that ci1 is more similar to xi compared with ci2, which is another candidate for the hard negative. We have compared the different choices in the ablation study described in Section 4.4.
False Negatives For sample xi, we aim to 1) recognize the false negatives in the mini-batch and 2)
prevent them from being pushed away incorrectly in the semantic space. For the former, we treat elements in x∗i as false negatives, since they belong to the same cluster and share higher similarity with xi.
For the latter, it is unreliable to directly use them as positives, since the labels are missing under the unsupervised setting. However, the different similarity between the anchor sample and others can be summarized intuitively as the following Eq. (3):
$$cos(x_{i},x_{i}^{-})\leq cos(x_{i},x_{ij}^{*})\leq cos(x_{i},x_{i}^{+})\tag{3}$$
where x∗ij ∈ x∗i . False negatives have higher similarity with the anchor than normal negatives while lower similarity than the positives. Inspired by Wang et al. (2022a), we introduce the bidirectional margin loss (BML) to model the similarity between the false negative candidates and the anchor:
$$\Delta_{x_{i}}=cos(x_{i},x_{i}^{*})-cos(x_{i},x_{i}^{+})\tag{4}$$ $${\cal L}_{bml}=ReLU(\Delta_{x_{i}}+\alpha)+ReLU(-\Delta_{x_{i}}-\beta)\tag{5}$$
BML loss aims to limit cos(xi, x∗i) in an appropriate range by limiting Δxi in the interval [−β,
−α]. Accordingly, we find the potential false negatives in the mini-batch and treat them differently.
Combining Eq. (2) and Eq. (5), we obtain the final training objective function as follows:
* [10] M. C.
$${\mathcal{L}}={\mathcal{L}}_{c l}+\lambda{\mathcal{L}}_{b m l}$$
where $\lambda$ is a hyperparameter.
## 3.3 In-Batch Clustering
K-means clustering is the base method we use, while we need to overcome computational challenges during the training process. It is very inefficient to cluster the large training corpus. However, we need to do clustering frequently due to the continuously updating embeddings. Therefore, we design the training process with clustering in Algorithm 1. Briefly speaking, we use cosine similarity as the distance metric, cluster the mini-batch and update the centroids with momentum at each step.
Algorithm 1 Training with Clustering.
Input: Model parameters: θ; Training dataset: D;
Total update steps: T; Warm-up steps: S
1: for t = 1 to T do 2: Get the sentence embeddings {xi}Ni=1 for each mini-batch 3: if t == S **then**
4: Initialize centroids c with mini-batch samples heuristically 5: **end if**
6: if t > S **then**
7: Update centroids c with {xi}Ni=1 8: Provide centroids as hard negatives
{x−i }Ni=1 9: Calculate Lbml for false negatives 10: **end if**
11: Calculate Lcl 12: Loss backward and optimize θ 13: **end for**
$$\theta$$
Centroids Initialization We show the initialization in line 3–5 in Algorithm 1. The clustering is not performed at the beginning, since high initial similarity of embeddings harms the performance.
Instead, we start clustering a few steps after the training starts, being similar to the warm-up process. When initializing, as line 4 shows, we select K samples as initial centroids heuristically: each centroid to be selected should be the least similar to last centroid.
$$(6)$$
Clustering and Updating We now describe line 7 in detail. First, we assign each sample into the cluster whose centroid have the highest cosine similarity with the sample. After clustering finishes, we calculate a new centroid embedding for each cluster by averaging embeddings of all samples in the cluster with Eq. (7), and then update the centroid in the momentum style with Eq. (8):
$$\tilde{x_{i}}=\frac{1}{N_{i}}\sum_{x_{j}\in C_{i}}x_{j}\tag{7}$$ $c_{i}=(1-\gamma)c_{i}+\gamma\tilde{x_{i}}$ (8)
where γ is the momentum hyperparameter and Ni indicates the number of elements in cluster Ci. Finally, based on the clustering results, we calculate the loss and optimize the model step by step (in line 9–12).
The method can be integrated with other contrastive learning models, maintaining high efficiency.
| Models | STS12 | STS13 | STS14 | STS15 | STS16 | STS-B | SICK-R | Avg. |
|------------------------|---------|---------|---------|---------|---------|---------|----------|--------|
| Non-Prompt models | | | | | | | | |
| GloVe embeddings | 55.14 | 70.66 | 59.73 | 68.25 | 63.66 | 58.02 | 53.76 | 61.32 |
| BERTbase embeddings | 39.70 | 59.38 | 49.67 | 66.03 | 66.19 | 53.87 | 62.06 | 56.70 |
| BERTbase-flow | 58.40 | 67.10 | 60.85 | 75.16 | 71.22 | 68.66 | 64.47 | 66.55 |
| BERTbase-whitening | 57.83 | 66.90 | 60.90 | 75.08 | 71.31 | 68.24 | 63.73 | 66.28 |
| SimCSE-BERTbase | 68.40 | 82.41 | 74.38 | 80.91 | 78.56 | 76.85 | 72.23 | 76.25 |
| *ClusterNS-BERTbase | 69.93 | 83.57 | 76.00 | 82.44 | 80.01 | 78.85 | 72.03 | 77.55 |
| RoBERTabase embeddings | 32.11 | 56.33 | 45.22 | 61.34 | 61.98 | 54.53 | 62.03 | 53.36 |
| RoBERTabase-whitening | 46.99 | 63.24 | 57.23 | 71.36 | 68.99 | 61.36 | 62.91 | 61.73 |
| SimCSE-RoBERTabase | 70.16 | 81.77 | 73.24 | 81.36 | 80.65 | 80.22 | 68.56 | 76.57 |
| ESimCSE-RoBERTabase | 69.90 | 82.50 | 74.68 | 83.19 | 80.30 | 80.99 | 70.54 | 77.44 |
| DCLR-RoBERTabase | 70.01 | 83.08 | 75.09 | 83.66 | 81.06 | 81.86 | 70.33 | 77.87 |
| *ClusterNS-RoBERTabase | 71.17 | 83.53 | 75.29 | 82.47 | 82.25 | 81.95 | 69.22 | 77.98 |
| SimCSE-BERTlarge | 70.88 | 84.16 | 76.43 | 84.50 | 79.76 | 79.26 | 73.88 | 78.41 |
| MixCSE-BERTlarge | 72.55 | 84.32 | 76.69 | 84.31 | 79.67 | 79.90 | 74.07 | 78.80 |
| DCLR-BERTlarge | 71.87 | 84.83 | 77.37 | 84.70 | 79.81 | 79.55 | 74.19 | 78.90 |
| *ClusterNS-BERTlarge | 71.64 | 85.97 | 77.74 | 83.48 | 79.68 | 80.80 | 75.02 | 79.19 |
| Prompt-based models | | | | | | | | |
| PromptBERTbase | 71.56 | 84.58 | 76.98 | 84.47 | 80.60 | 81.60 | 69.87 | 78.54 |
| *ClusterNS-BERTbase | 72.92 | 84.86 | 77.38 | 84.52 | 80.23 | 81.58 | 69.53 | 78.72 |
| ConPVP-BERTbase | 71.72 | 84.95 | 77.68 | 83.64 | 79.76 | 80.82 | 73.38 | 78.85 |
| SNCSE-BERTbase | 70.67 | 84.79 | 76.99 | 83.69 | 80.51 | 81.35 | 74.77 | 78.97 |
| PromptRoBERTabase | 73.94 | 84.74 | 77.28 | 84.99 | 81.74 | 81.88 | 69.50 | 79.15 |
| ConPVP-RoBERTabase | 73.20 | 83.22 | 76.24 | 83.37 | 81.49 | 82.18 | 74.59 | 79.18 |
| SNCSE-RoBERTabase | 70.62 | 84.42 | 77.24 | 84.85 | 81.49 | 83.07 | 72.92 | 79.23 |
| *ClusterNS-RoBERTabase | 74.02 | 85.12 | 77.96 | 84.47 | 82.84 | 83.28 | 70.47 | 79.74 |
| PromptBERTlarge | 73.29 | 86.39 | 77.90 | 85.18 | 79.97 | 81.92 | 71.26 | 79.42 |
| ConPVP-BERTlarge | 72.63 | 86.68 | 78.14 | 85.50 | 80.13 | 82.18 | 74.79 | 80.01 |
| SNCSE-BERTlarge | 71.94 | 86.66 | 78.84 | 85.74 | 80.72 | 82.29 | 75.11 | 80.19 |
| *ClusterNS-BERTlarge | 73.99 | 87.53 | 78.82 | 85.47 | 80.84 | 82.85 | 72.59 | 80.30 |
Table 1: Overall Results on STS tasks of Spearman's correlation coefficient. All baseline results are from original or relative papers. We use symbol * to mark our models. Best results are highlighted in bold.
## 4 Experiments 4.1 Evaluation Setup
Our experiments are conducted on 7 semantic textual similarity (STS) tasks (Agirre et al., 2012, 2013, 2014, 2015, 2016; Cer et al., 2017; Marelli et al., 2014) and the models are evaluated with the SentEval Toolkit (Conneau and Kiela, 2018). We take the Spearman's correlation coefficient as the metric and follow the Gao et al. (2021)'s aggregation method of results.
## 4.2 Implementation Details
Our code is implemented in Pytorch and Huggingface Transformers. The experiments are run on a single 32G Nvidia Tesla V100 GPU or four 24G Nvidia RTX3090 GPUs. Our models are based on SimCSE (Gao et al., 2021) and PromptBERT
(Jiang et al., 2022), and named as *Non-Prompt* ClusterNS and *Prompt-based* ClusterNS, respectively.
We use BERT (Devlin et al., 2019) and RoBERTa
(Liu et al., 2019) as pre-trained language models for evaluation, with training for 1 epoch and evaluating each 125 steps on the STS-B development set.
We also apply early stopping to avoid overfitting.
Hyperparameter settings and more training details are listed in Appendix A.
## 4.3 Main Results
We present the experiment results in Table 1. We compare with four types of models totally, 1)
vanilla embeddings of Glove, BERT and RoBERTa models, we report their results provided by Gao et al. (2021). 2) Baseline models: BERT-flow
(Li et al., 2020a), BERT-whitening (Su et al.,
2021), SimCSE (Gao et al., 2021) and Prompt-
BERT (Jiang et al., 2022). 3) SimCSE-based models: MixCSE (Zhang et al., 2022a), DCLR
(Zhou et al., 2022) and ESimCSE (Wu et al., 2022).
4) PromptBERT-based models: ConPVP (Zeng et al., 2022) and SNCSE (Wang et al., 2022a). We compare SimCSE-based models with *Non-Prompt* ClusterNS, and PromptBERT-based models with Prompt-based ClusterNS, respectively. In this way, identical representation of sentence embeddings is guaranteed for a fair comparison.
Our conclusions are as follows, comparing with two baseline models, SimCSE and PromptBERT,
all ClusterNS models achieve higher performance, indicating their effectiveness and the importance of negative sampling. For non-prompt models, ClusterNS surpasses MixCSE and DCLR in BERTlarge, and for prompt-based models, ClusterNS also surpasses ConPVP and SNCSE in BERTlarge and RoBERTabase. All these models improve negative samples through sampling or construction, demonstrating our models' strong competitiveness. At last, Prompt-based ClusterNS achieves the stateof-the-art performance of 79.74, which is the best result for models with RoBERTabase.
## 4.4 Ablation Study
Our proposed method focuses on two issues, producing hard negatives and processing false negatives. To verify the contributions, we conduct the ablation studies by removing each of the two components on test sets of the STS tasks, with Non-prompt BERT and RoBERTa models. As we mentioned in Section 3.2, we also replace hard negatives with the most similar centroids to verify our choice of hard negatives (named *repl. harder negative*), and replace both centroids for hard and false negatives with random clusters to verify our choice of cluster centroids (named *repl. random clusters*).
The results are in Table 2.
| Models | BERTbase | RoBERTabase |
|-----------------------|--------------|---------------|
| ClusterNS | 77.55 | 77.98 |
| w/o false negative | 76.99(-0.56) | 77.83(-0.15) |
| w/o hard negative | 76.03(-1.52) | 77.22(-0.76) |
| repl. harder negative | 76.97(-0.58) | 77.84(-0.14) |
| repl. random clusters | 76.77(-0.78) | 77.85(-0.13) |
| SimCSE | 76.25 | 76.57 |
We observe from Table 2 that removing either component or replacing any part of models lead to inferior performance: 1) Providing hard negatives yields more improvement, since we create high similarity sample leveraging clustering. 2) Processing false negatives solely (without hard negatives) even further harm the performance, indicating that providing virtual hard negatives is much easier than distinguishing real false negatives. 3) Replacing hard negative with most similar centroids also degrades the performance. Since they belong to the identical cluster, the candidate hard negatives could be actually positive samples. And 4) random clusters are also worse, indicating that the selection of clusters does matter. We discuss more hyperparameter settings in Appendix E.
## 5 Analysis
To obtain more insights about how clustering helps the training process, we visualize the variation of diverse sentence pairs similarity during training after clustering initialization in the *Non-Prompt* ClusterNS-RoBERTabase model, and analyze the results in detail.
## 5.1 In-Batch Similarity
We visualize the average similarity of positive, inbatch negative and hard negative sentence pairs in Figure 3. We observe that similarity of in-batch negative drops rapidly as training progresses, indicating that in-batch negatives are difficult to provide gradient signal. The hard negatives provided by our method maintain higher similarity, which properly handles the issue. Also notice that the similarity of hard negatives is still much smaller than positive pairs, which avoids confusing the model.
![5_image_0.png](5_image_0.png)
## 5.2 Clustering Similarity
Furthermore, we also visualize the similarity related to clustering. In Figure 4, we show the average similarity of sample-nearest centroid pairs,
![6_image_1.png](6_image_1.png)
![6_image_2.png](6_image_2.png)
sample-hard negative pairs (second nearest centroids, same as hard negative sentence pairs in Figure 3), inter-centroid pairs, intra-cluster member pairs (same as false negative pairs) and in-batch negative pairs. First, similarity of inter-centroid pairs decreases during training, demonstrating that clusters representing diverse semantics slowly scatter. Second, false negative pairs get much higher similarity than in-batch negatives, which indicates the importance of recognizing them and the necessity of treating them differently. At last, samplenearest centroid pairs and sample-hard negative pairs maintain high similarity, demonstrating the stability of clustering during the training process.
To answer the question what is a *good* hard negative, we experiment with different similarity levels.
We define a symbol σ, the average similarity threshold of in-batch sentence pairs when the centroids initialize. Since the similarity of hard negative pairs depends on σ, we adjust the similarity level with various σ settings.
We show the results in Figure 5 and Table 3. As we set the threshold σ smaller, clustering begins later and hard negatives gets larger similarity (with the anchor sample), meaning that starting clustering too early leads to less optimal hard negative candidates. The best performance is achieved at σ = 0.4, the middle similarity level, verifying the finding in our ablation study, i.e., hard negatives are not the most similar samples.
In Figure 4, similarity of false negative pairs is much smaller comparing with positive pairs, which shows the distinction between positive and false negative samples. False negatives are usually regarded as positive samples in supervised learning, while it is difficult to recognize precisely in the unsupervised setting. We argue that false negatives
![6_image_0.png](6_image_0.png)
| σ | 0.2 | 0.4 | 0.6 |
|------------|--------|--------|--------|
| Similarity | 0.1814 | 0.1572 | 0.1442 |
| Avg. STS | 77.02 | 77.98 | 77.30 |
| σ wo. Lbml | 0.2 | 0.4 | 0.6 |
| Similarity | 0.1797 | 0.1574 | 0.1453 |
| Avg. STS | 77.43 | 77.83 | 77.46 |
retrieved by our methods share similar topics with the anchors, leading to higher similarity than "normal" negatives and lower similarity than positives.
We use Eq. (5) to constrain false negatives based on this hypothesis. We also do case studies and experiments to approve it in Appendix C.
To verify our choice of the BML loss, we implement experiments on different processing strategies of false negatives. We compare BML loss with two common strategies: use false negatives as positives and mask all the false negatives. Results in Table 4 demonstrate the superiority of the BML loss.
| Models | Avg. STS |
|--------------------------|------------|
| ClusterNS | 77.98 |
| w/o BML loss | 77.83 |
| Mask all false negatives | 77.40 |
| Use as positives | 42.33 |
## 6 Clustering Evaluation
We also evaluate the quality of sentence embedding through clustering. We first use the DBpedia dataset (Brümmer et al., 2016), an ontology classification dataset extracted from Wikipedia and consists of 14 classes in total. We implement K-means clustering (K=14) on the sentence embeddings of
| Models | RoBERTa | SimCSE | ClusterNS |
|----------|-----------|----------|-------------|
| AMI | 0.6926 | 0.7078 | 0.7355 |
Table 5: AMI score for K-means clustering (K=14) on DBpedia dataset. We use *Non-Prompt* ClusterNS for comparison. Higher values are better.
DBpedia, and take the adjusted mutual information
(AMI) score as the evaluation metric following Li et al. (2020b). The results in Table 5 show that both sentence embedding models improve the AMI
score, indicating that the cluster performance is positively correlated with the quality of sentence embeddings. ClusterNS achieves a higher AMI
score than SimCSE, verifying its effectiveness.
Furthermore, we follow Zhang et al. (2021a)
to conduct a more comprehensive evaluation of the short text clustering on 8 datasets 2, including AgNews (AG) (Zhang and LeCun, 2015), Biomedical (Bio) (Xu et al., 2017), SearchSnippets (SS)
(Phan et al., 2008), StackOverflow (SO) (Xu et al.,
2017), GoogleNews (G-T, G-S, G-TS) (Yin and Wang, 2016) and Tweet (Yin and Wang, 2016). We perform K-means clustering on the sentence embeddings and take the clustering accuracy as the evaluation metric. Results are shown in Table 6.
Our ClusterNS models achieve higher performance than SimCSE in both two models, with an overall improvement of 4.34 in BERTbase. Both main experiments and two clustering evaluations show the improvement of our method to the baseline, and verify the effectiveness of improved negative sampling. More details about evaluation metrics are shown in Appendix D.
## 7 Alignment And Uniformity
To investigate how ClusterNS improves the sentence embedding, we conduct further analyses on two widely used metrics in contrastive learning proposed by Wang and Isola (2020), *alignment* and uniformity. Alignment measures the expected distance between the embeddings of positive pairs:
$$\mathcal{L}_{align}\stackrel{{\Delta}}{{=}}\mathbb{E}\|f(x)-f(x^{+})\|^{2}\tag{9}$$
And uniformity measures the expected distance between the embeddings of all sentence pairs:
$$\begin{array}{l}\mathcal{L}_{uniform}\triangleq\log\limits_{(x,y)\sim p_{\rm data}}e^{-2\|f(x)-f(y)\|^{2}}\end{array}\tag{10}$$
Both metrics are better when the numbers are lower. We use the STS-B dataset to calculate the alignment and uniformity, and consider the sentence pairs with score higher than 4 as positive pairs. We show the alignment and uniformity of different models in Figure 6, along with the average STS test results. We observe that ClusterNS strikes a balance between alignment and uniformity, improving the weaker metric at the expense of the stronger one to reach a better balance. For the nonprompt models, SimCSE has great uniformity but weaker alignment compared to vanilla BERT and RoBERTa. ClusterNS optimizes the alignment. On the other hand, Prompt-based ClusterNS optimizes the uniformity since PromptRoBERTa performs the opposite of SimCSE. Besides, RoBERTa may suffer server anisotropy than BERT, meaning that sentence embeddings are squeezed in a more crowded part of the semantic space. Therefore, RoBERTa and PromptRoBERTa-untuned have extreme low value of alignment, but poor uniformity.
![7_image_0.png](7_image_0.png)
## 8 Conclusion
In this paper, we propose ClusterNS, a novel approach that focuses on improving the negative sampling for contrastive learning in unsupervised sentence representation learning. We integrate clustering into the training process and use the clustering results to generate additional hard negatives and identify false negatives for each sample. We also use a bidirectional margin loss to constrain the false negatives. Our experiments on STS tasks show improvements over baseline models and demonstrate the effectiveness of ClusterNS. Through this work,
| Models | AG | Bio | Go-S | G-T | G-TS | SS | SO | Tweet | Avg. |
|-----------------------|-------|-------|--------|-------|--------|-------|-------|---------|--------|
| SimCSE-BERTbase | 74.46 | 35.64 | 59.01 | 57.92 | 64.18 | 67.09 | 50.78 | 54.71 | 57.97 |
| ClusterNS-BERTbase | 77.38 | 37.29 | 61.69 | 59.37 | 66.47 | 69.65 | 72.92 | 53.71 | 62.31 |
| SimCSE-RoBERTabase | 69.71 | 37.35 | 60.89 | 57.66 | 65.05 | 46.90 | 69.00 | 51.89 | 57.31 |
| ClusterNS-RoBERTabase | 65.00 | 36.38 | 58.58 | 57.88 | 65.54 | 52.55 | 74.38 | 51.63 | 57.74 |
Table 6: Clustering accuracy on short text clustering datasets. We use *Non-Prompt* ClusterNS for comparison and evaluate on BERTbase and RoBERTabase. We reproduce all baseline results based on provided checkpoints. Best results are highlighted in bold.
we demonstrate that it is valuable to pay more attention to negative sampling when applying contrastive learning for sentence representation.
## Acknowledgements
We appreciate the anonymous reviewers for their valuable comments. We thank Zhaoyang Wang for his support. This work was supported by the National Natural Science Foundation of China (No. 62176270), the Guangdong Basic and Applied Basic Research Foundation (No.
2023A1515012832), and the Program for Guangdong Introducing Innovative and Entrepreneurial Teams (No. 2017ZT07X355).
## Limitations
Our work has two limitations. First, we update the cluster centroids at each step during training, which requires a large mini-batch to maintain clustering accuracy and consumes more GPU memory. Second, our method still may not identify false negatives accurately, as we use the training model for coarse-grained clustering rather than a well-trained model. We leave the improvement of memory consumption and further improving false negative discrimination for the future.
## Ethics Statement
All datasets used in our work are from public sources, which do not consist private information.
We strictly followed the data usage policy. Any research based on our work must sign an ethical statement and ensure that they do not infer user privacy from it.
## References
Eneko Agirre, Carmen Banea, Claire Cardie, Daniel Cer, Mona Diab, Aitor Gonzalez-Agirre, Weiwei Guo, Iñigo Lopez-Gazpio, Montse Maritxalar, Rada Mihalcea, German Rigau, Larraitz Uria, and Janyce
Wiebe. 2015. SemEval-2015 task 2: Semantic textual similarity, English, Spanish and pilot on interpretability. In Proceedings of the 9th International Workshop on Semantic Evaluation (SemEval 2015),
pages 252–263, Denver, Colorado. Association for Computational Linguistics.
Eneko Agirre, Carmen Banea, Claire Cardie, Daniel Cer, Mona Diab, Aitor Gonzalez-Agirre, Weiwei Guo, Rada Mihalcea, German Rigau, and Janyce Wiebe.
2014. SemEval-2014 task 10: Multilingual semantic textual similarity. In Proceedings of the 8th International Workshop on Semantic Evaluation (SemEval 2014), pages 81–91, Dublin, Ireland. Association for Computational Linguistics.
Eneko Agirre, Carmen Banea, Daniel Cer, Mona Diab, Aitor Gonzalez-Agirre, Rada Mihalcea, German Rigau, and Janyce Wiebe. 2016. SemEval-2016 task 1: Semantic textual similarity, monolingual and cross-lingual evaluation. In Proceedings of the 10th International Workshop on Semantic Evaluation
(SemEval-2016), pages 497–511, San Diego, California. Association for Computational Linguistics.
Eneko Agirre, Johan Bos, Mona Diab, Suresh Manandhar, Yuval Marton, and Deniz Yuret, editors. 2012.
*SEM 2012: The First Joint Conference on Lexical and Computational Semantics - Volume 1: Proceedings of the main conference and the shared task, and Volume 2: Proceedings of the Sixth International Workshop on Semantic Evaluation (SemEval 2012).
Association for Computational Linguistics, Montréal, Canada.
Eneko Agirre, Daniel Cer, Mona Diab, Aitor GonzalezAgirre, and Weiwei Guo. 2013. *SEM 2013 shared task: Semantic textual similarity. In Second Joint Conference on Lexical and Computational Semantics
(*SEM), Volume 1: Proceedings of the Main Conference and the Shared Task: Semantic Textual Similarity, pages 32–43, Atlanta, Georgia, USA. Association for Computational Linguistics.
Martin Brümmer, Milan Dojchinovski, and Sebastian Hellmann. 2016. DBpedia abstracts: A large-scale, open, multilingual NLP training corpus. In *Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16)*, pages 3339–3343, Portorož, Slovenia. European Language Resources Association (ELRA).
Mathilde Caron, Piotr Bojanowski, Armand Joulin, and Matthijs Douze. 2018. Deep clustering for unsupervised learning of visual features. In *Proceedings of* the European conference on computer vision (ECCV),
pages 132–149.
Mathilde Caron, Ishan Misra, Julien Mairal, Priya Goyal, Piotr Bojanowski, and Armand Joulin. 2020.
Unsupervised learning of visual features by contrasting cluster assignments. *Advances in Neural Information Processing Systems*, 33:9912–9924.
Daniel Cer, Mona Diab, Eneko Agirre, Iñigo LopezGazpio, and Lucia Specia. 2017. SemEval-2017 task 1: Semantic textual similarity multilingual and crosslingual focused evaluation. In Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017), pages 1–14, Vancouver, Canada. Association for Computational Linguistics.
Daniel Cer, Yinfei Yang, Sheng-yi Kong, Nan Hua, Nicole Limtiaco, Rhomni St. John, Noah Constant, Mario Guajardo-Cespedes, Steve Yuan, Chris Tar, Brian Strope, and Ray Kurzweil. 2018. Universal sentence encoder for English. In *Proceedings of* the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 169–174, Brussels, Belgium. Association for Computational Linguistics.
Ching-Yao Chuang, Joshua Robinson, Yen-Chen Lin, Antonio Torralba, and Stefanie Jegelka. 2020. Debiased contrastive learning. *Advances in Neural Information Processing Systems*, 33:8765–8775.
Yung-Sung Chuang, Rumen Dangovski, Hongyin Luo, Yang Zhang, Shiyu Chang, Marin Soljacic, ShangWen Li, Scott Yih, Yoon Kim, and James Glass. 2022.
DiffCSE: Difference-based contrastive learning for sentence embeddings. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4207–4218, Seattle, United States. Association for Computational Linguistics.
Kevin Clark, Minh-Thang Luong, Quoc V. Le, and Christopher D. Manning. 2020. Electra: Pre-training text encoders as discriminators rather than generators.
In *International Conference on Learning Representations*.
Alexis Conneau and Douwe Kiela. 2018. SentEval: An evaluation toolkit for universal sentence representations. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation
(LREC 2018), Miyazaki, Japan. European Language Resources Association (ELRA).
Ganqu Cui, Shengding Hu, Ning Ding, Longtao Huang, and Zhiyuan Liu. 2022. Prototypical verbalizer for prompt-based few-shot tuning. In *Proceedings of the* 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 7014–7024, Dublin, Ireland. Association for Computational Linguistics.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics.
Ning Ding, Xiaobin Wang, Yao Fu, Guangwei Xu, Rui Wang, Pengjun Xie, Ying Shen, Fei Huang, Hai-Tao Zheng, and Rui Zhang. 2020. Prototypical representation learning for relation extraction. In *International Conference on Learning Representations*.
William B. Dolan and Chris Brockett. 2005. Automatically constructing a corpus of sentential paraphrases.
In Proceedings of the Third International Workshop on Paraphrasing (IWP2005).
Kawin Ethayarajh. 2019. How contextual are contextualized word representations? Comparing the geometry of BERT, ELMo, and GPT-2 embeddings. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 55–65, Hong Kong, China. Association for Computational Linguistics.
Tianyu Gao, Xu Han, Zhiyuan Liu, and Maosong Sun.
2019. Hybrid attention-based prototypical networks for noisy few-shot relation classification. In *Proceedings of the AAAI Conference on Artificial Intelligence*,
volume 33, pages 6407–6414.
Tianyu Gao, Xingcheng Yao, and Danqi Chen. 2021.
SimCSE: Simple contrastive learning of sentence embeddings. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 6894–6910, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
John Giorgi, Osvald Nitski, Bo Wang, and Gary Bader.
2021. DeCLUTR: Deep contrastive learning for unsupervised textual representations. In *Proceedings* of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing
(Volume 1: Long Papers), pages 879–895, Online.
Association for Computational Linguistics.
John A Hartigan and Manchek A Wong. 1979. Algorithm as 136: A k-means clustering algorithm. Journal of the Royal Statistical Society. series c (applied statistics), 28(1):100–108.
Felix Hill, Kyunghyun Cho, and Anna Korhonen.
2016. Learning distributed representations of sentences from unlabelled data. In *Proceedings of the* 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1367–1377, San
Diego, California. Association for Computational Linguistics.
Minqing Hu and Bing Liu. 2004. Mining and summarizing customer reviews. In *Proceedings of the tenth* ACM SIGKDD international conference on Knowledge discovery and data mining, pages 168–177.
Ting Jiang, Jian Jiao, Shaohan Huang, Zihan Zhang, Deqing Wang, Fuzhen Zhuang, Furu Wei, Haizhen Huang, Denvy Deng, and Qi Zhang. 2022. PromptBERT: Improving BERT sentence embeddings with prompts. In *Proceedings of the 2022 Conference on* Empirical Methods in Natural Language Processing, pages 8826–8837, Abu Dhabi, United Arab Emirates.
Association for Computational Linguistics.
Yannis Kalantidis, Mert Bulent Sariyildiz, Noe Pion, Philippe Weinzaepfel, and Diane Larlus. 2020. Hard negative mixing for contrastive learning. *Advances* in Neural Information Processing Systems, 33:21798–
21809.
Taeuk Kim, Kang Min Yoo, and Sang-goo Lee. 2021.
Self-guided contrastive learning for BERT sentence representations. In *Proceedings of the 59th Annual* Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 2528–2540, Online. Association for Computational Linguistics.
Ryan Kiros, Yukun Zhu, Russ R Salakhutdinov, Richard Zemel, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. 2015. Skip-thought vectors. In *Advances in* Neural Information Processing Systems, volume 28.
Curran Associates, Inc.
Bohan Li, Hao Zhou, Junxian He, Mingxuan Wang, Yiming Yang, and Lei Li. 2020a. On the sentence embeddings from pre-trained language models. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP),
pages 9119–9130, Online. Association for Computational Linguistics.
Junnan Li, Pan Zhou, Caiming Xiong, and Steven Hoi.
2020b. Prototypical contrastive learning of unsupervised representations. In International Conference on Learning Representations.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019.
Roberta: A robustly optimized bert pretraining approach. *arXiv preprint arXiv:1907.11692*.
Marco Marelli, Stefano Menini, Marco Baroni, Luisa Bentivogli, Raffaella Bernardi, and Roberto Zamparelli. 2014. A SICK cure for the evaluation of compositional distributional semantic models. In Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14),
pages 216–223, Reykjavik, Iceland. European Language Resources Association (ELRA).
Aaron van den Oord, Yazhe Li, and Oriol Vinyals. 2018.
Representation learning with contrastive predictive coding. *arXiv preprint arXiv:1807.03748*.
Bo Pang and Lillian Lee. 2004. A sentimental education: Sentiment analysis using subjectivity summarization based on minimum cuts. In Proceedings of the 42nd Annual Meeting of the Association for Computational Linguistics (ACL-04), pages 271–278, Barcelona, Spain.
Bo Pang and Lillian Lee. 2005. Seeing stars: Exploiting class relationships for sentiment categorization with respect to rating scales. In Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL'05), pages 115–124, Ann Arbor, Michigan. Association for Computational Linguistics.
Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. GloVe: Global vectors for word representation. In *Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)*, pages 1532–1543, Doha, Qatar.
Association for Computational Linguistics.
Xuan-Hieu Phan, Le-Minh Nguyen, and Susumu Horiguchi. 2008. Learning to classify short and sparse text & web with hidden topics from largescale data collections. In *Proceedings of the 17th* international conference on World Wide Web, pages 91–100.
Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. 2019. Language models are unsupervised multitask learners. OpenAI blog, 1(8):9.
Nils Reimers and Iryna Gurevych. 2019. SentenceBERT: Sentence embeddings using Siamese BERTnetworks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3982–3992, Hong Kong, China. Association for Computational Linguistics.
Jake Snell, Kevin Swersky, and Richard Zemel. 2017.
Prototypical networks for few-shot learning. *Advances in Neural Information Processing Systems*,
30.
Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D. Manning, Andrew Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank.
In *Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing*, pages 1631–1642, Seattle, Washington, USA. Association for Computational Linguistics.
Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014.
Dropout: a simple way to prevent neural networks from overfitting. The journal of machine learning research, 15(1):1929–1958.
Jianlin Su, Jiarun Cao, Weijie Liu, and Yangyiwen Ou.
2021. Whitening sentence representations for better semantics and faster retrieval. arXiv preprint arXiv:2103.15316.
Ellen M Voorhees and Dawn M Tice. 2000. Building a question answering test collection. In Proceedings of the 23rd annual international ACM SIGIR conference on Research and development in information retrieval, pages 200–207.
Hao Wang, Yangguang Li, Zhen Huang, Yong Dou, Lingpeng Kong, and Jing Shao. 2022a. Sncse: Contrastive learning for unsupervised sentence embedding with soft negative samples. arXiv preprint arXiv:2201.05979.
Haobo Wang, Ruixuan Xiao, Yixuan Li, Lei Feng, Gang Niu, Gang Chen, and Junbo Zhao. 2021. Pico: Contrastive label disambiguation for partial label learning.
In *International Conference on Learning Representations*.
Tongzhou Wang and Phillip Isola. 2020. Understanding contrastive representation learning through alignment and uniformity on the hypersphere. In *Proceedings* of the 37th International Conference on Machine Learning, volume 119 of *Proceedings of Machine* Learning Research, pages 9929–9939. PMLR.
Wei Wang, Liangzhu Ge, Jingqiao Zhang, and Cheng Yang. 2022b. Improving contrastive learning of sentence embeddings with case-augmented positives and retrieved negatives. In Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 2159–
2165.
Janyce Wiebe, Theresa Wilson, and Claire Cardie. 2005.
Annotating expressions of opinions and emotions in language. *Language resources and evaluation*,
39(2):165–210.
Xing Wu, Chaochen Gao, Liangjun Zang, Jizhong Han, Zhongyuan Wang, and Songlin Hu. 2022. ESimCSE: Enhanced sample building method for contrastive learning of unsupervised sentence embedding. In *Proceedings of the 29th International Conference on Computational Linguistics*, pages 3898–
3907, Gyeongju, Republic of Korea. International Committee on Computational Linguistics.
Zhuofeng Wu, Sinong Wang, Jiatao Gu, Madian Khabsa, Fei Sun, and Hao Ma. 2020. Clear: Contrastive learning for sentence representation. arXiv preprint arXiv:2012.15466.
Junyuan Xie, Ross Girshick, and Ali Farhadi. 2016.
Unsupervised deep embedding for clustering analysis. In *International conference on machine learning*,
pages 478–487. PMLR.
Jiaming Xu, Bo Xu, Peng Wang, Suncong Zheng, Guanhua Tian, and Jun Zhao. 2017. Self-taught convolutional neural networks for short text clustering. *Neural Networks*, 88:22–31.
Hong Xuan, Abby Stylianou, Xiaotong Liu, and Robert Pless. 2020. Hard negative examples are hard, but useful. In *European Conference on Computer Vision*,
pages 126–142. Springer.
Yuanmeng Yan, Rumei Li, Sirui Wang, Fuzheng Zhang, Wei Wu, and Weiran Xu. 2021. ConSERT: A contrastive framework for self-supervised sentence representation transfer. In *Proceedings of the 59th Annual* Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 5065–5075, Online. Association for Computational Linguistics.
Bo Yang, Xiao Fu, Nicholas D Sidiropoulos, and Mingyi Hong. 2017. Towards k-means-friendly spaces: Simultaneous deep learning and clustering. In *international conference on machine learning*, pages 3861–
3870. PMLR.
Jianhua Yin and Jianyong Wang. 2016. A model-based approach for text clustering with outlier detection. In 2016 IEEE 32nd International Conference on Data Engineering (ICDE), pages 625–636. IEEE.
Jiali Zeng, Yongjing Yin, Yufan Jiang, Shuangzhi Wu, and Yunbo Cao. 2022. Contrastive learning with prompt-derived virtual semantic prototypes for unsupervised sentence embedding. In Findings of the Association for Computational Linguistics: EMNLP
2022, pages 7042–7053, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
Dejiao Zhang, Shang-Wen Li, Wei Xiao, Henghui Zhu, Ramesh Nallapati, Andrew O. Arnold, and Bing Xiang. 2021a. Pairwise supervised contrastive learning of sentence representations. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 5786–5798, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Dejiao Zhang, Feng Nan, Xiaokai Wei, Shang-Wen Li, Henghui Zhu, Kathleen McKeown, Ramesh Nallapati, Andrew O. Arnold, and Bing Xiang. 2021b.
Supporting clustering with contrastive learning. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 5419–5430, Online. Association for Computational Linguistics.
Xiang Zhang and Yann LeCun. 2015. Text understanding from scratch. *arXiv preprint arXiv:1502.01710*.
Yanzhao Zhang, Richong Zhang, Samuel Mensah, Xudong Liu, and Yongyi Mao. 2022a. Unsupervised sentence representation via contrastive learning with mixing negatives. *Proceedings of the AAAI Conference on Artificial Intelligence*, 36(10):11730–11738.
Yuhao Zhang, Hongji Zhu, Yongliang Wang, Nan Xu, Xiaobo Li, and Binqiang Zhao. 2022b. A contrastive framework for learning sentence representations from
pairwise and triple-wise perspective in angular space.
In *Proceedings of the 60th Annual Meeting of the* Association for Computational Linguistics (Volume 1: Long Papers), pages 4892–4903, Dublin, Ireland.
Association for Computational Linguistics.
Kun Zhou, Beichen Zhang, Xin Zhao, and Ji-Rong Wen.
2022. Debiased contrastive learning of unsupervised sentence representations. In *Proceedings of the 60th* Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 6120–
6130, Dublin, Ireland. Association for Computational Linguistics.
## A Training Details
We do grid search for the hyperparameters and list the searching space below.
- Total batch size [256, 512] - Learning rate [1e-5, 3e-5, 5e-5]
- Hard negative weight μ [1.0] - Number of cluster K [96, 128, 256] - Momentum γ [1e-3, 5e-4, 1e-4] - Similarity threshold σ [0.2 0.3 0.4 0.5 0.6]
- Weight of Lbml [1e-2, 1e-3, 1e-4, 1e-5]
- Upper Bound of Lbml α [0, 0.05, 0.1, 0.15, 0.2, 0.25]
- Lower Bound of Lbml β [0.3, 0.4, 0.5, 0.6]
Our method has two main improvements on hard negative and false negative, respectively. We apply both improvements to most of the models except one of them. We list the information in detail in Table 7. More hyperparameter experiments are discussed in Appendix E.
| Non-Prompt | BERT | RoBERTa | |
|----------------|--------|-----------|------|
| Models | Base | Large | Base |
| Hard Negative | | | |
| False Negative | | | |
| Prompt-based | BERT | RoBERTa | |
| Models | Base | Large | Base |
| Hard Negative | | | |
| False Negative | | | |
## B Transfer Tasks
Following previous works, we also evaluate our models on seven transfer tasks: MR (Pang and Lee, 2005), CR (Hu and Liu, 2004), SUBJ (Pang and Lee, 2004), MPQA (Wiebe et al., 2005), SST-2
(Socher et al., 2013), TREC (Voorhees and Tice, 2000) and MRPC (Dolan and Brockett, 2005). We evaluate with *Non-Prompt* ClusterNS models, and use the default configurations in SentEval Toolkit.
Results are showed in Table 8. Most of our models achieve higher performance than SimCSE and the auxiliary MLM task also works for our methods.
## C False Negative Details
We show the case study in Table 10. As we mentioned in Section 5, our method is able to cluster sentences with similar topics such as religion and music, demonstrating that clustering captures higher-level semantics. However, intra-cluster sentences do not necessarily carry the same meaning and thus they are not suitable to be used as positives directly.
We also show the variation tendency of the false negative rate in Figure 7, which is equivalent to the sample percentage of clusters having more than two elements (i.e., the intra-cluster members are the false negatives of each other). We observe that the false negative rate maintains a high percentage in the whole training process, which verifies the necessity to specific handling the false negatives.
![12_image_0.png](12_image_0.png)
## D Clustering Evaluation Details
We use adjusted mutual information (AMI) score or clustering accuracy to evaluate clustering performance. AMI score measures the agreement between ground truth labels and clustering results.
Two identical label assignments get the AMI score of 1, and two random label assignments are expected to get AMI score of 0. Clustering accuracy measures the clustering agreement with accuracy metric, which need to map clustering results to
Model MR CR SUBJ MPQA SST TREC MRPC Avg
GloVe embeddings 77.25 78.30 91.17 87.85 80.18 83.00 72.87 81.52
Avg. BERT embeddings 78.66 86.25 94.37 88.66 84.40 92.80 69.54 84.94
BERT-[CLS] embedding 78.68 84.85 94.21 88.23 84.13 91.40 71.13 84.66
SimCSE-BERTbase 81.18 86.46 94.45 88.88 85.50 89.80 74.43 **85.81**
w/ MLM **82.92 87.23 95.71** 88.73 86.81 87.01 **78.07** 86.64
ClusterNS-BERTbase 82.01 85.46 94.44 **89.09** 86.27 88.80 73.57 85.66
w/ MLM 82.79 86.84 95.29 88.04 **86.88 91.80** 76.99 **86.95**
SimCSE-RoBERTabase 81.04 87.74 93.28 86.94 86.60 84.60 73.68 84.84
w/ MLM 83.37 87.76 **95.05** 87.16 **89.02** 90.80 75.13 86.90
ClusterNS-RoBERTabase 81.78 86.65 93.21 **87.85** 87.53 84.00 76.46 **85.35**
w/ MLM **83.51 88.11** 94.56 86.04 88.85 **92.40 76.70 87.17**
Table 8: Transfer task results of different sentence embedding models. Best results are highlighted in bold.
ground truth labels with Hungary algorithm in advance.
## E Supplement Experiments E.1 Batch Size And Cluster Number
We use large batch sizes and the cluster number K for our models in the main experiments. To show the necessity, we implement the quantitative analysis to compare with small batch sizes and cluster numbers, and show the results in Figure 8 and Figure 9. Both experiments of small batch sizes and cluster numbers perform worse. We attribute the performance degeneration to three factors: 1)
Contrastive learning requires large batch sizes in general; 2) Smaller cluster numbers lead to more coarse-grained clusters, weakening the clustering performance; And 3) small batch sizes further restrain the number of clusters.
![13_image_1.png](13_image_1.png)
## E.2 Centroids Initialization
We initialize the cluster centroids locally as mentioned in Section 3.3. While some other works adopt global initialization (Li et al., 2020b), they take the embeddings of whole dataset to initialize
![13_image_0.png](13_image_0.png)
the centroids. We compare the two strategies by implementing the global initialization version of ClusterNS (named global ClusterNS). We show the test results in Table 9, and the variation of clustering similarity in Figure 10. Overall, global ClusterNS
does not improve the performance. We obverse that inter-centroid pairs have extreme high similarity, meaning that clusters do not scatter, and the similarity of hard negative pairs is very low, which means hard negatives are not able to provide strong gradient signal.
Table 9: Comparsion of different centroid initialization methods with *Non-Prompt* ClusterNS-RoBERTabase.
| Models | Avg. STS |
|------------------------------|------------|
| ClusterNS-RoBERTabase | 77.98 |
| Global ClusterNS-RoBERTabase | 77.81 |
Example 1
\#1: Jantroon as a word is derived from an Urdu word [UNK] which means Paradise. \#2: While the liturgical atmosphere changes from sorrow to joy at this service, the faithful continue to fast and the Paschal greeting, "Christ is risen!
\#3: There is also a Methodist church and several small evangelical churches. \#4: Hindu Temple of Siouxland
\#5: Eventually, the original marble gravestones had deteriorated, and the cemetery had become an eyesore. \#6: Reverend Frederick A. Cullen, pastor of Salem Methodist Episcopal Church, Harlem's largest congregation, and his wife, the former Carolyn Belle Mitchell, adopted the 15-year-old Countee Porter, although it may not have been official. \#7: The also include images of saints such as Saint Lawrence or Radegund.
Example 2
\#1: Besides Bach, the trio recorded interpretations of compositions by Handel, Scarlatti, Vivaldi, Mozart, Beethoven, Chopin, Satie, Debussy, Ravel, and Schumann.
\#2: Guitarist Jaxon has been credited for encouraging a heavier, hardcore punk-influenced musical style. \#3: Thus, in Arabic emphasis is synonymous with a secondary articulation involving retraction of the dorsum or root of the tongue, which has variously been
\#4: MP from January, 2001 to date.
\#5: The song ranked No. \#6: The tones originate from Brown's acoustic Martin guitar, which is set up through two preamplifiers which are connected to their own power amplifiers.
Table 10: Illustrative examples in clusters resulting from ClusterNS. Sentences with similar topics are grouped into clusters.
Figure 10: Variation for similarity of sample-nearest
![14_image_0.png](14_image_0.png)
centroid pairs (Sample-centroid), sample-hard negative pairs, inter-centroid pairs, intra-cluster member pairs and in-batch negative pairs in global ClusterNS, corresponding to Figure 4.
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
✓ A1. Did you describe the limitations of your work?
After the Conclusion section.
✓ A2. Did you discuss any potential risks of your work?
We discuss them in Limitation.
✓ A3. Do the abstract and introduction summarize the paper's main claims?
Abstract and Introduction (Section 1).
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
B ✓ **Did you use or create scientific artifacts?**
We describe them in Section 4.1 (Evaluation Setup) and Section 4.2 (Implementation Details).
✓ B1. Did you cite the creators of artifacts you used?
We describe them in Section 4.1 (Evaluation Setup) and Section 4.2 (Implementation Details).
B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
Not applicable. Left blank.
✗ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
We follow the same processing as previous works, and the datasets and code we used are compatible with the original conditions.
B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
Not applicable. Left blank.
B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
Not applicable. Left blank.
✗ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
All the datasets we used are public and have standard data splits. We follow the same processing as previous works, which also do not mention the relevant statistics.
## C ✓ **Did You Run Computational Experiments?** Experiments (Section 4).
✗ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
The models (BERT, RoBERTa) we employ in the paper are well-known.
The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance.
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
We discuss the experimental setup in Evaluation Setup (Section 4.1) and Implementation Details
(Section 4.2) and discuss the hyperparameter in Training Details (Appendix A) and Hyperparameters Choice
✗ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
We report the main results in a single run and repeat 5 random seeds experiments for our models later. Our standard deviation is about 0.1 0.2 and improvement is also significant statistically.
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
We discuss it in Implementation Details (Section 4.2)
## D ✗ **Did You Use Human Annotators (E.G., Crowdworkers) Or Research With Human Participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
No response.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)?
No response.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
No response.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
No response.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
No response. |
lin-etal-2023-effective | An Effective Deployment of Contrastive Learning in Multi-label Text Classification | https://aclanthology.org/2023.findings-acl.556 | The effectiveness of contrastive learning technology in natural language processing tasks is yet to be explored and analyzed. How to construct positive and negative samples correctly and reasonably is the core challenge of contrastive learning. It is even harder to discover contrastive objects in multi-label text classification tasks. There are very few contrastive losses proposed previously. In this paper, we investigate the problem from a different angle by proposing five novel contrastive losses for multi-label text classification tasks. These are Strict Contrastive Loss (SCL), Intra-label Contrastive Loss (ICL), Jaccard Similarity Contrastive Loss (JSCL), Jaccard Similarity Probability Contrastive Loss (JSPCL), and Stepwise Label Contrastive Loss (SLCL). We explore the effectiveness of contrastive learning for multi-label text classification tasks by the employment of these novel losses and provide a set of baseline models for deploying contrastive learning techniques on specific tasks. We further perform an interpretable analysis of our approach to show how different components of contrastive learning losses play their roles. The experimental results show that our proposed contrastive losses can bring improvement to multi-label text classification tasks. Our work also explores how contrastive learning should be adapted for multi-label text classification tasks. | # An Effective Deployment Of Contrastive Learning In Multi-Label Text Classification
Nankai Lin1, Guanqiu Qin1, Jigang Wang1**, Dong Zhou**2 ∗and Aimin Yang1,3 ∗
1 School of Computer Science and Technology, Guangdong University of Technology, Guangzhou, Guangdong, 510006, China 2 School of Information Science and Technology, Guangdong University of Foreign Studies, Guangzhou, Guangdong, 510006, China 3 School of Computer Science and Intelligence Education, Lingnan Normal University, Zhanjiang 524000, Guangdong, China
## Abstract
![0_Image_0.Png](0_Image_0.Png)
The effectiveness of contrastive learning technology in natural language processing tasks is yet to be explored and analyzed. How to construct positive and negative samples correctly and reasonably is the core challenge of contrastive learning. It is even harder to discover contrastive objects in multi-label text classification tasks. There are very few contrastive losses proposed previously. In this paper, we investigate the problem from a different angle by proposing five novel contrastive losses for multi-label text classification tasks. These are Strict Contrastive Loss (SCL), Intra-label Contrastive Loss (ICL), Jaccard Similarity Contrastive Loss (JSCL), Jaccard Similarity Probability Contrastive Loss (JSPCL), and Stepwise Label Contrastive Loss (SLCL). We explore the effectiveness of contrastive learning for multilabel text classification tasks by the employment of these novel losses and provide a set of baseline models for deploying contrastive learning techniques on specific tasks. We further perform an interpretable analysis of our approach to show how different components of contrastive learning losses play their roles. The experimental results show that our proposed contrastive losses can bring improvement to multi-label text classification tasks. Our work also explores how contrastive learning should be adapted for multi-label text classification tasks.
## 1 Introduction
Multi-label text classification is an important branch of text classification technology (Chalkidis and Søgaard, 2022; Zhang et al., 2022b). Different from binary classification tasks or multi-class classification tasks, multi-label classification tasks need to assign at least one label to a piece of text. Since the number of labels the text belongs to is
*Corresponding Author. E-mail: [email protected], [email protected].
not fixed, it greatly increases the difficulty of the model prediction. Specifically, the uncertainty in the number of labels poses two challenges to the training of multi-label text classification models:
the output logic of the model and the semantic representation space of the model. In recent years, most multi-label text classification research has focused on designing better output logic to solve the uncertainty of the number of labels, such as transforming the multi-label text classification problem into a multi-task problem (Lin et al., 2022). However, for another challenge, how to construct a better semantic representation space for multi-label text classification models, little research attention has been paid.
The existence of multi-label samples can easily confound the semantic representation space, thereby posing a challenge in data analysis and modeling. When confronted with multi-label samples, the semantic representation space becomes susceptible to distractions, where the boundaries between different classes become blurred. This blurring effect stems from the inherent ambiguity that arises when multiple labels coexist within a single sample, causing uncertainty in the multi-label classification tasks. Take the multi-label emotion classification task as an example (shown in Figure 1), in which the "happy" sample (assumed to be sample A) shares a label with the "happy, surprise" sample (assumed to be sample B), and at the same time, the "surprise" sample (assumed to be sample C) also shares a label with the sample B (shown in Figure 1 (a)). Therefore, in the ideal state, the multi-label classification model assumes that sample A and sample B are located in a similar semantic space, and that sample B and sample C are located in another similar semantic space
(shown in Figure 1 (b)). Figure 1 (c) shows how the samples of the "happy" category and the samples of the "surprise" category are confounded in the semantic space. This will cause sample A and sample C to be brought closer indirectly, even if their labels are completely different. As far as we know, the semantic representation of multi-label samples is still an open issue in the multi-label text classification task. Therefore, this paper focuses on using contrastive learning to improve the semantic representation of multi-label text classification models.
As an emerging technology, contrastive learning has achieved good performance in various fields of natural language processing (Khosla et al., 2020; Gao et al., 2021). How to construct positive and negative samples correctly and reasonably is the core challenge of contrastive learning. In multilabel text classification tasks, it is a great challenge to incorporate the contrastive learning module. It is more difficult for contrastive learning to perform well in multi-label text classification tasks than in other text classification tasks because implicit information representation of multi-label text is richer in the semantic space, which makes it more difficult to define positive and negative samples. Existing studies have proposed unsupervised contrastive learning methods to improve the performance of the model on multi-label text classification tasks
(Khosla et al., 2020), and there are also working to improve supervised contrastive learning (Gao et al.,
2021). However, the exploration of contrastive learning in multi-label text classification tasks is still very limited.
As the typical task in multi-label text classification, multi-label emotion classification task (Li et al., 2022; Ju et al., 2020; Ameer et al., 2023) and multi-label news classification task (Wang et al.,
2021) have received extensive attention. In this paper, we propose five contrastive losses for multilabel text classification tasks and verify the performance of our method with the multi-label emotion classification task and multi-label news classification task as the representative tasks. More specifically, they are Strict Contrastive Loss (SCL),
Intra-label Contrastive Loss (ICL), Jaccard Similarity Contrastive Loss (JSCL), Jaccard Similarity Probability Contrastive Loss (JSPCL), and Stepwise Label Contrastive Loss (SLCL). These five different strategies define the positive samples and negative samples of contrastive learning from different perspectives to pull the distance among different types of samples into the semantic space. To compare the effects of the five strategies, we further conduct an interpretable analysis to investigate how the different contrastive learning methods play their roles. The experimental results show that our proposed contrastive losses can bring improvement for multi-label text classification tasks. In addition, our methods could be considered as a set of baseline models of viable contrastive learning techniques for multi-label text classification tasks. This series of contrastive learning methods are plug-andplay losses, which can be applied to any multi-label text classification model, and to a certain extent, bring effective improvements to the multi-label text classification model.
The major contributions of this paper can be summarized as follows:
(1) For multi-label text classification tasks, we propose five novel contrastive losses from different perspectives, which could be regarded as a set of baseline models of contrastive learning techniques on multi-label text classification tasks.
(2) To the best of our knowledge, this is the first work that proposes a series of contrastive learning baselines for multi-label text classification tasks.
At the same time, we also explore in detail the impact of different contrastive learning settings on multi-label text classification tasks.
(3) Through interpretable analysis, we further show the effectiveness of different contrastive learning strategies in transforming the semantic representation space.
## 2 Related Work 2.1 Multi-Label Text Classification
In the field of text classification, multi-label text classification (MLTC) is always a challenging problem (Lin et al., 2022). A sample of multi-label text classification consists of a text and a set of labels. There is a correlation among labels. For this, some research transforms the multi-label classification problem into the seq2seq problem and learns the potential correlation among labels with the sequence generation model (Nam et al., 2017; Yang et al., 2018; Xiao et al., 2021). Yang et al.
(2019) proposed a reinforcement learning-based seq2set framework, which can capture the correlation among tags and reduce the dependence on tag order. In addition, there is some research introducing label embedding so that the model can simultaneously learn the feature information of text and the co-occurrence information of labels.
Ma et al. (2021) proposed to learn statistical label co-occurrence via GCN. LELC (Joint Learning from Label Embedding and Label Correlation)
simultaneously learned labels attention and label co-occurrence matrix information (Liu et al., 2021).
Zhang et al. (2021) ensembled the MLTC and the label co-occurrence task to enhance label correlation feedback.
Most dataset of MLTC has the data distribution imbalance problem: imbalance within labels, among labels, and among label-sets. The studies we have discussed above, which use label embedding, have alleviated the impact of label imbalance to some extent while learning label association.
Some research solves the problem of data imbalance by resampling. For example, based on the edited nearest neighbor rule, Charte et al. (2014)
proposed a multi-label undersampling algorithm.
They defined a measure of the differential distance between label sets in order to heuristically remove unnecessary samples during resampling. Considering the problem in terms of object functions, Ridnik et al. (2021) proposed an asymmetric loss that dynamically adjusts the asymmetry levels to balance the effect of positive and negative samples in training.
## 2.2 Multi-Label Emotion Classification
Sentiment analysis (Xu et al., 2016) is of great significance to society, economy and security. In early studies sentiment analysis (Mohammad and Turney, 2013; Turney, 2002) is implemented based on the sentiment polarity dictionary. These methods utilize unsupervised methods such as point mutual information (PMI) to construct an emotional dictionary based on the basic emotional word set, and then calculate the emotional weight value and emotional polarity of the text according to the viewpoint words with different intensity of positive, neutral and negative emotional tendencies in the dictionary.
While some studies (Socher et al., 2013; Nakov et al., 2013) transform sentiment analysis into binary or mutil-classification problems, which leads to many subsequent supervised learning studies based on machine learning and neural networks.
In recent years, more and more scholars
(Shmueli et al., 2021; Mohammad et al., 2018) regarded the sentiment analysis task as a multi-label problem, and accordingly, Yilmaz et al. (2021) introduced it into multi-label sentiment analysis by adapting the focal loss and proposed a dynamic weighting method to balance each label's contribution in the training set. Alhuzali and Ananiadou
(2021) transformed the problem of multi-label sentiment classification into span-prediction by means of prompt learning, and proposed a label relationship perception loss. They converted labels into tokens and inputted them into BERT together with the original input text, and used the attention module of the Transformer and the knowledge learned in the pre-train stage to learn the correlation of emotional labels. In addition to encoding labels and sentences with BERT at the same time, EduEmo
(Zhu and Wu, 2022) also introduced the encoder of Realformer (He et al., 2021) to model the association between each elementary discourse unit and sentiment labels.
## 2.3 Contrastive Learning
In recent years, contrastive learning has gradually become one of the important techniques in natural language processing and computer vision. In the field of natural language processing, contrastive learning is usually used to improve the quality of embedding representation by comparing feature vectors, bringing semantically similar and same label embeddings closer, and distancing semantically dissimilar and different label embeddings.
Contrastive learning could be divided into supervised contrastive learning and unsupervised contrastive learning. Khosla et al. (2020) proposed a supervised contrastive learning method, which took the original label of the sample as the anchor, and made the clusters of the same label closer to each other, and the clusters of different labels far away from each other in the embedding space.
To improve the sentence-level representation, SimCSE used dropout technology for unsupervised contrastive learning and natural language inference dataset for supervised contrastive learning (Gao et al., 2021). Some research introduced supervised contrastive learning into the pre-training process of PLMs, and experiments result on their downstream tasks showed that the performance of pre-trained models was generally improved (Gunel et al., 2020; Qin et al., 2021).
## 2.4 **Contrastive Learning For Multi-Label Text** Classification
At present, the application of contrastive learning in multi-label classification mainly focuses on imagerelated tasks. MulCon, an end-to-end framework for multilabel image classification, used image label-level embeddings with a multi-head attention mechanism to transform the multi-label classification problem into the binary classification problem for each label-level embedding (Dao et al., 2021).
Małkinski and Ma ´ ndziuk ´ (2022) proposed a supervised multi-label contrastive learning method for abstract visual reasoning. They reconstructed the contrastive loss function according to the multilabel problem, allowing sample pairs to contrast all labels. Zhang et al. (2022a) proposed a general hierarchical multi-label representation learning framework, which introduced hierarchical loss retention and hierarchical constraints.
However, different from the representation space of images, the implicit information representation of text is richer, which makes it more difficult to define positive samples and negative samples, and it is more difficult for contrastive learning to show good performance. Research of contrastive learning in multi-label text classification is focusing on unsupervised multi-label contrastive learning
(Zhou et al., 2022). What's more, Su et al. (2022)
attempted to improve supervised contrastive learning by using the knowledge of existing multi-label instances for supervised contrastive learning. Bai et al. (2022) proposed to take the sample features as anchor samples, and take the corresponding positive labels and negative labels as positive and negative samples for supervised contrastive learning.
However, the exploration of contrastive learning in multi-label text analysis tasks is still very limited.
![3_image_0.png](3_image_0.png)
## 3 Contrastive Loss For Multi-Label Text Classification
In this section, we describe in detail the application of our proposed different contrastive learning methods on multi-label text classification tasks. We take the multi-label emotion classification task as an example to describe our method. It is worth noting that our proposed method can not only be applied to multi-label emotion classification tasks, but also can be applied to other multi-label text classification tasks.
Suppose a minibatch which contains K samples D = {(X1, Y1),(X2, Y2), . . . ,(XK, YK)}
and I = {1*, . . . , K*} is the sample index set. Given a sample index i, Xiis the text sequence of samples i and its label set is denoted as Yi. After encoding by the multi-label text classification model M, we could obtain the sentence representation vector e t i and the emotion representation matrix Ee i of Xi, where Ee i = {ei1, ei2*, . . . ,* eil} and l represents the total number of emotion labels. It is worth noting that the model M here can be any deep learning multi-label language model. Yiis the one-hot encoding of the label, i.e. Yi = {y1, y2*, ..., y*l}. For a given i-th emotion yi ∈ {0, 1}, yi=0 means that this type of emotion does not exist in the text, and yi=1 means that this type of emotion exists in the text. We further define the label prediction probability distribution of the model M output as pi.
Contrastive learning aims to change the semantic representation space of the model. Since the multilabel classification tasks are more complex than the single-label classification tasks, the main exploration of our paper is how one can construct positive and negative samples for contrastive learning.
When contrastive learning is applied to multilabel text classification, for an anchor, the definition of its positive samples can be diversified. For example, when a strict standard is implemented, positive samples are defined as samples with exactly the same label set (shown in figure 2 (a)),
when a loose standard is implemented, positive samples are defined as samples with partly the same label set (shown in figure 2 (b)). For different positive samples, contrastive learning pulls different samples closer in semantic space for a given anchor. In the strict standard, we could find that for an anchor point, there are fewer positive samples, and samples containing some similar features cannot be pulled closer. In the loose standard, there are more positive samples for an anchor point, which may indirectly bring samples of different labels closer. Therefore, different positive and negative sample construction methods affect the optimization goal of the model. What's more, there are two different types of contrastive learning, Feature-based Contrastive Learning (FeaCL)
(Fu et al., 2022) and Probability-based Contrastive Learning (ProCL) (Li et al., 2021). FeaCL uses semantic representations of sentences as the basic component to build the contrastive objective function. ProCL constructs the contrastive objective function from the perspective of probability distributions instead of semantic representations. Using different features for contrastive learning will also affect the optimization of the model. In order to explore how contrastive learning can be better applied to multi-label text classification tasks, we introduce five different contrastive learning methods SCL, ICL, JSCL, JSPCL, and SLCL, as below.
## 3.1 Strictly Contrastive Loss
As a strict standard method, SCL requires that only when the label set of the sample is exactly the same as the label set of the anchor point can it be used as a positive contrastive sample of the anchor point.
Therefore, SCL does not consider samples that partially overlap with the anchor label set. In addition, SCL is also a method of FeaCL type, which uses the semantic representation of samples obtained from model encoding as the contrastive feature. In the SCL, for a given sample i, all other samples that share the same label set with it in the batch form the set S = {s : s ∈ *I, Y*s = Yi ∧ s ̸= i}.
Then we could define the SCL function for each entry i across the batch as
$$L_{SCL}=-\frac{1}{|S|}\sum_{s\in S}\log\frac{\exp(\frac{sim(\mathbf{e}_{i}^{t},\mathbf{e}_{s}^{t})}{\tau})}{\sum_{k\in I\setminus\{i\}}\exp(\frac{sim(\mathbf{e}_{i}^{t},\mathbf{e}_{k}^{t})}{\tau})}\tag{1}$$ where $sim(\cdot)$ indicates the cosine similarity
function.
## 3.2 Jaccard Similarity Contrastive Loss
SCL is a strict contrastive learning method, which only pulls the samples with the exact same label closer, while JSCL operates on the samples to different degrees according to the similarity of the labels of the samples. We use Jaccard coefficient
(Jaccard, 1912) to calculate the label similarity between samples. Similar to SCL, JSCL uses the semantic representation of samples obtained from model encoding as the contrastive feature. For a given sample, JSCL will zoom in as close as possible on samples with the exact same label while only slightly zooming in on samples that have some of the same labels. In the JSCL, for a given sample i, we could define the JSCL function across the batch as
$$L_{J S C L}=-\frac{1}{|I|}\sum_{s\in I}\log\frac{\frac{|Y_{i}\cap Y_{s}|}{|Y_{i}\cup Y_{s}|}\cdot\exp(\frac{s i m({\mathbf e_{i}^{t}}\cdot{\mathbf e_{s}^{t}})}{\tau})}{\sum_{k\in I\setminus\{i\}}\exp(\frac{s i m({\mathbf e_{i}^{t}}\cdot{\mathbf e_{k}^{t}})}{\tau})}\tag{2}$$
## 3.3 Jaccard Similarity Probability Contrastive Loss
Li et al. (2021) suggested that ProCL can produce more compact features than feature contrastive learning, while forcing the output probabilities to be distributed around class weights. Based on JSCL, we try to use probability for contrastive learning. In the JSPCL, for a given sample i, we could define the JSPCL function across the batch as
$$L_{J S P C L}=-\frac{1}{|I|}\sum_{s\in I}\log\frac{\frac{|Y_{i}\cap Y_{s}|}{|Y_{i}\cup Y_{s}|}\cdot\exp(\frac{s i m(p_{i},p_{s})}{\tau})}{\sum_{k\in I\setminus\{i\}}\exp(\frac{s i m(p_{i},p_{k})}{\tau})}\,.\tag{3}$$
## 3.4 Stepwise Label Contrastive Loss
SLCL is another way to consider contrastive learning among samples with labels that are not exactly the same. The previous three contrastive learning methods mainly consider the situation when multiple emotions are considered at the same time, while SLCL considers different emotions separately, calculates the contrast loss separately, and then combines the losses of each emotion. In the JSPCL, for a given sample i, all other samples that share the same label yj with it in the batch form the positive sample set Sj . The set of positive samples under each emotion label is S = {S1, S2*, ..., S*q} and q is the emotions' number of sample i. Then we could define the SLCL function for each entry i is across the batch as
$$L_{SLCL}=-\frac{1}{q}\sum_{S_{j}\in S}\frac{1}{|S_{j}|}\sum_{s\in S_{j}}\log\frac{\exp(\frac{sim(\mathbf{e}_{i}^{t},\mathbf{e}_{s}^{t})}{\tau})}{\sum_{k\in I\setminus\{i\}}\exp(\frac{sim(\mathbf{e}_{i}^{t},\mathbf{e}_{k}^{t})}{\tau})}\tag{4}$$
## 3.5 Intra-Label Contrastive Loss
Different from several other contrastive losses to narrow the semantic representation of samples with the same labels, ICL aims to make multiple emotional representations existing in the same sample closer. That is, ICL narrows the distance among emotional representations, while not narrowing the distance among sample representations. In the ICL,
for a given sample i and the indexes of i's emotion IY = {1*, ..., l*}, we could define the ICL function for the j-th emotion of each entry i as
$$L_{ICL_{j}}=-\frac{1}{|I_{Y}|}\sum_{s\in I_{Y}}\log\frac{\exp(\frac{sim(\mathbf{e_{ij}},\mathbf{e_{is}})}{\tau})}{\sum_{k\in I_{Y}\setminus\{j\}}\exp(\frac{sim(\mathbf{e_{ij}},\mathbf{e_{ik}})}{\tau})}\tag{5}$$ $$L_{ICL}=\frac{1}{|Y_{i}|}\sum_{Y_{i}}L_{ICL_{j}}\tag{6}$$
## 3.6 Training Objective
To train the model, we combine the contrastive loss with cross-entropy and train them jointly. This aims to use a contrastive loss to close the distance between positive samples, while maximizing the probability of correct labels through a cross-entropy loss. The overall training objective is calculated as follows:
$$L=\alpha\cdot L_{CL}+(1-\alpha)\cdot L_{BCE}\tag{7}$$ where $L_{CL}\in\{$_SCL, LCL, JSCL, JSCL, SLCL$\}$.
## 4 Experiments And Analysis 4.1 Dataset
In order to investigate multi-label text classification tasks, we have selected the SemEval2018 (Mohammad et al., 2018) multi-label emotion classification
(MEC) task in English, Arabic, and Spanish as an illustrative example. The MEC datasets have been annotated to identify the presence of eleven discrete emotions, namely anger, anticipation, disgust, fear, joy, love, optimism, pessimism, sadness, surprise, and trust. In order to examine the efficacy and applicability of our approach, we have conducted experiments on a multi-label news classification
(MNC) task in addition to the multi-label emotion classification task. For this purpose, we utilized an open source Indonesian multi-label news classification dataset (Wang et al., 2021), comprising 8 labels including society, politics, economy, technology, military, environment, culture, and others. Each sample in the dataset is associated with at most two category labels. The datasets were initially partitioned into three distinct subsets, namely the training set (Train), validation set (Valid), and test set (Test). For the purpose of training and testing, the default partitioning method of the dataset was directly employed. We evaluate our methods using the micro F1-score, macro F1-score, and Jaccard index score (JS) in accordance with the metrics in SemEval2018 (Mohammad et al., 2018). For each language, Table 1 summarizes the train, valid, and test sets and shows the number of instances in each set.
## 4.2 Experimental Settings
We use SpanEmo1 proposed by Alhuzali and Ananiadou (2021) as the base model. SpanEmo is a SOTA model for multi-label text classification tasks proposed by Alhuzali and Ananiadou (2021), they trained the model with a loss combining the crossentropy loss and the label-correlation aware (LCA)
loss (Yeh et al., 2017). We replaced the LCA loss of this model with several of our proposed contrastive losses for comparison. In addition to the SpanEmo model, we also compared the models with superior performance under each dataset separately. For the MEC task, the English models include JBNN (He and Xia, 2018), DATN (Yu et al., 2018), NTUA
(Baziotis et al., 2018), LEM (Fei et al., 2020), and ReRc (Zhou et al., 2018). On the Arabic dataset, we compare our method with EMA (Badaro et al.,
2018), Tw-StAR (Mulki et al., 2018a), HEF (Alswaidan and Menai, 2020) and BERT-base (Xu et al., 2020). On the Spanish dataset, we used 1Since our proposed method is based on SpanEmo for experiments, we also reproduce the experimental results of the method.
Info./Lang. English Arabic Spanish Indonesian
Train (#) 6,838 2,278 3,561 3373
Valid (#) 886 585 679 860
Test (#) 3,259 1,518 2,854 1841
Total (#) 10,983 4,381 7,094 6074
Classes (#) 11 11 11 8
Type MEC MEC MEC MNC
Table 1: Data Statistics.
Tw-StAR (Mulki et al., 2018b), ELiRF (González et al., 2018), MILAB (Mohammad et al., 2018) and BERT-base (Xu et al., 2020) as comparison models.
To address the MNC task, we have identified and selected the state-of-the-art (SOTA) methods that have demonstrated superior performance on this dataset. The chosen methods comprise SGM (Yang et al., 2018), SU4MLC (Lin et al., 2018), mBERT
(Xu et al., 2020), Indonesian-BERT (Wang et al.,
2021), and Indonesian-BERT+Sim (Wang et al.,
2021).
All experiments were carried out using PyTorch2 and an RTX TITAN with 24 GB of memory. Using the open-source Hugging-Face implementation3, we fine-tuned "bert-base"4(Wolf et al., 2020)
for English. What's more, we selected "bertbase-arabic" 5constructed by Safaya et al. (2020).
for Arabic and "bert-base-spanish-uncased"6constructed by Canete et al. (2020) for Spanish. We set the same hyper-parameters with a fixed initialization seed for three models training, where the
| Method | FMacro | FM icro | JS |
|----------|----------|-----------|-------|
| BNN | 52.80 | 63.20 | - |
| ReRc | 53.90 | 65.10 | - |
| DATN | 55.10 | - | 58.30 |
| NTUA | 52.80 | 70.10 | 58.80 |
| LEM | 56.70 | 67.50 | - |
| SpanEmo | 57.00 | 70.32 | 58.30 |
| JSCL | 57.68 | 71.01 | 59.05 |
| JSPCL | 57.42 | 70.75 | 58.58 |
| SLCL | 56.62 | 70.9 | 58.9 |
| ICL | 57.59 | 70.49 | 58.6 |
| SCL | 57.63 | 70.8 | 58.89 |
batch size is 32 and the feature dimension is 768.
The dropout rate is 0.1 and the early stop patience we set as 10 and 20 epochs. With a learning rate of 1e-3 for the FFN and 2e-5 for the BERT encoder, Adam was chosen for optimization. For the loss weight α, we use the Hyperopt7 hyperparameter selection method (Bergstra et al., 2011) to search for the optimal parameters under each contrastive learning method. For each model, we used five different random seeds to carry out experiments, and the scores of five experiments were averaged as the final score.
## 4.3 Results And Analysis
Main Performance for MEC. As shown in Table 2 to Table 4, all five of our contrastive learning strategies essentially delivered improvement to the model for the MEC task, with the JSCL approach performing best on the English dataset, reaching 57.68, 71.01 and 59.05 for FMacro, F*M icro* and JS respectively, an improvement of 0.68, 0.69 and 0.75 over the SpanEmo model. The performance improvement of our method is more obvious on the Arabic dataset, where the F*M icro* value of the SLCL method is 1.25 higher than that of SpanEmo, and the F*M icro* and JS of the SCL method are improved by 0.90 and 1.49 respectively. The SCL
method also performed well on the Spanish language dataset, achieving the highest JS value of 53.52.
Main Performance for MNC. As shown in Table 5, in the task of MNC, the SCL method exhibits superior performance, achieving noteworthy scores of 74.29, 85.27, and 84.06 for FMacro, F*M icro* and JS metrics respectively. These remarkable results substantiate the efficacy of the SCL approach in addressing the MNC challenge. Our five contrastive learning methods have a significant improvement effect on the model on F*Macro*" indicating that our methods can improve the categories with poor per-7http://hyperopt.github.io/hyperopt/
![7_image_0.png](7_image_0.png)
formance, thereby alleviating the class imbalance problem of MNC tasks to a certain extent.
Comparison between Out-sample and Insample. In general, one particular method, referred to as ICL, exhibits comparatively less improvement.
This approach primarily emphasizes contrasting labels within a single sample, considering the labels present in the text as positive examples and those absent as negative examples. However, due to its limited ability to pay attention to label relationships across different texts, ICL fails to effectively capture the inherent distinctions among labels.
Comparison between Strict Standard and Loose Standard. Through the comparison between the loose standard loss JSCL and the strict standard loss SCL, we can find that the overall performance of SCL on the four datasets is better, that is, to a certain extent, strict standard contrastive learning methods are more suitable for multi-label text classification tasks than loose standard contrastive learning methods.
Method FMacro F*M icro* JS
Tw-StAR 44.60 59.70 46.50
EMA 46.10 61.80 48.90
BERT*base* 47.70 65.00 52.30
HEF 50.20 63.10 51.20
SpanEmo 53.63 65.81 53.94
JSCL 54.08 66.00 54.14
JSPCL 53.70 65.86 53.98
SLCL **54.88** 66.37 54.65
ICL 54.26 66.13 54.17
SCL 54.27 **66.71 55.43**
Table 3: Experimental results on Arabic dataset.
Comparison between ProCL and FeaCL.
Through the results of JSCL and JSPCL, we could find that the method of the ProCL type does not perform as well as the method of the FeaCL type in terms of performance. We believe that because the semantic space of the multi-label text model is too complex, it is more effective to directly focus on the semantic space of the model than the probability distribution.
Method FMacro F*M icro* JS
Tw-StAR 39.20 52.00 43.80
ELiRF 44.00 53.50 45.80
MILAB 40.70 55.80 46.90
BERT*base* 47.40 59.60 48.70
SpanEmo 55.49 63.34 52.68
JSCL 55.62 63.45 52.94
JSPCL **56.44 64.16** 53.31
SLCL 56.00 63.56 52.69
ICL 55.82 63.46 52.66
SCL 55.88 63.70 **53.52**
Interpretable Analysis. Taking the experimental results in Spanish as an example, we analyze the interpretability of our method from the multi-label dimension and the single-label dimension respectively. In the multi-label dimension, we use the entire test set for analysis, consider the samples with identical labels to be under the same cluster, and then use the T-SNE method for dimensionality reduction and visualization. At the same time, we also calculated the Calinski-Harbasz score of cluster clustering to evaluate whether the semantic representation space of each category can be well discriminated. It is worth noting that under the single-label dimension, we only use the test set
![8_image_0.png](8_image_0.png)
| Method | FMacro | FM icro | JS | | | |
|-----------------------------------------|----------|-----------|-------|--------|-------------|--------------|
| SGM | 44.08 | 74.24 | - | | | |
| SU4MLC | 40.38 | 75.66 | - | | | |
| mBERT | 66.56 | 81.85 | - | | | |
| Indonesian-BERT | 67.57 | 84.53 | - | | | |
| Indonesian-BERT +Sim | 70.82 | 84.66 | - | | | |
| SpanEmo | 71.62 | 85.09 | 83.66 | | | |
| JSCL | 73.13 | 85.23 | 83.98 | | | |
| JSPCL | 72.17 | 84.91 | 82.81 | | | |
| SLCL | 73.47 | 85.19 | 83.86 | | | |
| ICL | 74.22 | 85.15 | 83.82 | | | |
| SCL | 74.29 | 85.27 | 84.06 | Method | Multi-label | Singel-label |
| SpanEmo | 5.64 | 48.07 | | | | |
| JSCL | 24.07 | 200.48 | | | | |
| JSPCL | 17.80 | 131.54 | | | | |
| SLCL | 4.35 | 42.33 | | | | |
| ICL | 13.43 | 109.55 | | | | |
| SCL | 25.14 | 198.84 | | | | |
| Table 6: Interpretable analysis results | | | | | | |
| of 25.14. When evaluates from a multi-label perspective, JSCL performs slightly worse than SCL, but when evaluated from a single-label perspective, | | | | | | |
with only one label for interpretable analysis.
The interpretable analysis results for each method in the multi-label dimension and the singlelabel dimension are shown in Table 6. The larger the interpretable analysis results, the higher the discrimination of samples of different categories in the semantic space, and the better the semantic representation ability of the model. It can be seen that in addition to SLCL, other contrastive learning methods can make the samples of the same category in the semantic space more compact, and the boundaries of sample clusters of different categories are more obvious. SLCL aims to narrow the representation of categories, so it cannot make the boundaries between different categories more obvious. Among them, JSCL and SCL have better effects in optimizing the semantic representation space. As a rigorous contrastive learning method, SCL achieves the best results on multi-label dimension evaluation, with a Calinski-Harbasz value of 25.14. When evaluates from a multi-label perspective, JSCL performs slightly worse than SCL,
but when evaluated from a single-label perspective, JSCL achieves the highest Calinski-Harbasz score of 200.48. We also further visualize the semantic space under the single-label dimension, as shown in Figures 3 to 8. It can be clearly seen that in JSCL and SCL, each category is more closely aggregated, and the boundaries among different categories are also more obvious.
## 5 Conclusion
To investigate the efficacy of contrastive learning using various methodologies, we offer five effective contrastive losses for multi-label text classification tasks. The experimental results of this paper show that contrastive loss can improve the performance of multi-label text classification tasks. Furthermore, we find that strict criteria contrastive learning and feature-based contrastive learning outperform other contrastive learning methods on multi-label text classification tasks. In the future, based on these two methods, we will further explore the contrastive loss that is more suitable for multi-label text classification tasks.
## Acknowledgements
This work was supported by the Guangdong Basic and Applied Basic Research Foundation of China
(No. 2023A1515012718).
## Limitations
This paper proposes five novel contrastive losses for multi-label text classification tasks. However, our method has the following limitations:
1. We only selected the multi-label emotion classification task and multi-label news classification as the representative of the multi-label text classification tasks.
2. We only conduct experiments on the single modal of text, and have not extended to multimodal tasks.
3. Our method chooses the SpanEmo model as the backbone, lacking attempts to more models.
## References
Hassan Alhuzali and Sophia Ananiadou. 2021.
SpanEmo: Casting multi-label emotion classification as span-prediction. In *Proceedings of the 16th Conference of the European Chapter of the Association* for Computational Linguistics: Main Volume, pages 1573–1584, Online. Association for Computational Linguistics.
Nourah Alswaidan and Mohamed El Bachir Menai.
2020. Hybrid feature model for emotion recognition in arabic text. *IEEE Access*, 8:37843–37854.
Iqra Ameer, Necva Bölücü, Muhammad Hammad Fahim Siddiqui, Burcu Can, Grigori Sidorov, and Alexander Gelbukh. 2023. Multi-label emotion classification in texts using transfer learning. Expert Systems with Applications, 213:118534.
Gilbert Badaro, Obeida El Jundi, Alaa Khaddaj, Alaa Maarouf, Raslan Kain, Hazem Hajj, and Wassim ElHajj. 2018. EMA at SemEval-2018 task 1: Emotion mining for Arabic. In *Proceedings of the 12th International Workshop on Semantic Evaluation*, pages 236–244, New Orleans, Louisiana. Association for Computational Linguistics.
Junwen Bai, Shufeng Kong, and Carla P Gomes. 2022.
Gaussian mixture variational autoencoder with contrastive learning for multi-label classification. In *International Conference on Machine Learning*, pages 1383–1398. PMLR.
Christos Baziotis, Athanasiou Nikolaos, Alexandra Chronopoulou, Athanasia Kolovou, Georgios Paraskevopoulos, Nikolaos Ellinas, Shrikanth Narayanan, and Alexandros Potamianos. 2018.
NTUA-SLP at SemEval-2018 task 1: Predicting affective content in tweets with deep attentive RNNs
and transfer learning. In *Proceedings of the 12th International Workshop on Semantic Evaluation*, pages 245–255, New Orleans, Louisiana. Association for Computational Linguistics.
James Bergstra, Rémi Bardenet, Yoshua Bengio, and Balázs Kégl. 2011. Algorithms for hyper-parameter optimization. In *Proceedings of the 24th International Conference on Neural Information Processing* Systems, NIPS'11, page 2546–2554, Red Hook, NY,
USA. Curran Associates Inc.
José Canete, Gabriel Chaperon, Rodrigo Fuentes, JouHui Ho, Hojin Kang, and Jorge Pérez. 2020. Spanish pre-trained bert model and evaluation data. Pml4dc at iclr, 2020:1–10.
Ilias Chalkidis and Anders Søgaard. 2022. Improved multi-label classification under temporal concept drift: Rethinking group-robust algorithms in a labelwise setting. In *Findings of the Association for Computational Linguistics: ACL 2022*, pages 2441–2454, Dublin, Ireland. Association for Computational Linguistics.
Francisco Charte, Antonio J. Rivera, María J. del Jesus, and Francisco Herrera. 2014. Mlenn: A first approach to heuristic multilabel undersampling. In Intelligent Data Engineering and Automated Learning - IDEAL 2014, pages 1–9, Cham. Springer International Publishing.
Son D Dao, Ethan Zhao, Dinh Phung, and Jianfei Cai.
2021. Multi-label image classification with contrastive learning. *CoRR*, abs/2107.11626.
Hao Fei, Yue Zhang, Yafeng Ren, and Donghong Ji.
2020. Latent emotion memory for multi-label emotion classification. In *Proceedings of the AAAI Conference on Artificial Intelligence*, volume 34, pages 7692–7699.
Yingwen Fu, Nankai Lin, Ziyu Yang, and Shengyi Jiang. 2022. A dual-contrastive framework for low-resource cross-lingual named entity recognition.
CoRR, abs/2204.00796.
Tianyu Gao, Xingcheng Yao, and Danqi Chen. 2021.
SimCSE: Simple contrastive learning of sentence embeddings. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 6894–6910, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
José-Ángel González, Lluís-F. Hurtado, and Ferran Pla.
2018. ELiRF-UPV at SemEval-2018 tasks 1 and 3:
Affect and irony detection in tweets. In *Proceedings of the 12th International Workshop on Semantic* Evaluation, pages 565–569, New Orleans, Louisiana.
Association for Computational Linguistics.
Beliz Gunel, Jingfei Du, Alexis Conneau, and Ves Stoyanov. 2020. Supervised contrastive learning for pre-trained language model fine-tuning. *CoRR*,
abs/2011.01403.
Huihui He and Rui Xia. 2018. Joint binary neural network for multi-label learning with applications to emotion classification. In *Natural Language Processing and Chinese Computing*, pages 250–259, Cham.
Springer International Publishing.
Ruining He, Anirudh Ravula, Bhargav Kanagal, and Joshua Ainslie. 2021. RealFormer: Transformer likes residual attention. In *Findings of the Association* for Computational Linguistics: ACL-IJCNLP 2021, pages 929–943, Online. Association for Computational Linguistics.
Paul Jaccard. 1912. The distribution of the flora in the alpine zone. 1. *New phytologist*, 11(2):37–50.
Xincheng Ju, Dong Zhang, Junhui Li, and Guodong Zhou. 2020. Transformer-based label set generation for multi-modal multi-label emotion detection. In Proceedings of the 28th ACM International Conference on Multimedia, MM '20, page 512–520, New York, NY, USA. Association for Computing Machinery.
Prannay Khosla, Piotr Teterwak, Chen Wang, Aaron Sarna, Yonglong Tian, Phillip Isola, Aaron Maschinot, Ce Liu, and Dilip Krishnan. 2020. Supervised contrastive learning. In *Advances in Neural* Information Processing Systems, volume 33, pages 18661–18673.
Irene Li, Aosong Feng, Hao Wu, Tianxiao Li, Toyotaro Suzumura, and Ruihai Dong. 2022. LiGCN:
Label-interpretable graph convolutional networks for multi-label text classification. In *Proceedings of the* 2nd Workshop on Deep Learning on Graphs for Natural Language Processing (DLG4NLP 2022), pages 60–70, Seattle, Washington. Association for Computational Linguistics.
Junjie Li, Yixin Zhang, Zilei Wang, and Keyu Tu. 2021.
Probability contrastive learning for domain adaptation. *CoRR*, abs/2111.06021.
Junyang Lin, Qi Su, Pengcheng Yang, Shuming Ma, and Xu Sun. 2018. Semantic-unit-based dilated convolution for multi-label text classification. In *Proceedings of the 2018 Conference on Empirical Methods* in Natural Language Processing, pages 4554–4564, Brussels, Belgium. Association for Computational Linguistics.
Nankai Lin, Sihui Fu, Xiaotian Lin, and Lianxi Wang.
2022. Multi-label emotion classification based on adversarial multi-task learning. *Information Processing* and Management, 59(6):103097.
Huiting Liu, Geng Chen, Peipei Li, Peng Zhao, and Xindong Wu. 2021. Multi-label text classification via joint learning from label embedding and label correlation. *Neurocomputing*, 460:385–398.
Qianwen Ma, Chunyuan Yuan, Wei Zhou, and Songlin Hu. 2021. Label-specific dual graph neural network for multi-label text classification. In Proceedings of the 59th Annual Meeting of the Association for
Computational Linguistics and the 11th International Joint Conference on Natural Language Processing
(Volume 1: Long Papers), pages 3855–3864, Online.
Association for Computational Linguistics.
Mikołaj Małkinski and Jacek Ma ´ ndziuk. 2022. ´ Multilabel contrastive learning for abstract visual reasoning. IEEE Transactions on Neural Networks and Learning Systems, pages 1–13.
Saif Mohammad, Felipe Bravo-Marquez, Mohammad Salameh, and Svetlana Kiritchenko. 2018. SemEval2018 task 1: Affect in tweets. In *Proceedings of the* 12th International Workshop on Semantic Evaluation, pages 1–17, New Orleans, Louisiana. Association for Computational Linguistics.
Saif M Mohammad and Peter D Turney. 2013. Crowdsourcing a word–emotion association lexicon. *Computational intelligence*, 29(3):436–465.
Hala Mulki, Chedi Bechikh Ali, Hatem Haddad, and Ismail Babaoglu. 2018a. ˘ Tw-StAR at SemEval-2018 task 1: Preprocessing impact on multi-label emotion classification. In *Proceedings of the 12th International Workshop on Semantic Evaluation*, pages 167–171, New Orleans, Louisiana. Association for Computational Linguistics.
Hala Mulki, Chedi Bechikh Ali, Hatem Haddad, and Ismail Babaoglu. 2018b. ˘ Tw-StAR at SemEval-2018 task 1: Preprocessing impact on multi-label emotion classification. In *Proceedings of the 12th International Workshop on Semantic Evaluation*, pages 167–171, New Orleans, Louisiana. Association for Computational Linguistics.
Preslav Nakov, Sara Rosenthal, Zornitsa Kozareva, Veselin Stoyanov, Alan Ritter, and Theresa Wilson.
2013. SemEval-2013 task 2: Sentiment analysis in Twitter. In *Second Joint Conference on Lexical and* Computational Semantics (*SEM), Volume 2: Proceedings of the Seventh International Workshop on Semantic Evaluation (SemEval 2013), pages 312–
320, Atlanta, Georgia, USA. Association for Computational Linguistics.
Jinseok Nam, Eneldo Loza Mencía, Hyunwoo J Kim, and Johannes Fürnkranz. 2017. Maximizing subset accuracy with recurrent neural networks in multilabel classification. In *Advances in Neural Information Processing Systems*, volume 30.
Yujia Qin, Yankai Lin, Ryuichi Takanobu, Zhiyuan Liu, Peng Li, Heng Ji, Minlie Huang, Maosong Sun, and Jie Zhou. 2021. ERICA: Improving entity and relation understanding for pre-trained language models via contrastive learning. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1:
Long Papers), pages 3350–3363, Online. Association for Computational Linguistics.
Tal Ridnik, Emanuel Ben-Baruch, Nadav Zamir, Asaf Noy, Itamar Friedman, Matan Protter, and Lihi Zelnik-Manor. 2021. Asymmetric loss for multilabel classification. In *Proceedings of the IEEE/CVF*
International Conference on Computer Vision, pages 82–91.
Ali Safaya, Moutasem Abdullatif, and Deniz Yuret.
2020. KUISAIL at SemEval-2020 task 12: BERTCNN for offensive speech identification in social media. In *Proceedings of the Fourteenth Workshop on* Semantic Evaluation, pages 2054–2059, Barcelona
(online). International Committee for Computational Linguistics.
Boaz Shmueli, Soumya Ray, and Lun-Wei Ku. 2021.
Happy dance, slow clap: Using reaction GIFs to predict induced affect on Twitter. In *Proceedings* of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing
(Volume 2: Short Papers), pages 395–401, Online.
Association for Computational Linguistics.
Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D. Manning, Andrew Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank.
In *Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing*, pages 1631–1642, Seattle, Washington, USA. Association for Computational Linguistics.
Xi'ao Su, Ran Wang, and Xinyu Dai. 2022. Contrastive learning-enhanced nearest neighbor mechanism for multi-label text classification. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 672–679, Dublin, Ireland. Association for Computational Linguistics.
Peter Turney. 2002. Thumbs up or thumbs down? semantic orientation applied to unsupervised classification of reviews. In *Proceedings of the 40th Annual* Meeting of the Association for Computational Linguistics, pages 417–424, Philadelphia, Pennsylvania, USA. Association for Computational Linguistics.
Lianxi Wang, Xiaotian Lin, and Nankai Lin. 2021. Research on pseudo-label technology for multi-label news classification. In Document Analysis and Recognition - ICDAR 2021, pages 683–698, Cham.
Springer International Publishing.
Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Transformers: State-of-the-art natural language processing.
In *Proceedings of the 2020 Conference on Empirical* Methods in Natural Language Processing: System Demonstrations, pages 38–45, Online. Association for Computational Linguistics.
Yaoqiang Xiao, Yi Li, Jin Yuan, Songrui Guo, Yi Xiao, and Zhiyong Li. 2021. History-based attention in seq2seq model for multi-label text classification.
Knowledge-Based Systems, 224:107094.
Peng Xu, Zihan Liu, Genta Indra Winata, Zhaojiang Lin, and Pascale Fung. 2020. Emograph: Capturing emotion correlations using graph networks. *CoRR*,
abs/2008.09378.
Yu Xu, Dong Zhou, and Séamus Lawless. 2016. Inferring your expertise from twitter: Integrating sentiment and topic relatedness. In *2016 IEEE/WIC/ACM*
International Conference on Web Intelligence (WI),
pages 121–128.
Pengcheng Yang, Fuli Luo, Shuming Ma, Junyang Lin, and Xu Sun. 2019. A deep reinforced sequence-to-set model for multi-label classification. In *Proceedings* of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5252–5258, Florence, Italy. Association for Computational Linguistics.
Pengcheng Yang, Xu Sun, Wei Li, Shuming Ma, Wei Wu, and Houfeng Wang. 2018. SGM: Sequence generation model for multi-label classification. In Proceedings of the 27th International Conference on Computational Linguistics, pages 3915–3926, Santa Fe, New Mexico, USA. Association for Computational Linguistics.
Chih-Kuan Yeh, Wei-Chieh Wu, Wei-Jen Ko, and YuChiang Frank Wang. 2017. Learning deep latent space for multi-label classification. In Thirty-first AAAI conference on artificial intelligence.
Selim F. Yilmaz, E. Batuhan Kaynak, Aykut Koç, Hamdi Dibeklioglu, and Suleyman Serdar Kozat. ˘
2021. Multi-label sentiment analysis on 100 languages with dynamic weighting for label imbalance.
IEEE Transactions on Neural Networks and Learning Systems, pages 1–13.
Jianfei Yu, Luís Marujo, Jing Jiang, Pradeep Karuturi, and William Brendel. 2018. Improving multi-label emotion classification via sentiment classification with dual attention transfer network. In *Proceedings of the 2018 Conference on Empirical Methods* in Natural Language Processing, pages 1097–1102, Brussels, Belgium. Association for Computational Linguistics.
Shu Zhang, Ran Xu, Caiming Xiong, and Chetan Ramaiah. 2022a. Use all the labels: A hierarchical multi-label contrastive learning framework. In *2022* IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 16639–16648.
Ximing Zhang, Qian-Wen Zhang, Zhao Yan, Ruifang Liu, and Yunbo Cao. 2021. Enhancing label correlation feedback in multi-label text classification via multi-task learning. In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, pages 1190–1200, Online. Association for Computational Linguistics.
Yangjun Zhang, Pengjie Ren, Wentao Deng, Zhumin Chen, and Maarten Rijke. 2022b. Improving multilabel malevolence detection in dialogues through multi-faceted label correlation enhancement. In *Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long* Papers), pages 3543–3555, Dublin, Ireland. Association for Computational Linguistics.
Deyu Zhou, Yang Yang, and Yulan He. 2018. Relevant emotion ranking from text constrained with emotion relationships. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 561–571, New Orleans, Louisiana. Association for Computational Linguistics.
Yangyang Zhou, Xin Kang, and Fuji Ren. 2022. Employing contrastive strategies for multi-label textual emotion recognition. In *Intelligent Information Processing XI*, pages 299–310, Cham. Springer International Publishing.
Yu Zhu and Ou Wu. 2022. Elementary discourse units with sparse attention for multi-label emotion classification. *Knowledge-Based Systems*, 240:108114.
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
✓ A1. Did you describe the limitations of your work?
section 6 A2. Did you discuss any potential risks of your work?
Not applicable. Left blank.
✓ A3. Do the abstract and introduction summarize the paper's main claims?
abstract and section 1
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✗ **Did You Use Or Create Scientific Artifacts?**
Left blank.
B1. Did you cite the creators of artifacts you used?
Not applicable. Left blank.
B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
Not applicable. Left blank.
B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
Not applicable. Left blank.
B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
Not applicable. Left blank.
B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
Not applicable. Left blank.
B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
Not applicable. Left blank.
## C ✓ **Did You Run Computational Experiments?** Section 4
✗ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used? Our paper focuses more on the performance of the model.
The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance.
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? section 4
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
section 4
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)? section 4 D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
Not applicable. Left blank.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? Not applicable. Left blank.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
Not applicable. Left blank.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
Not applicable. Left blank.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
Not applicable. Left blank. |
bu-etal-2023-segment | Segment-Level and Category-Oriented Network for Knowledge-Based Referring Expression Comprehension | https://aclanthology.org/2023.findings-acl.557 | Knowledge-based referring expression comprehension (KB-REC) aims to identify visual objects referred to by expressions that incorporate knowledge. Existing methods employ sentence-level retrieval and fusion methods, which may lead to issues of similarity bias and interference from irrelevant information in unstructured knowledge sentences. To address these limitations, we propose a segment-level and category-oriented network (SLCO). Our approach includes a segment-level and prompt-based knowledge retrieval method to mitigate the similarity bias problem and a category-based grounding method to alleviate interference from irrelevant information in knowledge sentences. Experimental results show that our SLCO can eliminate interference and improve the overall performance of the KB-REC task. |
## Segment-Level And Category-Oriented Network For Knowledge-Based Referring Expression Comprehension
Yuqi Bu1,2∗
, Xin Wu1,2∗
, Liuwu Li1,2**, Yi Cai**1,2†
, Qiong Liu1**, Qingbao Huang**3,4 1 School of Software Engineering, South China University of Technology 2 Key Laboratory of Big Data and Intelligent Robot (SCUT), MOE of China 3 School of Electrical Engineering, Guangxi University 4 Guangxi Key Laboratory of Multimedia Communications and Network Technology
{seyqbu,sexinw}@mail.scut.edu.cn, [email protected],
{ycai,liuqiong}@scut.edu.cn, [email protected]
## Abstract
Knowledge-based referring expression comprehension (KB-REC) aims to identify visual objects referred to by expressions that incorporate knowledge. Existing methods employ sentencelevel retrieval and fusion methods, which may lead to issues of similarity bias and interference from irrelevant information in unstructured knowledge sentences. To address these limitations, we propose a segment-level and category-oriented network (SLCO). Our approach includes a segment-level and promptbased knowledge retrieval method to mitigate the similarity bias problem and a categorybased grounding method to alleviate interference from irrelevant information in knowledge sentences. Experimental results show that our SLCO can eliminate interference and improve the overall performance of the KB-REC task. ‡
## 1 Introduction
Referring expression comprehension (REC), a.k.a.
visual grounding, aims to identify a visual object referred to by a referring expression that disambiguates multiple objects (Cirik et al., 2018; Qiao et al., 2021). As a core task of languagevision fields, REC benefits many downstream multimodal tasks, e.g., robotics (Berg et al., 2020; Wang et al., 2022) and vision-and-language navigation
(Qi et al., 2020; Gao et al., 2021).
To explore a broader domain of knowledge, Wang et al. extend the REC task to knowledgebased referring expression comprehension (KBREC), and propose a baseline model and benchmark (Wang et al., 2020). This task requires the use of external knowledge (e.g., commonsense and encyclopedia) to refer to objects. This necessitates the model's ability to retrieve knowledge related to expressions and associate it with image and expression, enabling localization of the referent.
![0_image_0.png](0_image_0.png)
The existing method ECIFA (Wang et al., 2020)
retrieves and fuses knowledge in a sentence-level framework. They utilize sentence-level similarity to retrieve the most similar unstructured knowledge sentences from external knowledge bases, e.g., descriptive sentences from Wikipedia. Then they fuse all these retrieved knowledge sentences with expressions to locate the referent. However, ECIFA
still has two limitations. Firstly, the sentence-level similarity method exhibits a similarity bias problem
(Bogatu et al., 2022). This means that although the retrieved knowledge sentences are lexically similar to the query expression, they may not be the intended knowledge for understanding the expression. Consequently, the irrelevant knowledge may result in an incorrect localization of the referent due to error propagation. As shown in Fig. 1(a),
the intended knowledge for this expression is about pillows, while existing methods retrieve knowledge all about sofas due to sentence similarity with the expression. As a result, the lack of knowledge about pillows leads to localizing the incorrect object, the sofa. Secondly, the retrieved unstructured knowledge sentences may contain a large amount of information that is unrelated to the referent, leading to interference in object localization. As shown in Fig. 1(a), the irrelevant information "on the floor" in the second knowledge sentence may mislead the model to focus on objects located on the ground, rather than the intended focus of "on the sofa" in the expression, resulting in an incorrect localization of the sofa on the floor. Even the irrelevant information "on the bed" in the ground-truth knowledge sentence may potentially mislead the model to localize the incorrect object on the bed.
Based on statistical analysis, we find that most knowledge-based referring expressions can be divided into two segments according to the information contained: (1) Visual segments (e.g., "on the sofa" in Fig. 1(b)), which can be interpreted based on visual content, such as color, shape, and relative position of objects; (2) Knowledge segments (e.g.,
"used for sleeping" in Fig. 1(b)), which require additional knowledge beyond the visual content to be understood, such as function and non-visual object attributes. Distinguishing these two types of segments and discarding the visual segment during knowledge retrieval can help to solve the similarity bias problem. Moreover, for grounding, it is only necessary to know the category of objects corresponding to the knowledge segment, and detailed descriptive knowledge about the object is not required. Therefore, we employ a category-oriented method to retrieve knowledge categories and fuse them with the visual segment for object localization, which can avoid irrelevant information from knowledge sentences. For example, in Fig. 1(b),
the knowledge segment can identify which object categories are used for sleeping and narrows the target to pillows and sofas. Then, these categories associated with the visual segment distinguish multiple instances of pillows and accurately locates the referent one on the sofa.
In this paper, we propose a segment-level and category-oriented network (SLCO), which utilizes knowledge segments to retrieve knowledge categories and delegate them to visual segments for grounding target objects. It consists of three modules: a segment detection module, a prompt-based retrieval module, and a category-based grounding module. Firstly, the segment detection module identifies visual and knowledge segments. Then, inspired by the excellent knowledge retrieval ability of prompt learning (Shin et al., 2020; Zhong et al.,
2021), we present a prompt-based retrieval module that uses knowledge segments as hints to elicit knowledge categories from generic language models. Finally, the category-based grounding module associates the retrieved knowledge categories with visual segments for target object localization.
The contributions can be summarized as follows:
- We propose a segment-level and prompt-based retrieval method that can retrieve object categories corresponding to knowledge segments, thereby addressing the similarity bias problem and reducing incorrect knowledge retrieval.
- We propose a category-based method to associate knowledge categories with visual segments for object localization, thereby alleviating the interference from irrelevant information in knowledge sentences.
- Experimental results on the KB-Ref dataset show that our SLCO can eliminate interference and improve the overall performance.
## 2 Related Work 2.1 Referring Expression Comprehension
REC is a fundamental task in the multimodal field.
Existing methods are twofold based on the alignment pattern used. Two-stage methods (Wang et al.,
2019; Yu et al., 2018) involve an initial stage for detecting boxes, followed by a second stage for ranking these boxes based on an expression. In contrast, one-stage methods (Yang et al., 2020; Huang et al., 2021; Li et al., 2021; Deng et al., 2021; Yang et al.,
2022) integrate both visual and textual features to directly regress the bounding box.
To extend this task to a broader knowledge domain, (Wang et al., 2020) introduced a knowledgebased REC task with a benchmark dataset and a two-stage model. This model retrieves knowledge by sentence similarity and fuses it with expression and image for object ranking and selection. However, this model has the problems of similarity bias
![2_image_0.png](2_image_0.png)
and interference from knowledge sentences. To solve these problems, we try to identify parts of expressions that require knowledge and retrieve it in a segment-level and category-oriented manner.
## 2.2 Knowledge Retrieval
Early attempts in knowledge retrieval primarily relied on similarity measures and matching methods. However, these methods lack the flexibility to handle the variability of natural language. Recently, prompt-based methods (Petroni et al., 2019; Liu et al., 2021) have been shown to possess superior knowledge retrieval capabilities. Many studies
(Shin et al., 2020; Qin and Eisner, 2021; Zhong et al., 2021) have focused on prompt engineering to identify effective templates for knowledge retrieval.
However, existing methods typically assume that the subject/object and relation are known, and the task is to identify the corresponding object/subject.
Nevertheless, in real-world scenes, it is more valuable to analyze which parts of a sentence require knowledge, rather than solely relying on the prespecified subject/object and relation. In this paper, we propose a method for detecting parts of sentences that require knowledge and automatically generating prompts for knowledge retrieval.
## 3 Proposed Method
An illustration in Fig. 2, SLCO contains three main modules: (1) A segment detection module, which identifies knowledge segments and visual segments in an expression; (2) A prompt-based retrieval module, which employs knowledge segments as hints to elicit knowledge category from language models; and (3) A category-based grounding module, which associates knowledge category and visual segments for object localization.
## 3.1 Segment Detection
The main idea of this module is to identify parts of expressions that cannot be inferred solely from images as knowledge segments, and parts with corresponding visual features as visual segments. Given a referring expression x = {x1*, ..., x*n} comprising n tokens, a knowledge segment can be defined as a subset of the expression xkn ⊆ x, which consists of tokens associated with external knowledge.
Given an image and a referring expression, we first encode visual features v with a convolutional backbone and encode textual features x using a pretrained language model. Due to the difficulties in aligning long expressions with visual features all at once, inspired by sentence decomposition
(Yang et al., 2020; Li et al., 2021), we perform a subsentence generation to break the expressions into subsentences by T iterations. The features of subsentence xsub(t) at the t-th iteration are:
$$x_{s u b(t)}=s_{s u b(t)}\cdot x,\qquad\qquad(1)$$
$$s_{s u b(t)}=C o n v(v_{s u b(t-1)}\cdot x\cdot s_{s u b(t-1)}),\quad(2)$$
where ssub(t)is a score to determine the position of subsentence, with a matrix of ones as an initial
![3_image_0.png](3_image_0.png)
value. In iteration, we use a FiLM method (Perez et al., 2018) for feature projection and obtain visual features of the subsentences vsub(t), as follows:
$$\begin{array}{c}{{v_{s u b(t)}=R e L U(v_{s u b(t-1)}\odot L i n e a r(x_{s u b(t)})}}\\ {{+L i n e a r(x_{s u b(t)})),}}\end{array}\tag{3}$$
where ⊙ represents element-wise multiplication.
To determine which parts of expressions have
corresponding visual features, we present a segment activation method that attends visual features
to textual features, in contrast to the text-to-image
alignment scheme of most REC models. Concretely, we first concatenate the results of T subsentence features to obtain xsub and the visual features of T subsentences to obtain vsub, respectively.
Then, to identify activated parts of expressions, we
use cross-modal attention to attend the visual features vsub to the textual features xsub. The visiondependent subsentence features x′sub are:
$$x_{s u b}^{\prime}=\sigma(x_{s u b}\cdot v_{s u b}^{\top})\cdot v_{s u b},$$
sub) · vsub, (4)
where σ represents a softmax function. Finally,
we project these features into two scores via MLP:
one representing the parts of the sentence activated
by visual context and the other representing the
unactivated parts. We then obtain visual segment
xvi and knowledge segment xkn by multiplying the
sentence features with these scores, as follows:
$$x_{kn}=\sigma(Linear(x^{\prime}_{sub}))\odot x+x,\tag{5}$$ $$=1-x.$$
and xvi = 1 − xkn.
During the early stages of model training, the visual features and textual features obtained by single-modal encoders are relatively independent, which makes it challenging to align their representations. To tackle this issue, we introduce a pseudosupervision strategy to supervise the detection process. In particular, to generate pseudo-annotations, we extract common substrings between expressions and their corresponding reference knowledge, as well as extract knowledge guide words (e.g., "used for" and "made up of") along with the following words. Subsequently, the extracted knowledgerelated words are merged and converted into tokenslevel scores, where 1 indicates knowledge segments and 0 indicates visual segments. Finally, we use a mean squared error (MSE) loss Lseg to reduce the discrepancy between pseudo-annotations and the predicted scores of knowledge segments.
## 3.2 Prompt-Based Retrieval
After the detection of knowledge segments, this module conducts segment-level knowledge retrieval of object categories to which the intended knowledge in expressions pertains.
Firstly, we use words in the expression that have a higher score than the median of the knowledge segment scores to regenerate knowledge segments for prompts. Considering that longer knowledge segments are primarily descriptive statements, while shorter segments pertain to similar objects or synonyms, we devise two prompt templates for these scenes. They are in the forms of "A is xkn" and " is a kind of xkn", where xkn is a knowledge segment in the input slot. When given a prompt, a pretrained language model f*P LM* predicts the probability of different tokens z ∈ Z that could potentially fill the answer slot. These predictions are then filtered through labels of objects in an image to narrow down potential answers. The top-M highest-scoring tokens zˆ are:
$${\hat{z}}={\underset{z\in{\mathcal{Z}}}{\operatorname{argmax}}}\,P(f_{P L M}(z)|x_{p o r m p t}).\quad\quad(6)$$
To enhance the category-oriented retrieval ability of this module, we expand the pretrained language model's knowledge by incorporating the knowledge bases on which KB-Ref is based. Specifically, inspired by entity-level masking (Sun et al., 2019),
we mask knowledge categories in knowledge sentences for fine-tuning the language model.
## 3.3 Category-Based Grounding
In this module, we associate the retrieved knowledge categories with the detected visual segments for visual grounding. According to (Akula et al.,
2020), the position of knowledge categories in a sentence may significantly impact its meaning.
Thus, we consider three association forms that can accommodate most situations. In particular, features of each candidate knowledge category xz(m)
are integrated into the beginning, middle, and end of a visual segment to obtain ybeg(m), ymid(m), and yend(m), respectively. Then, we perform category association to concatenate and linear project these features into category-associated sentence features ybme(m)for the m-th knowledge category.
During iteration, features of top-ranked categoryassociated sentences with high probability value tend to largely accumulate, facilitating the model to learn important category information. Therefore, we present a knowledge accumulation method to iteratively incorporate textual information of M
category-associated sentences into visual features using multi-head attention, as shown in Fig. 3.
Specifically, there are two parallel branches. At each step m of the iteration, the first branch fuses the m-th category-associated sentences ybme(m)
with the (m − 1)-th visual features u1(m−1) to obtain u1(m), as follows:
$$u_{1(m)}=\sigma(\frac{u_{1(m-1)}\cdot y_{b m e(m)}^{\top}}{\sqrt{d_{y}}})\cdot y_{b m e(m)},\quad(7)$$
where dy is the dimension of ybme(m). Additionally, the u1(m)is used to compute the weight of each sentence concerning the visual features. We follow the calculation of the verification score in (Yang et al.,
2022) to compute this weight. After M iterations, the weights are summed element-wise to obtain category-activated weight svi. The second branch also fuses ybme(m) with the (m − 1)-th visual features u2(m−1) to obtain u2(m)in each step of iteration. After M iterations, these features u2(m) are concatenated and projected via MLP, then added to the original visual features, obtaining u′2
. Finally, the category and segment-related visual features vkn are obtained as follows:
$$v_{kn}=\sigma(\frac{u_{2}^{\prime}\cdot u_{2}^{\prime\top}}{\sqrt{d_{u}}})\cdot v\odot s_{vi}+v,\tag{8}$$ where $d_{u}$ is the dimension of $u_{2}^{\prime}$.
After activating objects on visual features with knowledge categories and visual segments, we employ a variant of the Transformer decoder (Vaswani et al., 2017) to further distinguish multiple instances of similar objects. It comprises 6 layers of multi-head attention and point-wise fully connected. In each layer, two self-attentions are replaced with cross-modal attention. In one instance, visual features serve as query, text features serve as key and value, and the other is reversed.
The resulting features are then projected into fourdimensional object coordinates via MLP.
## 3.4 Training Objective
The proposed SLCO is trained by a joint loss containing an MSE loss Lseg and a diversity loss (Yang et al., 2020) Ldiv for segment detection, as well as a smooth L1 loss Ll1 and a GIoU loss (Rezatofighi et al., 2019) L*giou* for grounding, as follows:
$$\mathcal{L}=\lambda_{seg}\mathcal{L}_{seg}+\lambda_{div}\mathcal{L}_{div}+\lambda_{l1}\mathcal{L}_{l1}+\lambda_{giou}\mathcal{L}_{giou},\tag{9}$$ where $\lambda$ are trade-off factors.
## 4 Experiment 4.1 Experimental Setup
Dataset. KB-Ref (Wang et al., 2020) is the first and currently the only dataset for the KB-REC task.
It includes 43,284 knowledge-based referring expressions for objects in 16,917 images from Visual Genome (Krishna et al., 2017). The knowledge involved is derived from Wikipedia, ConceptNet, and WebChild, which are reformed into unstructured sentences. We follow the official data splits.
Evaluation Metrics. Following (Wang et al.,
2020), accuracy is an average of the number of predictions with IoU greater than 0.50. Given the absence of ground-truth boxes in practical applications, the model inputs available for experiments are images and expressions, without ground-truth boxes. For knowledge retrieval, accuracy is an average of the number of predictions that the groundtruth knowledge category is within the top-M results, i.e., Acc@M. We obtain the ground-truth category corresponding to the ground-truth knowledge from the KB-Ref dataset.
| Model | Visual Backbone | Language Model | Val | Test |
|----------------------------------------------------------|--------------------|------------------|-------|--------|
| Two-stage alignment methods LGRANs (Wang et al., 2019) | VGG-16 | LSTM | 21.72 | 21.37 |
| MAttNet (Yu et al., 2018) | VGG-16 | LSTM | 22.04 | 21.73 |
| ECIFA (Wang et al., 2020) | VGG-16 | LSTM | 24.11 | 23.82 |
| One-stage alignment methods LBYLNet (Huang et al., 2021) | DarkNet-53 | LSTM | 22.65 | 22.41 |
| ReSC (Yang et al., 2020) | DarkNet-53 | BERT | 27.56 | 26.88 |
| BBA (Li et al., 2021) | DarkNet-53 | BERT | 28.28 | 27.08 |
| TransVG (Deng et al., 2021) | ResNet-101 w/ DETR | BERT | 25.03 | 24.53 |
| VLTVG (Yang et al., 2022) | ResNet-101 w/ DETR | BERT | 29.23 | 28.96 |
| SLCO (Ours) | ResNet-101 w/ DETR | BERT | 32.15 | 30.44 |
Implementation. During training, we finetune the knowledge retrieval module on a 2080Ti GPU and then end-to-end optimize the remaining modules on two P100 GPUs. The height and width of the input image are resized to 640 and the max length of the expression is set to 40. We use a ResNet-101 (He et al., 2016) initialized with weights from DETR
(Carion et al., 2020) as the visual backbone, and BERT (Devlin et al., 2019) as the language model.
We then follow the preprocessing of (Yang et al.,
2022). For training, we use the AdamW optimizer to train SLCO with a batch size of 28 and a total of 90 epochs. The initial learning rate for feature encoders is set to 10−5, and the other modules to 10−4. For the first 10 epochs, we freeze the weights of feature encoders.
In the prompt-based retrieval module, we use a LAMA framework (Petroni et al., 2019) and an uncased BERT large model as f*P LM* based on (Shin et al., 2020) which suggests its effectiveness for knowledge retrieval among different PLMs. We employ a Faster R-CNN (Ren et al., 2017) pretrained on Visual Genome (Krishna et al., 2017) to identify object labels in an image. The number of retrieved knowledge M is set to 3.
We follow (Yang et al., 2022) to set smooth L1 and GIoU loss for object localization. The tradeoff between these loss parameters has been tuned to be 5:2. Additionally, we perform a grid search and find the optimal parameters 10 and 0.125 for the MSE loss and diversity loss in the segment detection module. Accordingly, we set λseg, λdiv, λl1, λ*giou* as 10, 0.125, 5, and 2 in Eq.(9).
Baseline Models. With regards to two-stage methods, LGRANs (Wang et al., 2019) uses a graphbased attention method to infer inter-object relationships. MAttNet (Yu et al., 2018) employs three modules to handle the grounding of object appearance, location, and relationship. ECIFA (Wang et al., 2020) retrieves knowledge by cosine similarity between expressions and knowledge sentences, and then uses a stack of LSTM to fuse all the knowledge sentences with expressions. In regard to onestage methods, LBYLNet (Huang et al., 2021) uses a landmark convolution method to encode object features. ReSC (Yang et al., 2020) utilizes a recursive framework to fuse visual and textual features. Based on it, BBA (Li et al., 2021) employs a bottom-up and bidirectional framework to align multimodal features. TransVG (Deng et al., 2021)
constructs a Transformer-based grounding framework. VLTVG (Yang et al., 2022) further extracts text-conditioned discriminative visual features.
Models for the ordinary REC task lack mechanisms to acquire external knowledge and interact with multimodal information. Therefore, following (Wang et al., 2020), we train all models using their default implementations in the ordinary REC training manner on the KB-Ref dataset.
## 4.2 Main Results
The results in Table 1 show that ECIFA performs better than other two-stage methods, as it explicitly incorporates external knowledge from multiple knowledge bases. As for one-stage methods, BERTbased models generally outperform LSTM-based ones. The reason lies in that the implicit knowledge from the pretrained language model BERT
enhances the comprehension of knowledge-based referring expressions. Moreover, the results show
| Method | Val | Test |
|---------------|-------------|-------------|
| Full model | 32.15 | 30.44 |
| w/o Detection | 29.73↓ 2.42 | 29.73↓ 0.71 |
| w/o Retrieval | 30.15↓ 2.00 | 29.69↓ 0.75 |
| w/o Grounding | 29.70↓ 2.45 | 29.56↓ 0.88 |
| Method | Retrieval | KB-REC | |
|-----------|-------------|----------|-------|
| Acc@1 | Val | Test | |
| Parsing | 44.82 | 29.75 | 28.82 |
| Detection | 52.59 | 32.15 | 30.44 |
Table 3: Results of segment detection methods on knowledge retrieval and the KB-REC task.
that our proposed SLCO achieves a performance gain of up to 2.92%, demonstrating the effectiveness of the segment-level and category-oriented strategy. Furthermore, SLCO is the first model that is able to associate both implicit and explicit knowledge from pretrained language models.
Additionally, we conduct an evaluation of the inference time. Our model exhibits an inference time of 0.117 seconds per sample, whereas the baseline model ECIFA necessitates 0.367 seconds.
## 4.3 Ablation Study
We conduct a series of experiments to verify the effectiveness of three main modules (cf. Table 2).
Effectiveness of Segment Detection Module.
There is an average decrease of 1.57% when we remove this module and its loss functions. This result validates the effectiveness of our segment-level method, which solves the similarity bias problem in the sentence-level method.
Effectiveness of Prompt-Based Retrieval Module. To evaluate the importance of this module, we replace the retrieved knowledge categories with empty strings. The results show that removing this module leads to an average decline of 1.38%, indicating the value of knowledge categories for visual grounding. Moreover, this experimental setup corresponds to using only implicit knowledge, which is similar to the knowledge sources employed by the state-of-the-art one-stage methods in Table 1.
Nevertheless, our method has better performance than these methods.
Effectiveness of Category-Based Grounding
| Method | Acc@1 | Acc@2 | Acc@3 |
|-----------------------------------------------------------------------------------------------------------------------------------|---------|---------|---------|
| A. Knowledge retrieval methods Sim. w/ expr. 26.40 30.63 | 35.63 | | |
| Sim. w/ seg. | 43.47 | 46.58 | 51.29 |
| Prompt w/ expr. | 45.48 | 56.48 | 61.34 |
| Prompt w/ seg. | 52.59 | 61.85 | 65.53 |
| B. Fine-tuning strategies of language models None 35.96 46.33 52.42 Random mask 48.08 58.64 63.27 Category mask 52.59 61.85 65.53 | | | |
Module. We ablate this module as well as the prompt-based retrieval module, thus the visual grounding process is performed by visual segments only. Results show that this reduces the model performance by 1.67% on average. The reason lies in that model lacks the ability to associate knowledge categories, making it hard to understand referring expressions and localize the correct referent.
## 4.4 Evaluation Of Segment Detection Method
To explore methods for identifying knowledge and visual segments, we construct a parsing method to compare with the proposed detection method. The parsing method divides the expressions according to the constituency parsing in the syntax. Based on observation, knowledge information mostly appears in predicates or subordinate clauses of expressions. Thus, we take the first half of the parsed sentence as a visual segment and the second half as a knowledge segment.
In Table 3, it can be observed that the parsing method underperforms in both knowledge retrieval and KB-REC. The reason is that the diversity of natural language expressions poses a challenge in identifying knowledge segments based on specific rules, as they may appear in various positions within sentences. Moreover, the results demonstrate the flexibility of our segment detection method in recognizing knowledge and visual information in expressions, which improves the performance of both knowledge retrieval and KB-REC.
| Method | Val | Test |
|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-------|--------|
| A. Objects associated with knowledge Knowledge + expression 30.40 29.38 Knowledge + visual segment 32.15 30.44 B. Methods of associating knowledge Concatenation after attention 29.25 29.23 Addition after attention 29.83 29.73 Iterative attention 32.15 30.44 | | |
## 4.5 Evaluation Of Knowledge Retrieval Method
In block A of Table 4, we compare the proposed method with the cosine similarity method in the baseline model ECIFA (Wang et al., 2020). Results show that replacing the entire expression in the cosine similarity method with the knowledge segments obtained by our segment detection module can significantly improve the performance of knowledge retrieval. Additionally, our promptbased knowledge retrieval method significantly outperforms the sentence-level similarity method, indicating that our segment-level and category-oriented method can effectively alleviate the similarity bias problem. Moreover, we evaluate the inference time of different retrieval methods under the same settings. The results show that our prompt-based method demonstrates high efficiency in retrieving knowledge with 0.020s per sample, which is 70 times faster than the cosine similarity method in ECIFA which takes 1.400s.
Results in block B of Table 4 show that finetuning a language model using category masks improves the model's capacity to retrieve categories.
It contributes to our category-oriented method and alleviates the interference of irrelevant information from unstructured knowledge sentences.
## 4.6 Evaluation Of Knowledge-Based Grounding Method
Block A of Table 5 shows comparison results for associating the textual features of knowledge categories and expressions. It can be seen that associating knowledge categories with visual segments is superior to that with the entire expressions. This is because visual segments concentrate on disambiguation at the instance level and reduce interference from irrelevant parts of expressions.
![7_image_0.png](7_image_0.png)
In block B of Table 5, we evaluate the performance of various methods for associating multiple category-associated sentences and visual features.
There are three settings for the association: Multiple sentences and visual features are processed separately by multiple attention mechanisms, and then their results are (1) concatenated or (2) added together; and (3) the attention mechanism is iteratively applied to multiple sentences. The results indicate that the iterative method is the most effective, as it accumulates features from the top-1 category-associated sentence, which is more likely to contain accurate knowledge. In contrast, the concatenation method and the addition method treat all sentences equally, making it difficult to determine which sentences are more important.
The results in Fig. 4 show that the association of the top-3 knowledge categories with expressions achieves the best performance. As the number of knowledge decreases below three, the overall accuracy of knowledge retrieval diminishes, resulting in a degradation of grounding performance. Conversely, when the number of knowledge exceeds three, an excessive number of candidate knowledge categories may impede the model's ability to accurately associate knowledge for object localization.
## 4.7 Qualitative Results
As shown in Fig. 5(a) and Fig. 5(b), the baseline model is interfered with by the words "stove" and
"desk" in expressions, leading to incorrect results of knowledge retrieval and object localization. In contrast, SLCO effectively avoids this problem by utilizing knowledge segments to retrieve knowledge categories. As shown in Fig. 5(a), SLCO accurately retrieves relevant knowledge categories based on the knowledge segments, and then activates multiple objects related to the retrieved categories in the visual features shown in Fig. 5(c). Then, in
![8_image_0.png](8_image_0.png)
Fig. 5(d), the decoder further refines the objects by visual segments "above the stove" and "in front of the man", thereby accurately localizing the referent. Qualitative results show that SLCO can solve the issues of similarity bias and interference by irrelevant information in knowledge sentences.
## 5 Conclusion
In this paper, we propose a segment-level and category-oriented network to endow the model with the ability to identify and utilize the knowledge and visual segments in a targeted manner. Specifically, the proposed method uses knowledge segments to retrieve knowledge, which addresses the similarity bias problem in the sentence-level method. Additionally, our category-oriented retrieval method can elicit knowledge categories from language models, mitigating the interference from irrelevant information in knowledge sentences. Experimental results demonstrate the effectiveness of the proposed method in addressing two limitations of the existing methods, thus improving the accuracy of the KB-REC task. In future work, we will explore more fine-grained information in expressions and combine it with knowledge and visual content.
## Limitations
To better understand the limitations of the proposed method, we conducted an error analysis by randomly selecting 100 incorrect predictions and categorizing their error types. The results revealed that 32% of errors were caused by grounding issues, specifically, an inability to distinguish between multiple objects of the same category, despite having knowledge category of the referent object. The results indicate that there is a need for improvement in the ability to discriminate visual objects, especially for object categories with long-tailed distributions. Additionally, the results show that 20% of errors are due to imprecise object detection, particularly for small objects. This highlights the need for optimization of the visual encoder and loss function. Moreover, 14% of errors are attributed to incorrect knowledge retrieval. To address this, incorporating more fine-grained information in expressions for retrieval should be considered as a future research direction. Furthermore, 34% of incorrect predictions can be attributed to issues with the ground-truth annotations, which may negatively impact the model's learning process.
## Acknowledgments
This work is supported by the National Natural Science Foundation of China (62076100, 61976094, and 62276072), the Guangxi Natural Science Foundation (No. 2022GXNSFAA035627), Fundamental Research Funds for the Central Universities, SCUT (x2rjD2220050), the Science and Technology Planning Project of Guangdong Province
(2020B0101100002), CAAI-Huawei MindSpore Open Fund, CCF-Zhipu AI Large Model Fund, and the Open Research Fund of Guangxi Key Laboratory of Multimedia Communications and Network Technology.
## References
Arjun R. Akula, Spandana Gella, Yaser Al-Onaizan, Song-Chun Zhu, and Siva Reddy. 2020. Words aren't enough, their order matters: On the robustness of grounding visual referring expressions. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, ACL 2020, Online, July 5-10, 2020, pages 6555–6565. Association for Computational Linguistics.
Matthew Berg, Deniz Bayazit, Rebecca Mathew, Ariel Rotter-Aboyoun, Ellie Pavlick, and Stefanie Tellex.
2020. Grounding language to landmarks in arbitrary outdoor environments. In 2020 IEEE International Conference on Robotics and Automation, ICRA 2020, Paris, France, May 31 - August 31, 2020, pages 208–
215. IEEE.
Alex Bogatu, Zili Zhou, Dónal Landers, and André Freitas. 2022. Active entailment encoding for explanation tree construction using parsimonious generation of hard negatives. *CoRR*, abs/2208.01376.
Nicolas Carion, Francisco Massa, Gabriel Synnaeve, Nicolas Usunier, Alexander Kirillov, and Sergey Zagoruyko. 2020. End-to-end object detection with transformers. In Computer Vision - ECCV 2020 -
16th European Conference, Glasgow, UK, August 23-28, 2020, Proceedings, Part I, volume 12346 of Lecture Notes in Computer Science, pages 213–229.
Springer.
Volkan Cirik, Taylor Berg-Kirkpatrick, and LouisPhilippe Morency. 2018. Using syntax to ground referring expressions in natural images. In *Proceedings* of the Thirty-Second AAAI Conference on Artificial Intelligence, (AAAI-18), the 30th innovative Applications of Artificial Intelligence (IAAI-18), and the 8th AAAI Symposium on Educational Advances in Artificial Intelligence (EAAI-18), New Orleans, Louisiana, USA, February 2-7, 2018, pages 6756–6764. AAAI
Press.
Jiajun Deng, Zhengyuan Yang, Tianlang Chen, Wengang Zhou, and Houqiang Li. 2021. Transvg: Endto-end visual grounding with transformers. In *2021* IEEE/CVF International Conference on Computer Vision, ICCV 2021, Montreal, QC, Canada, October 10-17, 2021, pages 1749–1759. IEEE.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019, Minneapolis, MN, USA,
June 2-7, 2019, Volume 1 (Long and Short Papers),
pages 4171–4186. Association for Computational Linguistics.
Chen Gao, Jinyu Chen, Si Liu, Luting Wang, Qiong Zhang, and Qi Wu. 2021. Room-and-object aware knowledge reasoning for remote embodied referring
expression. In *IEEE Conference on Computer Vision* and Pattern Recognition, CVPR 2021, virtual, June 19-25, 2021, pages 3064–3073. Computer Vision Foundation / IEEE.
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep residual learning for image recognition. In *2016 IEEE Conference on Computer Vision* and Pattern Recognition, CVPR 2016, Las Vegas, NV, USA, June 27-30, 2016, pages 770–778. IEEE
Computer Society.
Binbin Huang, Dongze Lian, Weixin Luo, and Shenghua Gao. 2021. Look before you leap: Learning landmark features for one-stage visual grounding. In IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2021, virtual, June 19-25, 2021, pages 16888–16897. Computer Vision Foundation / IEEE.
Ranjay Krishna, Yuke Zhu, Oliver Groth, Justin Johnson, Kenji Hata, Joshua Kravitz, Stephanie Chen, Yannis Kalantidis, Li-Jia Li, David A. Shamma, Michael S. Bernstein, and Li Fei-Fei. 2017. Visual genome: Connecting language and vision using crowdsourced dense image annotations. *Int. J.*
Comput. Vis., 123(1):32–73.
Liuwu Li, Yuqi Bu, and Yi Cai. 2021. Bottom-up and bidirectional alignment for referring expression comprehension. In *MM '21: ACM Multimedia Conference, Virtual Event, China, October 20 - 24, 2021*,
pages 5167–5175. ACM.
Pengfei Liu, Weizhe Yuan, Jinlan Fu, Zhengbao Jiang, Hiroaki Hayashi, and Graham Neubig. 2021. Pretrain, prompt, and predict: A systematic survey of prompting methods in natural language processing.
CoRR, abs/2107.13586.
Ethan Perez, Florian Strub, Harm de Vries, Vincent Dumoulin, and Aaron C. Courville. 2018. Film: Visual reasoning with a general conditioning layer. In *Proceedings of the Thirty-Second AAAI Conference on* Artificial Intelligence, (AAAI-18), the 30th innovative Applications of Artificial Intelligence (IAAI-18), and the 8th AAAI Symposium on Educational Advances in Artificial Intelligence (EAAI-18), New Orleans, Louisiana, USA, February 2-7, 2018, pages 3942–
3951. AAAI Press.
Fabio Petroni, Tim Rocktäschel, Sebastian Riedel, Patrick S. H. Lewis, Anton Bakhtin, Yuxiang Wu, and Alexander H. Miller. 2019. Language models as knowledge bases? In *Proceedings of the* 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, EMNLP-IJCNLP 2019, Hong Kong, China, November 3-7, 2019, pages 2463–2473. Association for Computational Linguistics.
Yuankai Qi, Qi Wu, Peter Anderson, Xin Wang, William Yang Wang, Chunhua Shen, and Anton van den Hengel. 2020. REVERIE: remote embodied
visual referring expression in real indoor environments. In 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2020, Seattle, WA, USA, June 13-19, 2020, pages 9979–9988.
Computer Vision Foundation / IEEE.
Yanyuan Qiao, Chaorui Deng, and Qi Wu. 2021. Referring expression comprehension: A survey of methods and datasets. *IEEE Trans. Multim.*, 23:4426–4440.
Guanghui Qin and Jason Eisner. 2021. Learning how to ask: Querying lms with mixtures of soft prompts. In *Proceedings of the 2021 Conference of the North* American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2021, Online, June 6-11, 2021, pages 5203–5212. Association for Computational Linguistics.
Shaoqing Ren, Kaiming He, Ross B. Girshick, and Jian Sun. 2017. Faster R-CNN: towards real-time object detection with region proposal networks. IEEE Trans.
Pattern Anal. Mach. Intell., 39(6):1137–1149.
Hamid Rezatofighi, Nathan Tsoi, JunYoung Gwak, Amir Sadeghian, Ian D. Reid, and Silvio Savarese.
2019. Generalized intersection over union: A metric and a loss for bounding box regression. In IEEE
Conference on Computer Vision and Pattern Recognition, CVPR 2019, Long Beach, CA, USA, June 16-20, 2019, pages 658–666. Computer Vision Foundation /
IEEE.
Taylor Shin, Yasaman Razeghi, Robert L. Logan IV,
Eric Wallace, and Sameer Singh. 2020. Autoprompt:
Eliciting knowledge from language models with automatically generated prompts. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, EMNLP 2020, Online, November 16-20, 2020, pages 4222–4235. Association for Computational Linguistics.
Yu Sun, Shuohuan Wang, Yu-Kun Li, Shikun Feng, Xuyi Chen, Han Zhang, Xin Tian, Danxiang Zhu, Hao Tian, and Hua Wu. 2019. ERNIE: enhanced representation through knowledge integration. *CoRR*, abs/1904.09223.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In *Advances in Neural Information Processing Systems 30: Annual Conference on Neural* Information Processing Systems 2017, December 4-9, 2017, Long Beach, CA, USA, pages 5998–6008.
Peng Wang, Dongyang Liu, Hui Li, and Qi Wu. 2020.
Give me something to eat: Referring expression comprehension with commonsense knowledge. In MM
'20: The 28th ACM International Conference on Multimedia, Virtual Event / Seattle, WA, USA, October 12-16, 2020, pages 28–36. ACM.
Peng Wang, Qi Wu, Jiewei Cao, Chunhua Shen, Lianli Gao, and Anton van den Hengel. 2019. Neighbourhood watch: Referring expression comprehension via
language-guided graph attention networks. In *IEEE*
Conference on Computer Vision and Pattern Recognition, CVPR 2019, Long Beach, CA, USA, June 16-20, 2019, pages 1960–1968. Computer Vision Foundation / IEEE.
Yefei Wang, Kaili Wang, Yi Wang, Di Guo, Huaping Liu, and Fuchun Sun. 2022. Audio-visual grounding referring expression for robotic manipulation. In 2022 International Conference on Robotics and Automation, ICRA 2022, Philadelphia, PA, USA, May 23-27, 2022, pages 9258–9264. IEEE.
Li Yang, Yan Xu, Chunfeng Yuan, Wei Liu, Bing Li, and Weiming Hu. 2022. Improving visual grounding with visual-linguistic verification and iterative reasoning.
In IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2022, New Orleans, LA, USA, June 18-24, 2022, pages 9489–9498. IEEE.
Zhengyuan Yang, Tianlang Chen, Liwei Wang, and Jiebo Luo. 2020. Improving one-stage visual grounding by recursive sub-query construction. In *Computer Vision - ECCV 2020 - 16th European Conference, Glasgow, UK, August 23-28, 2020, Proceedings, Part XIV*, volume 12359 of *Lecture Notes in* Computer Science, pages 387–404. Springer.
Licheng Yu, Zhe Lin, Xiaohui Shen, Jimei Yang, Xin Lu, Mohit Bansal, and Tamara L. Berg. 2018. Mattnet: Modular attention network for referring expression comprehension. In *2018 IEEE Conference on* Computer Vision and Pattern Recognition, CVPR
2018, Salt Lake City, UT, USA, June 18-22, 2018, pages 1307–1315. Computer Vision Foundation /
IEEE Computer Society.
Zexuan Zhong, Dan Friedman, and Danqi Chen. 2021.
Factual probing is [MASK]: learning vs. learning to recall. In *Proceedings of the 2021 Conference* of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2021, Online, June 6-11, 2021, pages 5017–5033. Association for Computational Linguistics.
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
✓ A1. Did you describe the limitations of your work?
After the conclusion in Section 5 and before the references.
✗ A2. Did you discuss any potential risks of your work?
The task in this paper does not identify potential risks and harmful effects for the time being.
✓ A3. Do the abstract and introduction summarize the paper's main claims?
The introduction in Section 1.
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✓ **Did You Use Or Create Scientific Artifacts?**
The method in Section 3 and the experiments in Section 4.
✓ B1. Did you cite the creators of artifacts you used?
The method in Section 3 and the experiments in Section 4.
✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
In the abstract, we claim that our code will be made available to the public.
✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
The implementation in Section 4.
✗ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
We do not propose a new dataset.
✗ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
We do not have documentation of the artifacts.
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
The dataset in Section 4.
## C ✓ **Did You Run Computational Experiments?** The Implementation In Section 4.
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
The implementation in Section 4.
The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance.
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
The implementation in Section 4.
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
The experiments in Section 4 and the limitations after the conclusion.
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
The implementation in Section 4.
D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
No response.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? No response.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
No response.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
No response.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
No response. |
tang-etal-2023-mvp | {MVP}: Multi-task Supervised Pre-training for Natural Language Generation | https://aclanthology.org/2023.findings-acl.558 | Pre-trained language models (PLMs) have achieved remarkable success in natural language generation (NLG) tasks. Up to now, most NLG-oriented PLMs are pre-trained in an unsupervised manner using the large-scale general corpus. In the meanwhile, an increasing number of models pre-trained with labeled data (i.e. {``}supervised pre-training{''}) showcase superior performance compared to unsupervised pre-trained models. Motivated by the success of supervised pre-training, we propose Multi-task superVised Pre-training (MVP) for natural language generation. We collect a large-scale natural language generation corpus, MVPCorpus, from 77 datasets over 11 diverse NLG tasks. Then we unify these examples into a general text-to-text format to pre-train the text generation model MVP in a supervised manner. For each task, we further pre-train specific soft prompts to stimulate the model{'}s capacity to perform a specific task. Our MVP model can be seen as a practice that utilizes recent instruction tuning on relatively small PLMs. Extensive experiments have demonstrated the effectiveness and generality of our MVP model in a number of NLG tasks, which achieves state-of-the-art performance on 13 out of 17 datasets, outperforming BART by 9.3{\%} and Flan-T5 by 5.8{\%}. | # Mvp: Multi-Task Supervised Pre-Training For Natural Language Generation
Tianyi Tang1,4**, Junyi Li**1,3**, Wayne Xin Zhao**1,4 B and **Ji-Rong Wen**1,2,4 1Gaoling School of Artificial Intelligence, Renmin University of China 2School of Information, Renmin University of China 3DIRO, Université de Montréal 4Beijing Key Laboratory of Big Data Management and Analysis Methods [email protected] [email protected] [email protected]
## Abstract
Pre-trained language models (PLMs) have achieved remarkable success in natural language generation (NLG) tasks. Up to now, most NLG-oriented PLMs are pre-trained in an unsupervised manner using the large-scale general corpus. In the meanwhile, an increasing number of models pre-trained with labeled data
(i.e., "*supervised pre-training*") showcase superior performance compared to unsupervised pre-trained models. Motivated by the success of supervised pre-training, we propose Multitask superVised Pre-training (MVP) for natural language generation. We collect a large-scale natural language generation corpus, MVPCorpus, from 77 datasets over 11 diverse NLG
tasks. Then we unify these examples into a general text-to-text format to pre-train the text generation model MVP in a supervised manner.
For each task, we further pre-train specific soft prompts to stimulate the model's capacity to perform a specific task. Our MVP model can be seen as a practice that utilizes recent instruction tuning on relatively small PLMs. Extensive experiments have demonstrated the effectiveness and generality of our MVP model in a number of NLG tasks, which achieves state-of-the-art performance on 13 out of 17 datasets, outperforming BART by 9.3% and Flan-T5 by 5.8%.
## 1 Introduction
Natural language generation (NLG, also known as text generation) is a crucial capacity for language intelligence, which aims to generate human-like texts on demand (Garbacea and Mei, 2020). Since the emergence of the pre-training and fine-tuning paradigm, pre-trained language models (PLMs)
have dominated mainstream approaches for NLG
tasks (Lewis et al., 2020; Brown et al., 2020). With a large-scale general corpus, the majority of PLMs are pre-trained in an unsupervised (self-supervised) manner by leveraging intrinsic data correlations as B Corresponding author supervision signals. However, unsupervised pretraining is likely to incorporate noise that affects the performance of downstream tasks (Feng et al.,
2022), also leading to a slower rate of acquiring knowledge (Zhang et al., 2021).
In the meanwhile, more and more large-scale labeled datasets have become easily accessible (Deng et al., 2009; Liu et al., 2020). There is growing evidence that pre-training with labeled data can further improve the performance of PLMs, both in the fields of computer vision (He et al.,
2016; Dosovitskiy et al., 2021) and natural language processing (Lin et al., 2020b; Su et al.,
2022). These promising developments motivate us to consider pre-training text generation models with labeled data, which is called "*supervised pretraining*" (Feng et al., 2022). Existing work has shown that supervised pre-training can explicitly learn task-specific characteristics and alleviate the discrepancy between unsupervised pre-training and supervised fine-tuning (Lin et al., 2020b).
Furthermore, most NLG systems are often trained in a supervised way, requiring supervision signals to learn the input-to-output transformation.
For example, dialogue systems learn to generate appropriate responses based on historical utterances, and text summarization systems learn to extract essential information from long documents according to human-written summaries. Therefore, we suspect that supervised pre-training is more suited for NLG-oriented PLMs in essence since it can provide task-related instructions early in the *pre-training* stage instead of a later *fine-tuning stage*.
Inspired by the recent success of supervised pre-training, we propose Multi-task superVised Pre-training (MVP) for natural language generation by leveraging a variety of labeled text generation datasets. Specially, we collect a largescale labeled corpus, MVPCorpus, consisting of 77 datasets over 11 text generation tasks. Since recent research shows that an extensive scale of
| Settings | Supervised Pre-training | Unsupervised Pre-training |
|------------|---------------------------|-----------------------------|
| NLG | MVP (ours) | GPT-2, MASS, BART, T5 |
| NLU | FLAN, T0, Muppet, ExT5 | BERT, XLNet, RoBERTa, T5 |
multi-task pre-training (Aribandi et al., 2022) is the key to generalizing to new tasks for large PLMs, we combine these labeled datasets for multi-task pre-training. Existing popular works, as shown in Table 1, mainly focus on NLU tasks (Sanh et al.,
2022; Aribandi et al., 2022) or use unsupervised pre-training (Lewis et al., 2020; Raffel et al., 2020),
with no consideration of supervised pre-training on NLG tasks. To fill this gap, we explore supervised pre-training and multi-task learning for deriving both *effective* and *general* NLG models.
To develop our approach, we adopt a Transformer-based (Vaswani et al., 2017) sequenceto-sequence model as the backbone. In multi-task training, different tasks may "neutralize" the ability learned through other tasks (He and Choi, 2021).
To mitigate this potential issue, we propose to learn task-specific prompts based on the MVP model, following the structure of prefix-tuning (Li and Liang, 2021). Task-specific pre-training enables prompts to "store" specialized knowledge for each corresponding task. Integrating MVP with task-specific prompts can further stimulate the model's capacity to perform some specific tasks.
To summarize, our main contributions center around the following research questions:
- *How to train an NLG-oriented PLM in a supervised pre-training way?* In order to prepare the supervised corpus, we collect a massive labeled MVPCorpus, consisting of 77 datasets over 11 NLG tasks across various domains and specific objectives. To the best of our knowledge, MVPCorpus is the largest collection of NLG datasets.
Firstly, we formulate different NLG tasks as a general text-to-text form using task instructions so that the supervised corpus can be used in a unified way for pre-training an NLG model. Our work presents a simple yet general approach for pre-training a more capable NLG model by leveraging various labeled NLG datasets.
- *Can supervised pre-trained NLG models be both* effective and general? Extensive experiments
show that the supervised pre-trained MVP outperforms its unsupervised pre-trained counterpart BART in both full tuning (+9.3% in ratio) and parameter-efficient tuning (+4.3% in ratio) settings. Our MVP model achieves state-of-the-art performance on 13 out of 17 datasets and outperforms Flan-T5 (Chung et al., 2022) by 5.8%.
Our zero-shot performance also surpasses T011B (Sanh et al., 2022) by a large margin. Furthermore, the experiments on unseen NLG and NLU tasks demonstrate that our supervised MVP
model has a strong generality for unseen tasks.
For reproducing and reusing our work, we release the MVPCorpus collection, all the MVP
model variants, and accordingly codes at the link: https://github.com/RUCAIBox/MVP.
## 2 Related Work
Pre-trained Language Models. Pre-trained language models have achieved exceptional success in a wide range of tasks, and the majority of them are pre-trained in an unsupervised manner (Devlin et al., 2019; Brown et al., 2020). For example, with large-scale plain texts as the unsupervised pre-training corpus (570GB), GPT-3 (Brown et al., 2020) employs language modeling as the pretraining task, *i.e.,* predicting the next token conditioned on previous tokens. In the meanwhile, the computer vision community benefits a lot from the labeled dataset ImageNet (Deng et al., 2009). Influential models, such as ResNet (He et al., 2016) and ViT (Dosovitskiy et al., 2021), leverage ImageNet for pre-training. Inspired by the success of pretraining with labeled data, machine translation researchers explore supervised pre-training (McCann et al., 2017; Lin et al., 2020b). Lin et al. (2020b)
attempt to pre-train a translation model with parallel data in multiple languages. Despite using much less pre-trained data, mRASP still achieves better performance than translation models pre-trained in an unsupervised manner (Liu et al., 2020). In this paper, we propose to pre-train a universal NLG
model in a supervised manner with collections of labeled datasets (23GB).
Multi-task Learning. Our pre-training process is also related to multi-task learning (MTL), a method of mixing multiple tasks into a single training process (Collobert and Weston, 2008). A model trained with MTL can benefit from helpful knowledge of relevant tasks, resulting in improved perfor-
![2_image_0.png](2_image_0.png)
mance (Subramanian et al., 2018). Recently, MTDNN (Liu et al., 2019a) and Muppet (Aghajanyan et al., 2021) collect tens of datasets in the multi-task procedure and achieve better performance in downstream tasks. The *pre-finetuning* schema proposed in Muppet shares a similar idea with our study.
Aribandi et al. (2022) further combine the denoising pre-training task of T5 (Raffel et al., 2020)
and multi-task learning to pre-train a new model, ExT5. MTL has also contributed to sub-fields of text generation, such as open-ended dialogue system (Zhang et al., 2020), task-oriented dialogue system (Su et al., 2022), text style transfer (Bujnowski et al., 2020), and question answering (Khashabi et al., 2020). At the same time, researchers explore the transferability of models trained on multi-task datasets (Mishra et al., 2022). FLAN (Wei et al.,
2022), T0 (Sanh et al., 2022), ZeroPrompt (Xu et al., 2022), and FLAN-T5 (Chung et al., 2022)
investigate the zero-shot or few-shot generalization abilities of large language models (LLMs) (Zhao et al., 2023) trained on numerous task datasets with well-designed prompts. Compared with these works, we aim to explore multi-task learning to derive both *effective* and *general* NLG models in a supervised pre-training manner.
Prompt Learning. Prompt learning is a thriving method in the field of NLP. Prompt learning converts fine-tuning text into a format similar to pre-training to leverage implicit pre-training knowledge and alleviate the discrepancy between pretraining and fine-tuning (Liu et al., 2021b). GPT2 (Radford et al., 2019) and T5 (Raffel et al., 2020)
add human-written task prompts to the input text.
For instance, T5 prepends "*Summarize:*" to the input document for summarization tasks. Some researchers also design elaborate prompts for each task and dataset and investigate their effectiveness and robustness (Wei et al., 2022; Sanh et al.,
2022). To overcome the constraints of manually constructed prompts, researchers develop continuous (soft) prompts that can be optimized in continuous space (Lester et al., 2021; Qin and Eisner, 2021; Tang et al., 2022b). Considering the random initialization of soft prompts, Gu et al. (2022) propose PPT to pre-train continuous prompts using unlabeled data. SPoT (Vu et al., 2022), UnifiedSKG (Xie et al., 2022), and PTG (Li et al., 2022a)
further learn the prompts on related tasks and transfer the prompts to new tasks.
## 3 The Mvp Model
This section introduces our MVP model: a Multitask superVised Pre-trained model for natural language generation. The overview of our model is illustrated in Figure 1.
## 3.1 Data Collection
Formally, the natural language generation (NLG)
task aims to generate a sequence of tokens Y =
(y1, y2*, . . . , y*n) conditioned on input data X (*e.g.,*
a piece of text or structured data) (Li et al., 2022b).
In this paper, we collect a large-scale labeled MVPCorpus consisting of 77 labeled datasets from 11 representative NLG tasks1, including commonsense generation, data-to-text generation, openended dialogue system, paraphrase generation, question answering, question generation, story generation, task-oriented dialogue system, text simplification, text style transfer, and text summarization.
These datasets come from various domains and are of different sizes. Some datasets are elaborately hand-crafted and thus relatively small in size, while others are created for large-scale weak supervision. The detailed descriptions of these tasks can be found in Appendix A.1.
Next, we convert the different input data X of each task into a unified text-to-text format. For 1We do not consider machine translation tasks but only focusing on English tasks in this work.
instance, we linearize structured data (*e.g.,* knowledge graph or table) by concatenating triples or key-value pairs using the special token "[SEP]" for data-to-text generation, and we utilize the special token "[X_SEP]" to separate answer and paragraph for question generation. The transformed input format for each task can be found in Appendix E.
We divide MVPCorpus into two parts, which are used for pre-training and fine-tuning (evaluation),
respectively. For supervised pre-training, we utilize 50 datasets from 7 tasks, including data-to-text generation, open-ended dialogue system, question answering, question generation, story generation, task-oriented dialogue system, and text summarization. We also eliminate pre-training examples overlapping with evaluation data to avoid data leakage
(more details in Appendix A.2). Finally, we have a 25GB supervised pre-training corpus containing 32M examples. The statistics of the datasets for pre-training are listed in Table 9.
For evaluation, we utilize the rest of the 27 datasets, which are more commonly used in the literature. Among these datasets, 23 datasets are from the 7 tasks used in pre-training. We refer to them as *seen* tasks and use them to test the effectiveness of our model. The remaining 4 datasets are from the tasks of commonsense generation, paraphrase generation, simplification, and style transfer, respectively. We call them *unseen* tasks and use them to examine the generality of our model.
## 3.2 Model Architecture
Our MVP model is built on the standard Transformer encoder-decoder architecture (Vaswani et al., 2017). Compared to decoder-only PLMs such as GPT-3 (Brown et al., 2020) and prefix LMs such as UniLM (Dong et al., 2019), the encoderdecoder architecture is more effective for text generation tasks (Raffel et al., 2020). In the first stage, we pre-train the MVP backbone using a mixture of labeled datasets from seven tasks. To indicate each task, we apply human-written instructions to each task instance. For example, we write "*Summarize:*" as the prompt for summarization tasks. The manual instructions for each task are shown in Appendix E.
In the second stage, we freeze the MVP backbone and pre-train a set of task-specific prompts
(*i.e.,* continuous vectors) to stimulate the model's capacity to perform some specific task. Specially, we follow prefix-tuning (Li and Liang, 2021) to insert continuous vectors at each Transformer layer and learn them using a mixture of corresponding intra-task datasets (*i.e.,* datasets under the same task2). Compared to prompt tuning (Lester et al.,
2021), which only adds prompts to the input layer, layer-wise prompts are more effective and stable (Liu et al., 2022), especially for NLG tasks.
These soft prompts, which are not shared between tasks, encode task-specific semantic knowledge to alleviate the blurring-out problem induced by multitask learning (He and Choi, 2021).
## 3.3 Training Details
Our MVP model adopts a Transformer with 12 layers in both the encoder and decoder (406M
parameters), the same as the model size of BARTLARGE (Lewis et al., 2020). We initialize the backbone with the BART parameters to provide a good starting point for NLG tasks following previous work (Dong et al., 2019; Zhang et al.,
2020). We pre-train the model with a batch size of 8,192 and adopt a temperature-scaled mixing strategy (Raffel et al., 2020) with a rate of T = 2 to mitigate the disparity in tasks and datasets.
We follow prefix-tuning (Li and Liang, 2021)
to pre-train task-specific prompts by prepending trainable vectors to multi-head attention modules at each layer. The prompt length is set to 100, and we utilize the MLP reparameterization function with a hidden size of 800 to improve the training robustness and performance (Li and Liang, 2021). Hence, every task prompts have approximately 62M parameters. Then, we freeze the MVP model and train seven groups of task-specific prompts, each of which corresponds to a different task.
In the two stages, the maximum length of both input and output sequences is set to 1,024 for supporting examples to contain more tokens. We optimize the model with a constant learning rate of 3 × 10−5 using standard sequence-to-sequence cross-entropy loss. We apply the AdamW optimizer with β1 = 0.9, β2 = 0.98, ϵ = 1 × 10−6to improve training stability (Liu et al., 2019b). The weight decay coefficient is 0.1. For testing, we select the checkpoint with the highest validation performance. All the experiments are conducted on 32 NVIDIA Tesla V100 32GB GPUs. We implement our model using the text generation library TextBox (Tang et al., 2022a).
2For instance, we train summarization-specific prompts using summarization datasets, *e.g.,* Newsroom (Grusky et al., 2018), WikiHow (Koupaee and Wang, 2018), and MSNews (Liu et al., 2021a).
| Methods | CNN/DailyMail | WebNLG | SQuAD (QG) | CoQA | | | | | | | |
|-----------|-----------------|-------------|--------------|--------|--------|-------|--------|-------|---------|--------|-------|
| R-1 | R-2 | R-L | B-4 | ME | R-L | B-4 | ME | R-L | F1 | EM | |
| MVP | 44.52 | 21.62 | 41.10 | 67.82 | 47.47 | 76.88 | 26.26 | 27.35 | 53.49 | 86.43 | 77.78 |
| BART | 44.16e | 21.28 | 40.90 | 64.55b | 46.51 | 75.13 | 22.00f | 26.40 | 52.55 | 68.60f | - |
| Flan-T5 | 43.45 | 21.01 | 40.03 | 66.60 | 46.93 | 75.76 | 25.55 | 26.90 | 53.51 | 84.18 | 75.44 |
| Single | 44.36 | 21.54 | 40.88 | 67.74 | 46.89 | 76.94 | 26.09 | 27.15 | 53.29 | 86.20 | 77.26 |
| MVP+S | 44.63 | 21.72 | 41.21 | 68.19 | 47.75 | 76.81 | 25.69 | 27.04 | 53.20 | 86.65 | 77.93 |
| MVP+R | 44.14 | 21.45 | 40.72 | 67.61 | 47.65 | 76.70 | 25.71 | 27.03 | 53.09 | 85.95 | 77.22 |
| MVP+M | 43.97 | 21.16 | 40.46 | 67.45 | 47.57 | 76.81 | 25.46 | 26.79 | 52.95 | 86.28 | 77.26 |
| SOTA | 47.16a | 22.55 | 43.87 | 66.14b | 47.25 | 76.10 | 25.97c | 27.33 | 53.43 | 84.50d | - |
| Methods | ROCStories | PersonaChat | MultiWOZ | | | | | | | | |
| B-1 | B-2 | D-1 | D-4 | B-1 | B-2 | D-1 | D-2 | B-4 | Success | Inform | |
| MVP | 33.79 | 15.76 | 3.02 | 75.65 | 50.73 | 40.69 | 1.65 | 11.23 | 20.26 | 76.40 | 85.00 |
| BART | 30.70g | 13.30 | - | 69.90 | 49.90f | 40.00 | 1.30 | 8.00 | 17.89j | 74.91 | 84.88 |
| Flan-T5 | 32.72 | 15.23 | 2.97 | 68.97 | 48.55 | 40.22 | 1.40 | 7.85 | 19.73 | 70.20 | 78.70 |
| Single | 32.67 | 15.29 | 2.72 | 72.97 | 49.96 | 40.53 | 1.27 | 7.63 | 19.73 | 75.60 | 83.70 |
| MVP+S | 33.92 | 15.60 | 3.44 | 80.58 | 47.91 | 39.97 | 1.52 | 9.54 | 20.32 | 79.90 | 86.80 |
| MVP+R | 32.93 | 15.32 | 2.88 | 73.83 | 48.45 | 40.09 | 1.30 | 7.95 | 19.02 | 73.30 | 81.80 |
| MVP+M | 33.30 | 15.51 | 2.71 | 74.24 | 46.26 | 39.30 | 1.36 | 8.07 | 19.93 | 72.70 | 79.70 |
| SOTA | 33.40g | 15.40 | - | 69.30 | 49.90f | 40.00 | 1.50h | 9.40 | 20.50i | 85.30 | 94.40 |
In summary, we pre-train a 406M generation model MVP and seven groups of 62M task-specific prompts. For each downstream task, users can either utilize the backbone (406M) directly or further combine MVP with task-specific prompts (468M).
## 4 Experiment Results
In this section, we mainly investigate the *effectiveness* and *generality* of our MVP model. We conduct extensive experiments in different settings:
- Under **full tuning** scenarios, we employ the 27 generation datasets and the GLUE benchmark (Wang et al., 2019) for evaluation. Section 4.1 and Appendix C analyze the results on 23 datasets from 7 seen tasks. Section 4.3 includes the results of 4 unseen generation tasks and 8 understanding tasks. To better compare with ExT5, we conduct experiments on the GEM benchmark (Gehrmann et al., 2021) in Appendix C.2.
- In **zero-shot** learning, we compare our models with T0 in Section 4.2.
- In **parameter-efficient tuning** settings, we utilize the same datasets as in Section 4.1, and the
## Results Can Be Found In Section 4.4. - We Conduct A **Human Evaluation** In Section 4.5.
For the full tuning setting (Tables 2 and 11),
we fine-tune the entire model (including the backbone MVP and prompts), while for the parameterefficient tuning (Table 6), we only fine-tune prompts but freeze the parameter weights of MVP.
We optimize the model via the seq2seq loss with label smoothing (Szegedy et al., 2016) factor of 0.1 and the AdamW optimizer with default hyper-parameters. We sweep over the batch size in {16, 64, 256} and the learning rate in {5 ×
10−6, 1×10−5, 3×10−5} to find the optimal hyperparameters for each evaluation task. We utilize the checkpoint with the best validation performance for test set inference. During inference, we set the beam size to 5 and the no-repetitive ngram size to 3. Details regarding fine-tuning and evaluation can be found in Appendix B.
## 4.1 Full Tuning Performance
We conduct experiments on seven new datasets of seven seen tasks to verify the *effectiveness* of our two-stage pre-training method. We design several
| Methods | CNN/DailyMail | WebNLG | SQuAD (QG) | CoQA | | | | | | | |
|-----------|-----------------|-------------|--------------|--------|-------|-------|-------|-------|---------|--------|-------|
| R-1 | R-2 | R-L | B-4 | ME | R-L | B-4 | ME | R-L | F1 | EM | |
| FT BART | 44.16 | 21.28 | 40.90 | 64.55 | 46.51 | 75.13 | 22.00 | 26.40 | 52.55 | 68.60 | - |
| FT MVP | 44.52 | 21.62 | 41.10 | 67.82 | 47.47 | 76.88 | 26.26 | 27.35 | 53.49 | 86.43 | 77.78 |
| T0-3B | - | - | - | 01.40 | 10.20 | 18.43 | 3.06 | 12.43 | 14.91 | 13.30 | 06.60 |
| T0-11B | - | - | - | 00.26 | 06.13 | 14.12 | 2.63 | 07.00 | 15.25 | 09.18 | 04.36 |
| MVP | 29.50 | 11.29 | 25.92 | 34.42 | 31.33 | 52.33 | 2.90 | 13.94 | 15.48 | 29.40 | 18.20 |
| MVP+S | 25.60 | 09.51 | 22.67 | 39.43 | 34.32 | 55.34 | 2.96 | 15.23 | 18.23 | 52.40 | 37.30 |
| Methods | ROCStories | PersonaChat | MultiWOZ | | | | | | | | |
| B-1 | B-2 | D-1 | D-4 | B-1 | B-2 | D-1 | D-2 | B-4 | Success | Inform | |
| FT BART | 30.70 | 13.30 | - | 69.90 | 49.90 | 40.00 | 1.30 | 8.00 | 17.89 | 74.91 | 84.88 |
| FT MVP | 33.79 | 15.76 | 3.02 | 75.65 | 50.73 | 40.69 | 1.65 | 11.23 | 20.26 | 76.40 | 85.00 |
| T0-3B | 08.69 | 3.02 | 04.37 | 35.49 | 23.20 | 23.57 | 2.56 | 12.06 | 0.02 | 2.50 | 22.10 |
| T0-11B | 00.63 | 0.16 | 12.41 | 92.86 | 32.17 | 28.35 | 1.56 | 07.19 | 0.00 | 3.90 | 22.10 |
| MVP | 01.01 | 0.31 | 07.18 | 86.26 | 35.54 | 32.71 | 2.87 | 16.38 | 3.08 | 2.50 | 22.20 |
| MVP+S | 10.52 | 3.54 | 02.13 | 69.55 | 37.04 | 33.38 | 2.66 | 14.84 | 0.38 | 2.50 | 22.10 |
model variants. In the first stage, MVP uses multitask supervised pre-training, and we compare it with two others using different training strategies:
- BARTLARGE (Lewis et al., **2020)**: BART is a widely used PLM for natural language generation using denoising auto encoding as the unsupervised pre-training objective.
- Flan-T5LARGE (Chung et al., **2022)**: Flan-T5 is a recent language model trained in a supervised manner on various NLP tasks, which can be a strong competitor to our model.
- **Single-task pre-training (Single)**: We individually train a single model for each task using intra-task datasets under the same pre-training settings in multi-task training. For instance, we pre-train a summarization model using summarization datasets (*e.g.,* Newsroom, WikiHow, and MSNews). Therefore, we have seven single-task pre-trained models in total.
For the second stage that integrates single-task pre-trained prompts (denoted as **MVP+S**), we compare it with two variants using different prompts:
- **Randomly initialized prompts (MVP+R)**: The layer-wise prompts for the MVP model are randomly initialized without pre-training.
- **Multi-Task pre-trained prompts (MVP+M)**:
We only pre-train one group of prompts for all tasks, using the same mixed datasets as in the backbone pre-training.
Besides these variants, we further include the best-reported results from original papers in the literature for comparison (denoted as **SOTA**). From the results in Table 2, we can see that:
First, supervised pre-training models (*i.e.,* MVP,
Flan-T5, and Single) achieve better performance than the unsupervised pre-trained model BART,
yielding an average improvement of 9.3%, 3.13%,
and 4.4% (in ratio), respectively. This finding verifies the effectiveness of our supervised pre-training method, which enables the model to acquire more task-specific information. Regarding multi-task pre-training (MVP) and single-task (Single), our MVP model outperforms its single-task counterparts by 5.0%. This result indicates that the multitask learning approach can enhance single-task performance by learning transferable semantic information across tasks. Notably, our MVP model outperforms Flan-T5 by 5.8%, which shows the significance of training on our NLG dataset collection, MVPCorpus.
Second, task-specific prompt learning is effective to alleviate the "blurring-out" issue of multitask learning. For tasks such as data-to-text generation and question answering, MVP with the singletask prompt (MVP+S) consistently surpasses the other two variants (MVP+R and MVP+M). This verifies that task-specific prompts can acquire taskspecialized knowledge and stimulate the capacity of the MVP model to perform certain tasks.
Finally, our supervised pre-training approach achieves five new SOTA results on data-to-text gen-
| AESOP | Quora | | | | | | | |
|---------|----------|-------|-------|----------|-------|-----------|-----------|-----------|
| B-4 | R-1 | R-2 | R-L | ME | | | | |
| +BART | 47.30a | 73.30 | 54.10 | 75.10 | 49.70 | | | |
| +MVP | 49.81 | 74.78 | 56.84 | 76.34 | 53.40 | SC & BLEU | GYAFC E&M | GYAFC F&R |
| B-4 | Accuracy | HM | B-4 | Accuracy | HM | | | |
| +BART | 76.50b | 93.70 | 83.90 | 79.30 | 92.00 | 85.20 | | |
| +MVP | 77.18 | 94.49 | 84.96 | 79.43 | 92.12 | 85.31 | | |
Table 4: The results of unseen NLG tasks. We use AESOP and SC & BLEU to denote the methods proposed by Sun et al. (2021) and Lai et al. (2021), respectively. a(Sun et al., 2021)b(Lai et al., 2021)
| Methods | CoLA | SST-2 | MRPC | STS-B | QQP | MNLI | QNLI | RTE | Average |
|-----------|--------|---------|---------------|---------------|---------------|---------------|--------|-------|-----------|
| Matt. | Acc. | F1/Acc. | P/S Corr. | F1/Acc. | m./mm. | Acc. | Acc. | | |
| BART | 60.30 | 96.30 | 90.47 / 86.70 | 90.97 / 90.30 | 73.03 / 89.87 | 90.03 / 89.27 | 94.60 | 79.83 | 85.17 |
| MVP | 59.87 | 96.43 | 92.07 / 89.43 | 91.37 / 90.90 | 73.20 / 90.13 | 89.70 / 88.73 | 95.10 | 82.87 | 85.88 |
Table 5: The results of NLU tasks on the GLUE benchmark.
eration, question generation, question answering, story generation, and open-ended dialogue tasks.
We also achieve SOTA performance in six out of eight datasets in Table 11, which shows the strong text generation capability of our MVP model. As for the remaining tasks, the SOTA models incorporate tailored techniques, *e.g.,* the re-ranking framework (Ravaut et al., 2022) and various task-specific objectives (He et al., 2022), which yield better performance. In contrast, our MVP model can produce competitive results just with a general architecture and a unified learning objective.
## 4.2 Zero-Shot Performance
Since we do not pre-train MVP on the seven commonly used datasets, we further conduct zero-shot experiments to see the domain transfer abilities of our models. We include T0-3B and T0-11B (Sanh et al., 2022) as our baselines, which are large models trained on various downstream tasks. The results are listed in Table 3. We can observe that our small MVP model (406M) outperforms T03B and T0-11B in all metrics with a large margin, except for few metrics on ROCStories and MultiWOZ. This demonstrates the effectiveness of using supervised pre-training on our MVPCorpus.
However, all tasks demonstrate that models in the zero-shot setting perform significantly worse than those with full tuning settings. This suggests that training strategies that are effective for NLU
tasks may not produce satisfactory results for NLG
tasks. Even though our model has acquired task knowledge, it struggles to perform well in a new domain without being fine-tuned. Hence, it is still necessary to develop specific NLG models for certain tasks and domains. Our MVP models can be effective models for further investigation.
## 4.3 Generality To Unseen Tasks
In this subsection, we test our MVP model on unseen NLG and NLU tasks to verify its generality.
Unseen NLG Tasks. According to Deng et al.
(2021), an NLG task can be assigned to one of the following three categories: compression (*e.g.,*
summarization), transduction (*e.g.,* translation), or creation (*e.g.,* story generation). Since we do not include any transduction tasks during pre-training, we evaluate our MVP model using two unseen transduction NLG tasks: paraphrase generation and text style transfer. We select the SOTA methods for these two tasks, *i.e.,* AESOP (Sun et al., 2021) for paraphrase generation and SC & BLEU (Lai et al.,
2021) for text style transfer, and replace their backbone BART with our MVP model for comparison.
From the results in Table 4, we can see that our model outperforms BART by a ratio of 2.3% and achieves two new SOTA results, which verifies the strong generality of our model. This finding shows that our MVP model is more capable than BART
and can serve as a general yet effective backbone.
Unseen NLU Tasks. Although MVP is designed especially for NLG tasks, we also evaluate its performance on unseen NLU tasks using the widely used GLUE benchmark (Wang et al., 2019). We compare our model to BARTLARGE using its sequence classification method (Lewis et al., 2020).
According to the results presented in Table 5, our MVP model outperforms BART on 9 of 12 metrics and has a superior overall performance of 0.71%.
This result indicates the generality ability of our MVP model and further demonstrates that supervised pre-training not only learns generation ability but also improves overall semantic representations.
| Methods | CNN/DailyMail | WebNLG | SQuAD (QG) | CoQA | | | | | | | |
|-----------|-----------------|-------------|--------------|--------|-------|-------|-------|-------|---------|--------|-------|
| R-1 | R-2 | R-L | B-4 | ME | R-L | B-4 | ME | R-L | F1 | EM | |
| MVP+S | 43.03 | 20.27 | 39.72 | 66.73 | 47.42 | 76.36 | 25.28 | 26.66 | 52.69 | 86.44 | 76.84 |
| BART+R | 42.47 | 19.82 | 39.15 | 65.54 | 46.86 | 75.24 | 24.27 | 26.07 | 52.03 | 82.22 | 71.92 |
| MVP+R | 42.84 | 20.21 | 39.61 | 66.12 | 47.12 | 75.83 | 25.05 | 26.34 | 52.57 | 85.51 | 75.56 |
| MVP+M | 42.99 | 20.36 | 39.70 | 66.40 | 47.16 | 75.89 | 25.24 | 26.49 | 52.88 | 85.90 | 76.34 |
| FT BART | 44.16 | 21.28 | 40.90 | 64.55 | 46.51 | 75.13 | 22.00 | 26.40 | 52.55 | 68.60 | - |
| FT MVP | 44.52 | 21.62 | 41.10 | 67.82 | 47.47 | 76.88 | 26.26 | 27.35 | 53.49 | 86.43 | 77.78 |
| Methods | ROCStories | PersonaChat | MultiWOZ | | | | | | | | |
| B-1 | B-2 | D-1 | D-4 | B-1 | B-2 | D-1 | D-2 | B-4 | Success | Inform | |
| MVP+S | 32.94 | 15.12 | 2.98 | 71.09 | 47.11 | 39.51 | 1.39 | 7.28 | 19.24 | 71.40 | 77.80 |
| BART+R | 32.14 | 14.71 | 2.85 | 68.94 | 46.23 | 38.98 | 1.30 | 6.82 | 17.94 | 62.20 | 69.20 |
| MVP+R | 32.28 | 14.85 | 2.97 | 70.29 | 46.70 | 39.23 | 1.31 | 6.98 | 18.86 | 64.40 | 71.40 |
| MVP+M | 32.62 | 15.28 | 2.95 | 69.58 | 46.78 | 39.40 | 1.33 | 7.13 | 19.13 | 67.20 | 72.90 |
| FT BART | 30.70 | 13.30 | - | 69.90 | 49.90 | 40.00 | 1.30 | 8.00 | 17.89 | 74.91 | 84.88 |
| FT MVP | 33.79 | 15.76 | 3.02 | 75.65 | 50.73 | 40.69 | 1.65 | 11.23 | 20.26 | 76.40 | 85.00 |
## 4.4 Parameter-Efficient Tuning Performance
In the lightweight fine-tuning setting, we only tune the prompts while freezing the backbone MVP model to verify its effectiveness in resourceconstrained situations. Besides our MVP+S model, we consider comparing the following methods:
- **Prefix-tuning** (Li and Liang, 2021): Prefixtuning is a popular prompt-based lightweight tuning method for text generation. We employ BART as its backbone, denoted as **BART+R**.
- **Only tuning randomly initialized prompts**
(MVP+R): This variant only tunes the randomly initialized prompts of MVP+R, and it shares a similar idea with prefix-tuning.
- **Only tuning multi-task pre-trained prompts**
(MVP+M): This variant only tunes the multi-task pre-trained prompts of MVP+M. Such an idea has been used in SPoT (Vu et al., 2022).
From the experimental results in Table 6, we can see that: the good performance of the MVP
model in lightweight settings further demonstrates the effectiveness of supervised pre-training. By comparing two randomly initialized prompting methods (BART+R and MVP+R), we can see that MVP+R achieves superior performance to BART+R (+2.0%) due to its multi-task supervised backbone. Furthermore, when initialized with pretrained prompts, MVP+S and MVP+M achieve improved results over MVP+R, which is consistent with the findings of SPoT (Vu et al., 2022).
| Datasets | MVP wins (%) | Ties (%) | BART wins (%) |
|-------------|----------------|------------|-----------------|
| CNN/DM | 46.50 | 10.67 | 42.83 |
| WebNLG | 32.17 | 45.67 | 22.17 |
| ROCStories | 46.50 | 11.33 | 42.17 |
| PersonaChat | 35.33 | 34.00 | 30.67 |
When compared with MVP+M, MVP+S performs marginally better by 1.2%, indicating that taskspecific prompts are useful to improve the model in generation tasks. Surprisingly, our lightweight MVP+S can even outperform fully tuned BART
on tasks such as question generation and question answering, showcasing the effectiveness of the proposed supervised pre-training approach.
## 4.5 Human Evaluation
Considering that there exists a certain gap between automatic metrics and human judgments (Sai et al.,
2022), we further conduct a human evaluation to better demonstrate the generation capabilities of our MVP model. We compare MVP with BART
on four tasks, including text summarization, datato-text generation, open-ended dialog system, and story generation. Following the practices of van der Lee et al. (2021), we utilize a stratified sample of 100 inputs of low, medium, and high word frequency for each task. We invite six human judges to evaluate the generated texts of MVP and BART.
Then they need to choose which one is better or
| Methods | #NLG (PT) | #NLU (PT) | #NLG (FT) | #NLU (FT) | SP model | SP prompts | Open source |
|------------|-------------|-------------|-------------|-------------|------------|--------------|---------------|
| FLAN | 3 | 9 | 2 | 9 | ✓ | ✗ | ✗ |
| T0 | 2 | 6 | 0 | 4 | ✓ | ✗ | ✓ |
| Muppet | 1 | 3 | 1 | 3 | ✓ | ✗ | ✓ |
| ExT5 | 3 | 8 | 6 | 8 | ✓ | ✗ | ✗ |
| SPoT | 1 | 4 | 0 | 6 | ✗ | ✓ | ✗ |
| MVP (ours) | 7 | 0 | 11 | 3 | ✓ | ✓ | ✓ |
choose a tie according to fluency, informativeness, consistency, task features, etc. More human evaluation details are listed in Appendix D. Table 7 showcases the proportions of "MVP wins", "Ties",
and "BART wins" for each dataset. From the results, we can see that MVP can generate overall better texts than BART from a human perspective.
## 5 Discussion
Differences with Existing Methods. To the best of our knowledge, existing supervised pre-training works mainly focus on NLU tasks (Aghajanyan et al., 2021; Aribandi et al., 2022) or a small number of NLG tasks (Lin et al., 2020b; Su et al., 2022).
Given the superior performance achieved by supervised pre-training approaches, it is important to explore supervised pre-training for deriving both *effective* and *general* NLG models. Our work makes a significant contribution in this direction, achieving SOTA performance with a single model on 13 of 17 datasets. Compared with its strong counterpart, ExT5 (Aribandi et al., 2022), our MVP model outperforms it in 26 out of 27 metrics (detailed in Appendix C.2). In order to better understand the difference between our work and previous supervised (multi-task) pre-training studies, we present a detailed comparison in Table 8. As we can see, our work conducts the study with the largest number of NLG tasks for both supervised pre-training and fine-tuning, incorporates task-specific prompts, and also releases all the important resources for reproducing or reusing our work.
Applicability. To facilitate the application of our work, we have released the collection corpus, pretrained models, task-specific prompts, and generated texts. Our collected MVPCorpus is the largest NLG task collection, which can be a high-quality resource for recent LLMs (Zhao et al., 2023). We can use all the data to pre-train a general model or select a subset to continue pre-training a domain- or task-specific model (Gururangan et al., 2020) Our MVPCorpus can also be considered as the evaluation benchmark for different NLG tasks. Furthermore, our MVP model can be employed to achieve competitive results in various NLG tasks. Users can fine-tune the MVP model or integrate it with task-specific prompts based on sufficient labeled data. Notably, our MVP model can be directly employed to obtain good performance in zero-shot learning. In addition, our MVP model can provide effective parameter initialization for improving existing methods, as described in Section 4.3. Finally, the task-specific prompts and the generated texts can be further used to study the task similarity and their effect on the multi-task pre-training.
## 6 Conclusion
In this paper, we present Multi-task superVised Pre-training (MVP) for natural language generation. Firstly, we collect a large-scale NLG corpus, MVPCorpus, from 77 datasets over 11 diverse NLG tasks. After converting various NLG
tasks into a unified text-to-text format, we propose multi-task supervised pre-training to learn an *effective* and *general* model MVP with task-specific prompts for NLG tasks. Extensive experiments have demonstrated that: (1) supervised pre-training is beneficial for NLG tasks as an effective solution.
Our MVP model outperforms its strong counterparts BART and Flan-T5 and even achieves SOTA
performance on 13 out of 17 datasets; (2) supervised pre-trained models have strong generality on unseen generation or even understanding tasks.
In future work, we will explore the multilingual version of our MVP model by covering more datasets in other languages. Such a model is expected to capture language-independent task characteristics and improve generation tasks in the minority language. Besides, it is interesting to study how different tasks relate to each other in the unified semantic space, which can inspire methods that incorporate task relations as prior.
## Acknowledgements
This work was partially supported by National Natural Science Foundation of China under Grant No. 62222215, Beijing Natural Science Foundation under Grant No. 4222027, and Beijing Outstanding Young Scientist Program under Grant No.
BJJWZYJH012019100020098. Xin Zhao is the corresponding author.
## Limitations
Despite our efforts to collect as many generation tasks and datasets as possible, we only evaluate the generation quality and generality of our models on a small number of tasks and datasets. The interpretability and robustness of our models require further analysis. Besides, there exists subjectivity when collecting downstream tasks and intratask datasets, albeit our attempts to employ widelyrecognized categorizations from the literature. Due to the limitation of computing power, we do not study the performance of our method at different model scales. The effectiveness of multi-task pretraining from scratch, similar to ExT5 (Aribandi et al., 2022), also merits an in-depth study.
## Broader Impacts
In this paper, we pre-trained a language model MVP using labeled NLG datasets. According to the research (Bender et al., 2021; Bommasani et al.,
2021), PLMs tend to "remember" what they have
"seen" in the pre-training corpus. This could result in the reproduction of undesirable biases from pretraining data on downstream tasks. Training data intervention could be a solution to alleviate this issue (Lu et al., 2020). It is also interesting to investigate whether supervised pre-training produces fewer biases than unsupervised pre-training.
Environmental impact is another factor we should consider. We attempt a more efficient pretraining strategy and released our PLM for future work. In contrast to large PLMs with tens of billions of parameters, such as T5 (Raffel et al., 2020)
and GPT-3 (Brown et al., 2020), we pre-train only a small model with hundreds of millions of parameters. In addition, we utilize supervised pretraining data and initialize our model with pretrained BART, both of which improve the convergence of our model. Ultimately, our model is pretrained for about 20, 000 steps, whereas the BART
of the same size is pre-trained for 500, 000 steps.
## Reproducibility
For reproducing and reusing our work, we have released the collection MVPCorpus, the models (*e.g.,* MVP, task-specific prompts, and multitask variants), intermediate results (*e.g.,* the generated texts), and source codes for pre-training and fine-tuning at the link: https://github.com/
RUCAIBox/MVP. The detailed settings of the experiments are listed in Appendix B. We hope that these open-source resources will facilitate future work on supervised pre-training and contribute to the advancement of NLG research.
## References
Oshin Agarwal, Heming Ge, Siamak Shakeri, and Rami Al-Rfou. 2021. Knowledge graph based synthetic corpus generation for knowledge-enhanced language model pre-training. In *Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies*, pages 3554–3565, Online. Association for Computational Linguistics.
Armen Aghajanyan, Anchit Gupta, Akshat Shrivastava, Xilun Chen, Luke Zettlemoyer, and Sonal Gupta.
2021. Muppet: Massive multi-task representations with pre-finetuning. In *Proceedings of the 2021 Conference on Empirical Methods in Natural Language* Processing, pages 5799–5811, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Huda Alamri, Vincent Cartillier, Raphael Gontijo Lopes, Abhishek Das, Jue Wang, Irfan Essa, Dhruv Batra, Devi Parikh, Anoop Cherian, Tim K Marks, et al.
2018. Audio visual scene-aware dialog (avsd) challenge at dstc7. *arXiv preprint arXiv:1806.00525*.
Fernando Alva-Manchego, Louis Martin, Antoine Bordes, Carolina Scarton, Benoît Sagot, and Lucia Specia. 2020. ASSET: A dataset for tuning and evaluation of sentence simplification models with multiple rewriting transformations. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4668–4679, Online. Association for Computational Linguistics.
Vamsi Aribandi, Yi Tay, Tal Schuster, Jinfeng Rao, Huaixiu Steven Zheng, Sanket Vaibhav Mehta, Honglei Zhuang, Vinh Q. Tran, Dara Bahri, Jianmo Ni, Jai Gupta, Kai Hui, Sebastian Ruder, and Donald Metzler. 2022. Ext5: Towards extreme multi-task scaling for transfer learning. In *International Conference on Learning Representations*.
Hangbo Bao, Li Dong, Wenhui Wang, Nan Yang, and Furu Wei. 2021. s2s-ft: Fine-tuning pretrained transformer encoders for sequence-to-sequence learning.
arXiv preprint arXiv:2110.13640.
Emily M. Bender, Timnit Gebru, Angelina McMillanMajor, and Shmargaret Shmitchell. 2021. On the dangers of stochastic parrots: Can language models be too big? In *Proceedings of the 2021 ACM*
Conference on Fairness, Accountability, and Transparency, FAccT '21, page 610–623, New York, NY,
USA. Association for Computing Machinery.
Luisa Bentivogli, Ido Dagan, Hoa Trang Dang, Danilo Giampiccolo, and Bernardo Magnini. 2009. The fifth pascal recognizing textual entailment challenge. In In Proc Text Analysis Conference (TAC'09.
Jonathan Berant, Andrew Chou, Roy Frostig, and Percy Liang. 2013. Semantic parsing on Freebase from question-answer pairs. In *Proceedings of the 2013* Conference on Empirical Methods in Natural Language Processing, pages 1533–1544, Seattle, Washington, USA. Association for Computational Linguistics.
Rishi Bommasani, Drew A Hudson, Ehsan Adeli, Russ Altman, Simran Arora, Sydney von Arx, Michael S Bernstein, Jeannette Bohg, Antoine Bosselut, Emma Brunskill, et al. 2021. On the opportunities and risks of foundation models. arXiv preprint arXiv:2108.07258.
Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel Ziegler, Jeffrey Wu, Clemens Winter, Chris Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020.
Language models are few-shot learners. In *Advances in Neural Information Processing Systems*,
volume 33, pages 1877–1901. Curran Associates, Inc.
Paweł Budzianowski, Tsung-Hsien Wen, Bo-Hsiang Tseng, Iñigo Casanueva, Stefan Ultes, Osman Ramadan, and Milica Gašic. 2018. ´ MultiWOZ - a largescale multi-domain Wizard-of-Oz dataset for taskoriented dialogue modelling. In *Proceedings of the* 2018 Conference on Empirical Methods in Natural Language Processing, pages 5016–5026, Brussels, Belgium. Association for Computational Linguistics.
Pawel Bujnowski, Kseniia Ryzhova, Hyungtak Choi, Katarzyna Witkowska, Jaroslaw Piersa, Tymoteusz Krumholc, and Katarzyna Beksa. 2020. An empirical study on multi-task learning for text style transfer and paraphrase generation. In Proceedings of the 28th International Conference on Computational Linguistics: Industry Track, pages 50–63, Online. International Committee on Computational Linguistics.
Bill Byrne, Karthik Krishnamoorthi, Chinnadhurai Sankar, Arvind Neelakantan, Ben Goodrich, Daniel Duckworth, Semih Yavuz, Amit Dubey, Kyu-Young Kim, and Andy Cedilnik. 2019. Taskmaster-1: Toward a realistic and diverse dialog dataset. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 4516–4525, Hong Kong, China. Association for Computational Linguistics.
Daniel Cer, Mona Diab, Eneko Agirre, Iñigo LopezGazpio, and Lucia Specia. 2017. SemEval-2017 task 1: Semantic textual similarity multilingual and crosslingual focused evaluation. In *Proceedings* of the 11th International Workshop on Semantic Evaluation (SemEval-2017), pages 1–14, Vancouver, Canada. Association for Computational Linguistics.
Mingda Chen, Sam Wiseman, and Kevin Gimpel. 2021.
WikiTableT: A large-scale data-to-text dataset for generating Wikipedia article sections. In Findings of the Association for Computational Linguistics: ACLIJCNLP 2021, pages 193–209, Online. Association for Computational Linguistics.
Wei Chen, Yeyun Gong, Song Wang, Bolun Yao, Weizhen Qi, Zhongyu Wei, Xiaowu Hu, Bartuer Zhou, Yi Mao, Weizhu Chen, Biao Cheng, and Nan Duan. 2022. DialogVED: A pre-trained latent variable encoder-decoder model for dialog response generation. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics
(Volume 1: Long Papers), pages 4852–4864, Dublin, Ireland. Association for Computational Linguistics.
Wenhu Chen, Jianshu Chen, Yu Su, Zhiyu Chen, and William Yang Wang. 2020a. Logical natural language generation from open-domain tables. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*, pages 7929–
7942, Online. Association for Computational Linguistics.
Wenhu Chen, Yu Su, Xifeng Yan, and William Yang Wang. 2020b. KGPT: Knowledge-grounded pretraining for data-to-text generation. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 8635–
8648, Online. Association for Computational Linguistics.
Liying Cheng, Dekun Wu, Lidong Bing, Yan Zhang, Zhanming Jie, Wei Lu, and Luo Si. 2020. ENTDESC: Entity description generation by exploring knowledge graph. In *Proceedings of the 2020 Conference on Empirical Methods in Natural Language* Processing (EMNLP), pages 1187–1197, Online. Association for Computational Linguistics.
Eunsol Choi, He He, Mohit Iyyer, Mark Yatskar, Wentau Yih, Yejin Choi, Percy Liang, and Luke Zettlemoyer. 2018. QuAC: Question answering in context.
In *Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing*, pages 2174–2184, Brussels, Belgium. Association for Computational Linguistics.
Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Eric Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, et al.
2022. Scaling instruction-finetuned language models.
arXiv preprint arXiv:2210.11416.
Ronan Collobert and Jason Weston. 2008. A unified architecture for natural language processing: deep neural networks with multitask learning. In *Machine Learning, Proceedings of the Twenty-Fifth International Conference (ICML 2008), Helsinki, Finland, June 5-9, 2008*, volume 307 of *ACM International Conference Proceeding Series*, pages 160–167.
ACM.
Ido Dagan, Oren Glickman, and Bernardo Magnini.
2006. The pascal recognising textual entailment challenge. In *Machine Learning Challenges. Evaluating* Predictive Uncertainty, Visual Object Classification, and Recognising Tectual Entailment, pages 177–190, Berlin, Heidelberg. Springer Berlin Heidelberg.
Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. 2009. Imagenet: A large-scale hierarchical image database. In 2009 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops (CVPR Workshops), pages 248–255, Los Alamitos, CA, USA. IEEE Computer Society.
Mingkai Deng, Bowen Tan, Zhengzhong Liu, Eric Xing, and Zhiting Hu. 2021. Compression, transduction, and creation: A unified framework for evaluating natural language generation. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 7580–7605, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics.
Emily Dinan, Stephen Roller, Kurt Shuster, Angela Fan, Michael Auli, and Jason Weston. 2019. Wizard of wikipedia: Knowledge-powered conversational agents. In International Conference on Learning Representations.
Jesse Dodge, Andreea Gane, Xiang Zhang, Antoine Bordes, Sumit Chopra, Alexander H. Miller, Arthur Szlam, and Jason Weston. 2016. Evaluating prerequisite qualities for learning end-to-end dialog systems.
In *4th International Conference on Learning Representations, ICLR 2016*.
William B. Dolan and Chris Brockett. 2005. Automatically constructing a corpus of sentential paraphrases.
In *Proceedings of the Third International Workshop* on Paraphrasing (IWP2005).
Li Dong, Nan Yang, Wenhui Wang, Furu Wei, Xiaodong Liu, Yu Wang, Jianfeng Gao, Ming Zhou, and Hsiao-Wuen Hon. 2019. Unified language model pre-training for natural language understanding and generation. In *Advances in Neural Information Processing Systems*, volume 32. Curran Associates, Inc.
Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, and Neil Houlsby. 2021. An image is worth 16x16 words: Transformers for image recognition at scale. In *International Conference on* Learning Representations.
Layla El Asri, Hannes Schulz, Shikhar Sharma, Jeremie Zumer, Justin Harris, Emery Fine, Rahul Mehrotra, and Kaheer Suleman. 2017. Frames: a corpus for adding memory to goal-oriented dialogue systems.
In Proceedings of the 18th Annual SIGdial Meeting on Discourse and Dialogue, pages 207–219, Saarbrücken, Germany. Association for Computational Linguistics.
Mihail Eric, Lakshmi Krishnan, Francois Charette, and Christopher D. Manning. 2017. Key-value retrieval networks for task-oriented dialogue. In Proceedings of the 18th Annual SIGdial Meeting on Discourse and Dialogue, pages 37–49, Saarbrücken, Germany.
Association for Computational Linguistics.
Angela Fan, Mike Lewis, and Yann Dauphin. 2018.
Hierarchical neural story generation. In *Proceedings* of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers),
pages 889–898, Melbourne, Australia. Association for Computational Linguistics.
Yutong Feng, Jianwen Jiang, Mingqian Tang, Rong Jin, and Yue Gao. 2022. Rethinking supervised pretraining for better downstream transferring. In *International Conference on Learning Representations*.
Cristina Garbacea and Qiaozhu Mei. 2020. Neural language generation: Formulation, methods, and evaluation. *arXiv preprint arXiv:2007.15780*.
Claire Gardent, Anastasia Shimorina, Shashi Narayan, and Laura Perez-Beltrachini. 2017. Creating training corpora for NLG micro-planners. In *Proceedings* of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers),
pages 179–188, Vancouver, Canada. Association for Computational Linguistics.
Sebastian Gehrmann, Tosin Adewumi, Karmanya Aggarwal, Pawan Sasanka Ammanamanchi, Anuoluwapo Aremu, Antoine Bosselut, Khyathi Raghavi Chandu, Miruna-Adriana Clinciu, Dipanjan Das, Kaustubh Dhole, Wanyu Du, Esin Durmus, Ondˇrej Dušek, Chris Chinenye Emezue, Varun Gangal, Cristina Garbacea, Tatsunori Hashimoto, Yufang Hou, Yacine Jernite, Harsh Jhamtani, Yangfeng Ji, Shailza Jolly, Mihir Kale, Dhruv Kumar, Faisal Ladhak, Aman Madaan, Mounica Maddela, Khyati Mahajan, Saad Mahamood, Bodhisattwa Prasad Majumder, Pedro Henrique Martins, Angelina McMillan-Major, Simon Mille, Emiel van Miltenburg, Moin Nadeem, Shashi Narayan, Vitaly Nikolaev, Andre Niyongabo Rubungo, Salomey Osei, Ankur Parikh, Laura Perez-Beltrachini, Niranjan Ramesh Rao, Vikas Raunak, Juan Diego Rodriguez, Sashank Santhanam, João Sedoc, Thibault Sellam, Samira Shaikh, Anastasia Shimorina, Marco Antonio Sobrevilla Cabezudo, Hendrik Strobelt, Nishant Subramani, Wei Xu, Diyi Yang, Akhila Yerukola, and Jiawei Zhou. 2021. The GEM benchmark: Natural language generation, its evaluation and metrics. In *Proceedings of the* 1st Workshop on Natural Language Generation, Evaluation, and Metrics (GEM 2021), pages 96–120, Online. Association for Computational Linguistics.
Danilo Giampiccolo, Bernardo Magnini, Ido Dagan, and Bill Dolan. 2007. The third PASCAL recognizing textual entailment challenge. In *Proceedings of the* ACL-PASCAL Workshop on Textual Entailment and Paraphrasing, pages 1–9, Prague. Association for Computational Linguistics.
Bogdan Gliwa, Iwona Mochol, Maciej Biesek, and Aleksander Wawer. 2019. SAMSum corpus: A humanannotated dialogue dataset for abstractive summarization. In Proceedings of the 2nd Workshop on New Frontiers in Summarization, pages 70–79, Hong Kong, China. Association for Computational Linguistics.
Karthik Gopalakrishnan, Behnam Hedayatnia, Qinglang Chen, Anna Gottardi, Sanjeev Kwatra, Anu Venkatesh, Raefer Gabriel, and Dilek HakkaniTür. 2019. Topical-chat: Towards knowledgegrounded open-domain conversations. In Interspeech 2019, 20th Annual Conference of the International Speech Communication Association, pages 1891–
1895. ISCA.
David Graff, Junbo Kong, Ke Chen, and Kazuaki Maeda.
2003. English gigaword. *Linguistic Data Consortium, Philadelphia*, 4(1):34.
Max Grusky, Mor Naaman, and Yoav Artzi. 2018.
Newsroom: A dataset of 1.3 million summaries with diverse extractive strategies. In *Proceedings of the* 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 708–719, New Orleans, Louisiana. Association for Computational Linguistics.
Jing Gu, Mostafa Mirshekari, Zhou Yu, and Aaron Sisto.
2021. ChainCQG: Flow-aware conversational question generation. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 2061–2070, Online. Association for Computational Linguistics.
Yuxian Gu, Xu Han, Zhiyuan Liu, and Minlie Huang.
2022. PPT: Pre-trained prompt tuning for few-shot learning. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics
(Volume 1: Long Papers), pages 8410–8423, Dublin, Ireland. Association for Computational Linguistics.
Jian Guan, Xiaoxi Mao, Changjie Fan, Zitao Liu, Wenbiao Ding, and Minlie Huang. 2021. Long text generation by modeling sentence-level and discourse-level coherence. In *Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics* and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers),
pages 6379–6393, Online. Association for Computational Linguistics.
Suchin Gururangan, Ana Marasovic, Swabha ´
Swayamdipta, Kyle Lo, Iz Beltagy, Doug Downey, and Noah A. Smith. 2020. Don't stop pretraining:
Adapt language models to domains and tasks. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8342–8360, Online. Association for Computational Linguistics.
R Bar Haim, Ido Dagan, Bill Dolan, Lisa Ferro, Danilo Giampiccolo, Bernardo Magnini, and Idan Szpektor.
2006. The second pascal recognising textual entailment challenge. In *Proceedings of the Second PASCAL Challenges Workshop on Recognising Textual* Entailment, volume 7.
Han He and Jinho D. Choi. 2021. The stem cell hypothesis: Dilemma behind multi-task learning with transformer encoders. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 5555–5577, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep residual learning for image recognition. In 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 770–778, Los Alamitos, CA, USA. IEEE Computer Society.
Wanwei He, Yinpei Dai, Yinhe Zheng, Yuchuan Wu, Zheng Cao, Dermot Liu, Peng Jiang, Min Yang, Fei Huang, Luo Si, Jian Sun, and Yongbin Li. 2022.
Galaxy: A generative pre-trained model for taskoriented dialog with semi-supervised learning and explicit policy injection. Proceedings of the AAAI
Conference on Artificial Intelligence, 36(10):10749–
10757.
Karl Moritz Hermann, Tomas Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 2015. Teaching machines to read and comprehend. In Advances in Neural Information Processing Systems, volume 28. Curran Associates, Inc.
Xinyu Hua and Lu Wang. 2020. PAIR: Planning and iterative refinement in pre-trained transformers for
long text generation. In *Proceedings of the 2020* Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 781–793, Online.
Association for Computational Linguistics.
Chao Jiang, Mounica Maddela, Wuwei Lan, Yang Zhong, and Wei Xu. 2020. Neural CRF model for sentence alignment in text simplification. In *Proceedings of the 58th Annual Meeting of the Association* for Computational Linguistics, pages 7943–7960, Online. Association for Computational Linguistics.
Zhijing Jin, Qipeng Guo, Xipeng Qiu, and Zheng Zhang.
2020. GenWiki: A dataset of 1.3 million contentsharing text and graphs for unsupervised graph-totext generation. In *Proceedings of the 28th International Conference on Computational Linguistics*,
pages 2398–2409, Barcelona, Spain (Online). International Committee on Computational Linguistics.
Mandar Joshi, Eunsol Choi, Daniel Weld, and Luke Zettlemoyer. 2017. TriviaQA: A large scale distantly supervised challenge dataset for reading comprehension. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1601–1611, Vancouver, Canada. Association for Computational Linguistics.
Pei Ke, Haozhe Ji, Yu Ran, Xin Cui, Liwei Wang, Linfeng Song, Xiaoyan Zhu, and Minlie Huang. 2021.
JointGT: Graph-text joint representation learning for text generation from knowledge graphs. In *Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021*, pages 2526–2538, Online.
Association for Computational Linguistics.
Daniel Khashabi, Sewon Min, Tushar Khot, Ashish Sabharwal, Oyvind Tafjord, Peter Clark, and Hannaneh Hajishirzi. 2020. UNIFIEDQA: Crossing format boundaries with a single QA system. In Findings of the Association for Computational Linguistics:
EMNLP 2020, pages 1896–1907, Online. Association for Computational Linguistics.
Tomáš Kociský, Jonathan Schwarz, Phil Blunsom, Chris ˇ
Dyer, Karl Moritz Hermann, Gábor Melis, and Edward Grefenstette. 2018. The NarrativeQA reading comprehension challenge. *Transactions of the Association for Computational Linguistics*, 6:317–328.
Rik Koncel-Kedziorski, Dhanush Bekal, Yi Luan, Mirella Lapata, and Hannaneh Hajishirzi. 2019. Text Generation from Knowledge Graphs with Graph Transformers. In *Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)*, pages 2284–2293, Minneapolis, Minnesota.
Association for Computational Linguistics.
Mahnaz Koupaee and William Yang Wang. 2018. Wikihow: A large scale text summarization dataset. *arXiv* preprint arXiv:1810.09305.
Ashutosh Kumar, Kabir Ahuja, Raghuram Vadapalli, and Partha Talukdar. 2020. Syntax-guided controlled generation of paraphrases. *Transactions of the Association for Computational Linguistics*, 8:329–345.
Tom Kwiatkowski, Jennimaria Palomaki, Olivia Redfield, Michael Collins, Ankur Parikh, Chris Alberti, Danielle Epstein, Illia Polosukhin, Jacob Devlin, Kenton Lee, Kristina Toutanova, Llion Jones, Matthew Kelcey, Ming-Wei Chang, Andrew M. Dai, Jakob Uszkoreit, Quoc Le, and Slav Petrov. 2019. Natural questions: A benchmark for question answering research. *Transactions of the Association for Computational Linguistics*, 7:452–466.
Faisal Ladhak, Esin Durmus, Claire Cardie, and Kathleen McKeown. 2020. WikiLingua: A new benchmark dataset for cross-lingual abstractive summarization. In *Findings of the Association for Computational Linguistics: EMNLP 2020*, pages 4034–4048, Online. Association for Computational Linguistics.
Huiyuan Lai, Antonio Toral, and Malvina Nissim. 2021.
Thank you BART! rewarding pre-trained models improves formality style transfer. In *Proceedings of the* 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 484–494, Online. Association for Computational Linguistics.
Rémi Lebret, David Grangier, and Michael Auli. 2016.
Neural text generation from structured data with application to the biography domain. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1203–1213, Austin, Texas. Association for Computational Linguistics.
Sungjin Lee, Hannes Schulz, Adam Atkinson, Jianfeng Gao, Kaheer Suleman, Layla El Asri, Mahmoud Adada, Minlie Huang, Shikhar Sharma, Wendy Tay, and Xiujun Li. 2019. Multi-domain task-completion dialog challenge. In *Dialog System Technology Challenges*, volume 8.
Brian Lester, Rami Al-Rfou, and Noah Constant. 2021.
The power of scale for parameter-efficient prompt tuning. In *Proceedings of the 2021 Conference on* Empirical Methods in Natural Language Processing, pages 3045–3059, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020.
BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*,
pages 7871–7880, Online. Association for Computational Linguistics.
Junyi Li, Tianyi Tang, Jian-Yun Nie, Ji-Rong Wen, and Xin Zhao. 2022a. Learning to transfer prompts for text generation. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 3506–3518, Seattle, United States. Association for Computational Linguistics.
Junyi Li, Tianyi Tang, Wayne Xin Zhao, Jian-Yun Nie, and Ji-Rong Wen. 2022b. A survey of pretrained language models based text generation. arXiv preprint arXiv:2201.05273.
Xiang Lisa Li and Percy Liang. 2021. Prefix-tuning:
Optimizing continuous prompts for generation. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 4582–
4597, Online. Association for Computational Linguistics.
Xiujun Li, Yu Wang, Siqi Sun, Sarah Panda, Jingjing Liu, and Jianfeng Gao. 2018. Microsoft dialogue challenge: Building end-to-end task-completion dialogue systems. *arXiv preprint arXiv:1807.11125*.
Yanran Li, Hui Su, Xiaoyu Shen, Wenjie Li, Ziqiang Cao, and Shuzi Niu. 2017. DailyDialog: A manually labelled multi-turn dialogue dataset. In Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 1: Long Papers),
pages 986–995, Taipei, Taiwan. Asian Federation of Natural Language Processing.
Percy Liang, Michael Jordan, and Dan Klein. 2009.
Learning semantic correspondences with less supervision. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP, pages 91–99, Suntec, Singapore. Association for Computational Linguistics.
Bill Yuchen Lin, Wangchunshu Zhou, Ming Shen, Pei Zhou, Chandra Bhagavatula, Yejin Choi, and Xiang Ren. 2020a. CommonGen: A constrained text generation challenge for generative commonsense reasoning. In *Findings of the Association for Computational Linguistics: EMNLP 2020*, pages 1823–1840, Online. Association for Computational Linguistics.
Zehui Lin, Xiao Pan, Mingxuan Wang, Xipeng Qiu, Jiangtao Feng, Hao Zhou, and Lei Li. 2020b. Pretraining multilingual neural machine translation by leveraging alignment information. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 2649–
2663, Online. Association for Computational Linguistics.
Zhaojiang Lin, Andrea Madotto, Genta Indra Winata, and Pascale Fung. 2020c. MinTL: Minimalist transfer learning for task-oriented dialogue systems. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP),
pages 3391–3405, Online. Association for Computational Linguistics.
Pierre Lison, Jörg Tiedemann, and Milen Kouylekov.
2018. OpenSubtitles2018: Statistical rescoring of sentence alignments in large, noisy parallel corpora.
In *Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC*
2018), Miyazaki, Japan. European Language Resources Association (ELRA).
Dayiheng Liu, Yu Yan, Yeyun Gong, Weizhen Qi, Hang Zhang, Jian Jiao, Weizhu Chen, Jie Fu, Linjun Shou, Ming Gong, Pengcheng Wang, Jiusheng Chen, Daxin Jiang, Jiancheng Lv, Ruofei Zhang, Winnie Wu, Ming Zhou, and Nan Duan. 2021a. GLGE: A new general language generation evaluation benchmark.
In *Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021*, pages 408–420, Online.
Association for Computational Linguistics.
Pengfei Liu, Weizhe Yuan, Jinlan Fu, Zhengbao Jiang, Hiroaki Hayashi, and Graham Neubig. 2021b. Pretrain, prompt, and predict: A systematic survey of prompting methods in natural language processing.
arXiv preprint arXiv:2107.13586.
Xiao Liu, Kaixuan Ji, Yicheng Fu, Weng Tam, Zhengxiao Du, Zhilin Yang, and Jie Tang. 2022. P-tuning:
Prompt tuning can be comparable to fine-tuning across scales and tasks. In *Proceedings of the 60th* Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 61–68, Dublin, Ireland. Association for Computational Linguistics.
Xiaodong Liu, Pengcheng He, Weizhu Chen, and Jianfeng Gao. 2019a. Multi-task deep neural networks for natural language understanding. In *Proceedings* of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4487–4496, Florence, Italy. Association for Computational Linguistics.
Yinhan Liu, Jiatao Gu, Naman Goyal, Xian Li, Sergey Edunov, Marjan Ghazvininejad, Mike Lewis, and Luke Zettlemoyer. 2020. Multilingual denoising pretraining for neural machine translation. *Transactions of the Association for Computational Linguistics*, 8:726–742.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019b.
Roberta: A robustly optimized bert pretraining approach. *arXiv preprint arXiv:1907.11692*.
Kaiji Lu, Piotr Mardziel, Fangjing Wu, Preetam Amancharla, and Anupam Datta. 2020. *Gender Bias in* Neural Natural Language Processing, pages 189–
202. Springer International Publishing, Cham.
Markriedl. https://github.com/markriedl/
WikiPlots. Accessed: 2022-12-18.
Bryan McCann, James Bradbury, Caiming Xiong, and Richard Socher. 2017. Learned in translation: Contextualized word vectors. In *Advances in Neural*
Information Processing Systems, volume 30. Curran Associates, Inc.
Swaroop Mishra, Daniel Khashabi, Chitta Baral, and Hannaneh Hajishirzi. 2022. Cross-task generalization via natural language crowdsourcing instructions.
In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 3470–3487, Dublin, Ireland.
Association for Computational Linguistics.
Seungwhan Moon, Pararth Shah, Anuj Kumar, and Rajen Subba. 2019. OpenDialKG: Explainable conversational reasoning with attention-based walks over knowledge graphs. In *Proceedings of the 57th Annual Meeting of the Association for Computational* Linguistics, pages 845–854, Florence, Italy. Association for Computational Linguistics.
Nasrin Mostafazadeh, Nathanael Chambers, Xiaodong He, Devi Parikh, Dhruv Batra, Lucy Vanderwende, Pushmeet Kohli, and James Allen. 2016. A corpus and cloze evaluation for deeper understanding of commonsense stories. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 839–849, San Diego, California. Association for Computational Linguistics.
Nikola Mrkšic, Diarmuid Ó Séaghdha, Tsung-Hsien ´
Wen, Blaise Thomson, and Steve Young. 2017. Neural belief tracker: Data-driven dialogue state tracking.
In *Proceedings of the 55th Annual Meeting of the* Association for Computational Linguistics (Volume 1:
Long Papers), pages 1777–1788, Vancouver, Canada.
Association for Computational Linguistics.
Linyong Nan, Dragomir Radev, Rui Zhang, Amrit Rau, Abhinand Sivaprasad, Chiachun Hsieh, Xiangru Tang, Aadit Vyas, Neha Verma, Pranav Krishna, Yangxiaokang Liu, Nadia Irwanto, Jessica Pan, Faiaz Rahman, Ahmad Zaidi, Mutethia Mutuma, Yasin Tarabar, Ankit Gupta, Tao Yu, Yi Chern Tan, Xi Victoria Lin, Caiming Xiong, Richard Socher, and Nazneen Fatema Rajani. 2021. DART: Opendomain structured data record to text generation. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 432–447, Online. Association for Computational Linguistics.
Shashi Narayan, Shay B. Cohen, and Mirella Lapata.
2018. Don't give me the details, just the summary!
topic-aware convolutional neural networks for extreme summarization. In *Proceedings of the 2018* Conference on Empirical Methods in Natural Language Processing, pages 1797–1807, Brussels, Belgium. Association for Computational Linguistics.
Thong Nguyen, Anh Tuan Luu, Truc Lu, and Tho Quan.
2021. Enriching and controlling global semantics for text summarization. In *Proceedings of the 2021*
Conference on Empirical Methods in Natural Language Processing, pages 9443–9456, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Tri Nguyen, Mir Rosenberg, Xia Song, Jianfeng Gao, Saurabh Tiwary, Rangan Majumder, and Li Deng.
2016. MS MARCO: A human generated machine reading comprehension dataset. In *CoCo@NIPS*, volume 1773 of *CEUR Workshop Proceedings*. CEURWS.org.
Jekaterina Novikova, Ondˇrej Dušek, and Verena Rieser.
2017. The E2E dataset: New challenges for endto-end generation. In *Proceedings of the 18th Annual SIGdial Meeting on Discourse and Dialogue*,
pages 201–206, Saarbrücken, Germany. Association for Computational Linguistics.
Guanghui Qin and Jason Eisner. 2021. Learning how to ask: Querying LMs with mixtures of soft prompts.
In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 5203–5212, Online. Association for Computational Linguistics.
Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. 2019. Language models are unsupervised multitask learners. OpenAI
blog, 1(8):9.
Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. *Journal of Machine Learning Research*,
21(140):1–67.
Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100,000+ questions for machine comprehension of text. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2383–2392, Austin, Texas. Association for Computational Linguistics.
Sudha Rao and Joel Tetreault. 2018. Dear sir or madam, may I introduce the GYAFC dataset: Corpus, benchmarks and metrics for formality style transfer. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 129–140, New Orleans, Louisiana. Association for Computational Linguistics.
Hannah Rashkin, Eric Michael Smith, Margaret Li, and Y-Lan Boureau. 2019. Towards empathetic opendomain conversation models: A new benchmark and dataset. In *Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics*,
pages 5370–5381, Florence, Italy. Association for Computational Linguistics.
Abhinav Rastogi, Xiaoxue Zang, Srinivas Sunkara, Raghav Gupta, and Pranav Khaitan. 2020a. Towards scalable multi-domain conversational agents: The schema-guided dialogue dataset. volume 34, pages 8689–8696.
Abhinav Rastogi, Xiaoxue Zang, Srinivas Sunkara, Raghav Gupta, and Pranav Khaitan. 2020b. Towards scalable multi-domain conversational agents: The schema-guided dialogue dataset. volume 34, pages 8689–8696.
Mathieu Ravaut, Shafiq Joty, and Nancy Chen. 2022.
SummaReranker: A multi-task mixture-of-experts re-ranking framework for abstractive summarization.
In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 4504–4524, Dublin, Ireland.
Association for Computational Linguistics.
Siva Reddy, Danqi Chen, and Christopher D. Manning.
2019. CoQA: A conversational question answering challenge. *Transactions of the Association for Computational Linguistics*, 7:249–266.
Pedro Rodriguez, Paul Crook, Seungwhan Moon, and Zhiguang Wang. 2020. Information seeking in the spirit of learning: A dataset for conversational curiosity. In *Proceedings of the 2020 Conference on* Empirical Methods in Natural Language Processing
(EMNLP), pages 8153–8172, Online. Association for Computational Linguistics.
Alexander M. Rush, Sumit Chopra, and Jason Weston.
2015. A neural attention model for abstractive sentence summarization. In *Proceedings of the 2015* Conference on Empirical Methods in Natural Language Processing, pages 379–389, Lisbon, Portugal.
Association for Computational Linguistics.
Ananya B. Sai, Akash Kumar Mohankumar, and Mitesh M. Khapra. 2022. A survey of evaluation metrics used for nlg systems. *ACM Comput. Surv.*,
55(2).
Victor Sanh, Albert Webson, Colin Raffel, Stephen Bach, Lintang Sutawika, Zaid Alyafeai, Antoine Chaffin, Arnaud Stiegler, Arun Raja, Manan Dey, M Saiful Bari, Canwen Xu, Urmish Thakker, Shanya Sharma Sharma, Eliza Szczechla, Taewoon Kim, Gunjan Chhablani, Nihal Nayak, Debajyoti Datta, Jonathan Chang, Mike Tian-Jian Jiang, Han Wang, Matteo Manica, Sheng Shen, Zheng Xin Yong, Harshit Pandey, Rachel Bawden, Thomas Wang, Trishala Neeraj, Jos Rozen, Abheesht Sharma, Andrea Santilli, Thibault Fevry, Jason Alan Fries, Ryan Teehan, Teven Le Scao, Stella Biderman, Leo Gao, Thomas Wolf, and Alexander M Rush. 2022. Multitask prompted training enables zero-shot task generalization. In *International Conference on Learning* Representations.
Maarten Sap, Eric Horvitz, Yejin Choi, Noah A. Smith, and James Pennebaker. 2020. Recollection versus imagination: Exploring human memory and cognition via neural language models. In Proceedings
of the 58th Annual Meeting of the Association for Computational Linguistics, pages 1970–1978, Online. Association for Computational Linguistics.
Abigail See, Peter J. Liu, and Christopher D. Manning.
2017. Get to the point: Summarization with pointergenerator networks. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1073–
1083, Vancouver, Canada. Association for Computational Linguistics.
Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D. Manning, Andrew Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank.
In *Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing*, pages 1631–1642, Seattle, Washington, USA. Association for Computational Linguistics.
Karl Stratos. 2019. Mutual information maximization for simple and accurate part-of-speech induction. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 1095–1104, Minneapolis, Minnesota. Association for Computational Linguistics.
Yixuan Su, Lei Shu, Elman Mansimov, Arshit Gupta, Deng Cai, Yi-An Lai, and Yi Zhang. 2022. Multi-task pre-training for plug-and-play task-oriented dialogue system. In *Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics*
(Volume 1: Long Papers), pages 4661–4676, Dublin, Ireland. Association for Computational Linguistics.
Yixuan Su, David Vandyke, Sihui Wang, Yimai Fang, and Nigel Collier. 2021. Plan-then-generate: Controlled data-to-text generation via planning. In *Findings of the Association for Computational Linguistics: EMNLP 2021*, pages 895–909, Punta Cana, Dominican Republic. Association for Computational Linguistics.
Sandeep Subramanian, Adam Trischler, Yoshua Bengio, and Christopher J Pal. 2018. Learning general purpose distributed sentence representations via large scale multi-task learning. In *International Conference on Learning Representations*.
Jiao Sun, Xuezhe Ma, and Nanyun Peng. 2021. AESOP:
Paraphrase generation with adaptive syntactic control.
In *Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing*, pages 5176–5189, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Kai Sun, Dian Yu, Jianshu Chen, Dong Yu, Yejin Choi, and Claire Cardie. 2019. DREAM: A challenge data set and models for dialogue-based reading comprehension. *Transactions of the Association for Computational Linguistics*, 7:217–231.
Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jon Shlens, and Zbigniew Wojna. 2016. Rethinking the inception architecture for computer vision.
In 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 2818–2826, Los Alamitos, CA, USA. IEEE Computer Society.
Tianyi Tang, Junyi Li, Zhipeng Chen, Yiwen Hu, Zhuohao Yu, Wenxun Dai, Wayne Xin Zhao, Jian-yun Nie, and Ji-rong Wen. 2022a. TextBox 2.0: A text generation library with pre-trained language models.
In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 435–444, Abu Dhabi, UAE.
Association for Computational Linguistics.
Tianyi Tang, Junyi Li, Wayne Xin Zhao, and Ji-Rong Wen. 2022b. Context-tuning: Learning contextualized prompts for natural language generation. In Proceedings of the 29th International Conference on Computational Linguistics, pages 6340–6354, Gyeongju, Republic of Korea. International Committee on Computational Linguistics.
Xiangru Tang, Arjun Nair, Borui Wang, Bingyao Wang, Jai Desai, Aaron Wade, Haoran Li, Asli Celikyilmaz, Yashar Mehdad, and Dragomir Radev. 2022c.
CONFIT: Toward faithful dialogue summarization with linguistically-informed contrastive fine-tuning.
In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 5657–5668, Seattle, United States. Association for Computational Linguistics.
Adam Trischler, Tong Wang, Xingdi Yuan, Justin Harris, Alessandro Sordoni, Philip Bachman, and Kaheer Suleman. 2017. NewsQA: A machine comprehension dataset. In *Proceedings of the 2nd Workshop* on Representation Learning for NLP, pages 191–200, Vancouver, Canada. Association for Computational Linguistics.
Chris van der Lee, Albert Gatt, Emiel van Miltenburg, and Emiel Krahmer. 2021. Human evaluation of automatically generated text: Current trends and best practice guidelines. *Computer Speech and Language*, 67:101151.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Ł ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In *Advances in Neural Information Processing Systems*, volume 30. Curran Associates, Inc.
Ramakrishna Vedantam, C. Lawrence Zitnick, and Devi Parikh. 2015. Cider: Consensus-based image description evaluation. In 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR),
pages 4566–4575, Los Alamitos, CA, USA. IEEE
Computer Society.
Tu Vu, Brian Lester, Noah Constant, Rami Al-Rfou',
and Daniel Cer. 2022. SPoT: Better frozen model adaptation through soft prompt transfer. In *Proceedings of the 60th Annual Meeting of the Association*
for Computational Linguistics (Volume 1: Long Papers), pages 5039–5059, Dublin, Ireland. Association for Computational Linguistics.
Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R. Bowman. 2019.
GLUE: A multi-task benchmark and analysis platform for natural language understanding. In *International Conference on Learning Representations*.
Alex Warstadt, Amanpreet Singh, and Samuel R. Bowman. 2019. Neural network acceptability judgments.
Transactions of the Association for Computational Linguistics, 7:625–641.
Jason Wei, Maarten Bosma, Vincent Zhao, Kelvin Guu, Adams Wei Yu, Brian Lester, Nan Du, Andrew M.
Dai, and Quoc V Le. 2022. Finetuned language models are zero-shot learners. In *International Conference on Learning Representations*.
Anuradha Welivita, Yubo Xie, and Pearl Pu. 2021. A
large-scale dataset for empathetic response generation. In *Proceedings of the 2021 Conference on* Empirical Methods in Natural Language Processing, pages 1251–1264, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Tsung-Hsien Wen, David Vandyke, Nikola Mrkšic, Mil- ´
ica Gašic, Lina M. Rojas-Barahona, Pei-Hao Su, Ste- ´
fan Ultes, and Steve Young. 2017. A network-based end-to-end trainable task-oriented dialogue system.
In *Proceedings of the 15th Conference of the European Chapter of the Association for Computational* Linguistics: Volume 1, Long Papers, pages 438–449, Valencia, Spain. Association for Computational Linguistics.
Adina Williams, Nikita Nangia, and Samuel Bowman.
2018. A broad-coverage challenge corpus for sentence understanding through inference. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1112–1122, New Orleans, Louisiana. Association for Computational Linguistics.
Dongling Xiao, Han Zhang, Yukun Li, Yu Sun, Hao Tian, Hua Wu, and Haifeng Wang. 2020. Ernie-gen: An enhanced multi-flow pre-training and fine-tuning framework for natural language generation. In *Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence, IJCAI-20*, pages 3997–4003. International Joint Conferences on Artificial Intelligence Organization. Main track.
Tianbao Xie, Chen Henry Wu, Peng Shi, Ruiqi Zhong, Torsten Scholak, Michihiro Yasunaga, Chien-Sheng Wu, Ming Zhong, Pengcheng Yin, Sida I Wang, et al. 2022. Unifiedskg: Unifying and multi-tasking structured knowledge grounding with text-to-text language models. *arXiv preprint arXiv:2201.05966*.
Hanwei Xu, Yujun Chen, Yulun Du, Nan Shao, Yanggang Wang, Haiyu Li, and Zhilin Yang. 2022. Zeroprompt: Scaling prompt-based pretraining to 1,000 tasks improves zero-shot generalization. arXiv preprint arXiv:2201.06910.
Peng Xu, Davis Liang, Zhiheng Huang, and Bing Xiang. 2021. Attention-guided generative models for extractive question answering. arXiv preprint arXiv:2110.06393.
Wei Xu, Courtney Napoles, Ellie Pavlick, Quanze Chen, and Chris Callison-Burch. 2016. Optimizing statistical machine translation for text simplification.
Transactions of the Association for Computational Linguistics, 4:401–415.
Zhilin Yang, Peng Qi, Saizheng Zhang, Yoshua Bengio, William Cohen, Ruslan Salakhutdinov, and Christopher D. Manning. 2018. HotpotQA: A dataset for diverse, explainable multi-hop question answering.
In *Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing*, pages 2369–2380, Brussels, Belgium. Association for Computational Linguistics.
Saizheng Zhang, Emily Dinan, Jack Urbanek, Arthur Szlam, Douwe Kiela, and Jason Weston. 2018. Personalizing dialogue agents: I have a dog, do you have pets too? In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2204–2213, Melbourne, Australia. Association for Computational Linguistics.
Yian Zhang, Alex Warstadt, Xiaocheng Li, and Samuel R. Bowman. 2021. When do you need billions of words of pretraining data? In *Proceedings* of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing
(Volume 1: Long Papers), pages 1112–1125, Online.
Association for Computational Linguistics.
Yizhe Zhang, Siqi Sun, Michel Galley, Yen-Chun Chen, Chris Brockett, Xiang Gao, Jianfeng Gao, Jingjing Liu, and Bill Dolan. 2020. DIALOGPT : Large-scale generative pre-training for conversational response generation. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics:*
System Demonstrations, pages 270–278, Online. Association for Computational Linguistics.
Wayne Xin Zhao, Kun Zhou, Junyi Li, Tianyi Tang, Xiaolei Wang, Yupeng Hou, Yingqian Min, Beichen Zhang, Junjie Zhang, Zican Dong, Yifan Du, Chen Yang, Yushuo Chen, Zhipeng Chen, Jinhao Jiang, Ruiyang Ren, Yifan Li, Xinyu Tang, Zikang Liu, Peiyu Liu, Jian-Yun Nie, and Ji-Rong Wen. 2023.
A survey of large language models. *arXiv preprint* arXiv:2303.18223.
Kangyan Zhou, Shrimai Prabhumoye, and Alan W
Black. 2018. A dataset for document grounded conversations. In *Proceedings of the 2018 Conference* on Empirical Methods in Natural Language Processing, pages 708–713, Brussels, Belgium. Association for Computational Linguistics.
Chenguang Zhu, Yang Liu, Jie Mei, and Michael Zeng.
2021. MediaSum: A large-scale media interview dataset for dialogue summarization. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics:
Human Language Technologies, pages 5927–5934, Online. Association for Computational Linguistics.
## A Tasks And Datasets A.1 Description Of Tasks And Datasets
We provide the details of the tasks and datasets used in our paper for pre-training and fine-tuning in Tables 9 and 10. If the dataset for pre-training does not have a valid set, we divide 10% of the training set for validation.
We list the licenses for all datasets if they have them. All datasets are publicly available.
The majority of them can be directly downloaded from GitHub or Google Drive. ROCStories (Mostafazadeh et al., 2016) and CommonGen (Lin et al., 2020a) can be obtained after filling out a form. GYAFC (Rao and Tetreault, 2018) is accessible after requesting Yahoo and the authors of the dataset.
The tasks and datasets we use in this paper are as follows:
- **Data-to-text generation** aims to generate descriptive text about structured data, such as the knowledge graph and the table. We use the following datasets for pre-training:
1. AGENDA (Koncel-Kedziorski et al., 2019);
2. ENT-DESC (Cheng et al., 2020);
3. GenWiki (Jin et al., 2020); 4. LogicNLG (Chen et al., 2020a); 5. TEKGEN (Agarwal et al., 2021);
6. WEATHERGOV (Liang et al., 2009); 7. WikiTableT (Chen et al., 2021).
We utilize the following datasets for fine-tuning evaluation:
1. WebNLG (Gardent et al., 2017), we utilize version 2.1; 2. WikiBio (Lebret et al., 2016).
- **Open-ended dialogue system**, also known as chatbots, is focused on daily communication. We use the following datasets for pre-training:
1. Cleaned OpenSubtitles Dialogs (Cleaned OS Dialogs) (Welivita et al., 2021), which is a cleaned variant of OpenSubtitles Dialogs (Lison et al., 2018);
2. CMU Document Grounded Conversations
(CMUDog) (Zhou et al., 2018);
3. Curiosity (Rodriguez et al., 2020);
4. DREAM (Sun et al., 2019);
5. Empathetic Dialogues (Rashkin et al.,
2019);
6. Movie Dialog (Dodge et al., 2016);
7. MuTual (Stratos, 2019);
8. OpenDialKG (Moon et al., 2019); 9. Topical-Chat (Gopalakrishnan et al., 2019);
10. Wizard of Wikipedia (Dinan et al., 2019).
We utilize the following datasets for fine-tuning evaluation:
1. DailyDialog (Li et al., 2017); 2. DSTC7-AVSD (Alamri et al., 2018); 3. PersonaChat (Zhang et al., 2018).
- **Paraphrase generation** involves rewriting a sentence with the same semantic meaning but a different syntactic or lexical form. We utilize the following datasets for fine-tuning evaluation:
1. Quora (also known as QQP-Pos) (Kumar et al., 2020), which is a subset of Quora Question Pairs3.
- **Question answering** requires the model to answer a question based on optional background information. Note that we conduct this task in a generative way in our paper. We use the following datasets for pre-training:
1. HotpotQA (Yang et al., 2018);
2. MS MARCO (Nguyen et al., 2016);
3. MSQG (Liu et al., 2021a), since it is designed for QG, we reverse the question and answer to enrich QA examples; 4. NarrativeQA (Kociský et al. ˇ , 2018);
5. Natural Questions (Kwiatkowski et al.,
2019);
6. NewsQA (Trischler et al., 2017); 7. QuAC (Choi et al., 2018);
8. TriviaQA (Joshi et al., 2017);
9. WebQuestions (Berant et al., 2013).
We utilize the following datasets for fine-tuning evaluation:
1. CoQA (Reddy et al., 2019); 2. SQuAD (Rajpurkar et al., 2016), we utilize version 1.1.
- **Question generation** generates a coherent question given a passage and its corresponding answer. We use the following datasets for pretraining:
3https://www.kaggle.com/c/
quora-question-pairs
1. HotpotQA (Yang et al., 2018);
2. MS MARCO (Nguyen et al., 2016);
3. MSQG (Liu et al., 2021a);
4. NarrativeQA (Kociský et al. ˇ , 2018);
5. NewsQA (Trischler et al., 2017); 6. QuAC (Choi et al., 2018).
Most of them are QA tasks, and we invert the question and answer to enrich QG examples.
We utilize the following datasets for fine-tuning evaluation:
1. CoQA (Reddy et al., 2019); 2. SQuAD (Rajpurkar et al., 2016), we utilize version 1.1.
- **Story generation** creates a long and informative text with a short title. We use the following datasets for pre-training:
1. ChangeMyView (Hua and Wang, 2020); 2. English Gigaword (Rush et al., 2015);
3. Hippocorpus (Sap et al., 2020); 4. WikiPlots (Markriedl);
5. WritingPrompts (Fan et al., 2018), we split the original training set for pre-training and corresponding validation.
Considering English Gigaword is a large summarization dataset, we use the summary as the title to generate the passage in turn to enrich the examples of story generation.
We utilize the following datasets for fine-tuning evaluation:
1. ROCStories (Mostafazadeh et al., 2016);
2. WritingPrompts (Fan et al., 2018), we use the sets created by Guan et al. (2021) (who split the original valid and test sets for training, validation, and testing) to fine-tune our model for a fair comparison.
- **Task-oriented dialogue system** meets the reallife needs of users, such as restaurant reservations and airplane bookings. We use the datasets for pre-training, following Su et al. (2022):
1. CamRest676 (Wen et al., 2017); 2. Frames (El Asri et al., 2017); 3. KVRET (Eric et al., 2017); 4. MetaLWOZ (Lee et al., 2019); 5. MSR-E2E (Li et al., 2018);
6. MultiWOZ (Budzianowski et al., 2018);
7. Schema-Guided (Rastogi et al., 2020a);
8. TaskMaster (Byrne et al., 2019);
9. WOZ (Mrkšic et al. ´ , 2017).
We utilize the following datasets for fine-tuning evaluation:
1. MultiWOZ (Budzianowski et al., 2018), we utilize version 2.0.
- **Text style transfer** modifies the style (*e.g.,* sentiment and formality) of given texts while retaining their style-independent content. We utilize the following datasets for fine-tuning evaluation:
1. GYAFC (Rao and Tetreault, 2018), which has two sub-domains: "Entertainment and Music" (E&M) and "Family and Relationships" (F&R).
- **Text summarization** condenses a long document into a brief text while retaining the essential details. We use the following datasets for pre-training:
1. English Gigaword (Graff et al., 2003), we use the variant provided by Rush et al.
(2015);
2. MediaSum (Zhu et al., 2021);
3. MSNews (Liu et al., 2021a);
4. Newsroom (Grusky et al., 2018); 5. WikiHow (Koupaee and Wang, 2018).
We utilize the following datasets for fine-tuning evaluation:
1. CNN/DailyMail (Hermann et al., 2015), we use the variant provided by See et al. (2017);
2. SAMSum (Gliwa et al., 2019); 3. XSum (Narayan et al., 2018).
To better compare with ExT5 (Aribandi et al.,
2022), we utilize the language generation benchmark GEM (Gehrmann et al., 2021) for fine-tuning evaluation. GEM includes five tasks:
## - **Commonsense Generation**:
1. CommonGen (CG) (Lin et al., 2020a).
## - **Data-To-Text Generation**:
1. DART (Nan et al., 2021);
2. E2E NLG cleaned (Novikova et al., 2017);
3. ToTTo (Su et al., 2021);
4. WebNLG (Gardent et al., 2017).
## - **Dialogue System**:
1. Schema-Guided Dialog (SGD) (Rastogi et al., 2020b).
## - **Text Simplification**:
1. WikiAuto + Turk/ASSET (WiA-T/A) (Jiang et al., 2020; Xu et al., 2016; Alva-Manchego et al., 2020).
## - **Text Summarization**:
1. Wiki-Lingua (WLE) (Ladhak et al., 2020).
To test the generalization ability of our model, we also utilize the natural language standing benchmark GLUE (Wang et al., 2019), which is composed of three tasks:
## - **Natural Language Inference**:
1. MNLI (Williams et al., 2018);
2. QNLI (Rajpurkar et al., 2016; Wang et al.,
2019);
3. RTE (Dagan et al., 2006; Haim et al., 2006; Giampiccolo et al., 2007; Bentivogli et al.,
2009).
## - **Paraphrase Detection**:
1. MRPC (Dolan and Brockett, 2005);
2. QQP 3; 3. STS-B (Cer et al., 2017).
## - **Text Classification**:
1. CoLA (Warstadt et al., 2019); 2. SST-2 (Socher et al., 2013).
## A.2 Data Leakage
Since our model is pre-trained on a large number of labeled datasets, it may have "seen" examples from fine-tuning test sets during pre-training, which leads to an unfair comparison with other methods. Hence, we eliminate the pre-training examples that share n-gram overlap with either of the test datasets. Following Brown et al. (2020), n is the 5 th percentile example length in words, and the maximum value of n is set to 13. Finally, we have removed 17, 848 examples from the pre-training datasets. The number of "cleaned" examples for each dataset can be found in Table 9.
| Dataset | #Train | Cleaned #Train | #Valid | #Test | Input | Output | License |
|----------------------|------------|------------------|-----------|---------|---------|----------|---------------------|
| AGENDA | 38,720 | 38,720 | 1,000 | 1,000 | 52.1 | 141.2 | N/A |
| ENT-DESC | 88,652 | 88,652 | 11,081 | 11,081 | 279.9 | 31.0 | N/A |
| GenWiki | 681,436 | 681,436 | 75,716 | 1,000 | 21.4 | 29.5 | MIT |
| LogicNLG | 28,450 | 28,450 | 4,260 | 4,305 | 178.4 | 14.2 | MIT |
| TEKGEN | 6,310,061 | 6,307,995 | 788,746 | 796,982 | 17.0 | 21.2 | CC BY-SA 2.0 |
| WEATHERGOV | 25,000 | 25,000 | 1,000 | 3,528 | 148.7 | 30.6 | N/A |
| WikiTableT | 1,453,794 | 1,452,778 | 4,533 | 4,351 | 81.0 | 99.7 | MIT |
| Cleaned OS Dialogs | 13,355,487 | 13,355,368 | 1,483,944 | - | 75.5 | 16.7 | N/A |
| CMUDoG | 82,818 | 82,818 | 5,555 | 14,510 | 433.0 | 12.2 | N/A |
| Curiosity | 64,930 | 64,551 | 8,539 | 8,495 | 144.4 | 20.2 | CC BY-NC 4.0 |
| DREAM | 14,264 | 14,242 | 4,709 | 4,766 | 75.6 | 13.6 | N/A |
| Empathetic Dialogues | 64,636 | 64,636 | 9,308 | 8,426 | 52.7 | 12.9 | CC BY-NC 4.0 |
| Movie Dialog | 762,751 | 762,711 | 8,216 | 8,066 | 126.9 | 44.0 | N/A |
| MuTual | 33,691 | 33,691 | 4,090 | 3,248 | 53.6 | 14.5 | N/A |
| OpenDialKG | 69,680 | 69,680 | 7,743 | - | 54.2 | 12.4 | CC BY-NC 4.0 |
| Topical-Chat | 179,750 | 179,750 | 22,295 | 22,452 | 223.3 | 20.0 | CDLA-Sharing-1.0 |
| Wizard of Wikipedia | 148,357 | 147,702 | 15,767 | 15,564 | 297.0 | 16.7 | MIT |
| HotpotQA | 90,447 | 87,815 | 7,405 | - | 187.9 | 2.2 | CC BY-SA 4.0 |
| MS MARCO | 681,445 | 681,226 | 77,580 | - | 68.7 | 13.3 | N/A |
| MSQG | 198,058 | 198,029 | 11,008 | - | 48.1 | 3.7 | CC BY-SA 4.0 |
| NarrativeQA | 65,494 | 65,494 | 6,922 | 21,114 | 584.1 | 4.2 | Apache 2.0 |
| Natural Questions | 96,676 | 96,676 | 10,693 | 6,490 | 9.0 | 2.1 | CC BY-SA 3.0 |
| NewsQA | 97,850 | 97,700 | 5,486 | 5,396 | 726.8 | 5.0 | MIT |
| QuAC | 83,568 | 83,485 | 31,906 | - | 487.9 | 12.5 | CC BY-SA 4.0 |
| TriviaQA | 78,785 | 78,785 | 8,837 | 11,313 | 14.0 | 2.0 | Apache 2.0 |
| WebQuestions | 8,933 | 8,933 | 4,863 | 4,863 | 6.7 | 2.4 | CC BY 4.0 |
| HotpotQA | 90,440 | 87,808 | 6,972 | - | 79.6 | 19.8 | CC BY-SA 4.0 |
| MS MARCO | 681,445 | 681,226 | 77,580 | - | 75.9 | 6.0 | N/A |
| MSQG | 198,058 | 198,029 | 11,008 | 11,022 | 45.9 | 6.0 | CC BY-SA 4.0 |
| NarrativeQA | 65,494 | 65,494 | 6,922 | 21,114 | 579.7 | 8.6 | Apache 2.0 |
| NewsQA | 97,850 | 97,700 | 5,486 | 5,396 | 724.2 | 7.6 | MIT |
| QuAC | 69,109 | 69,026 | 26,301 | - | 496.7 | 6.5 | CC BY-SA 4.0 |
| ChangeMyView | 42,462 | 42,459 | 6,480 | 7,562 | 17.9 | 104.1 | MIT |
| English Gigaword | 3,803,957 | 3,802,620 | 189,651 | 1,951 | 8.8 | 33.3 | MIT |
| Hippocorpus | 6,168 | 6,168 | 686 | - | 34.1 | 262.6 | CDLA-Permissive 2.0 |
| WikiPlots | 101,642 | 101,641 | 11,294 | - | 3.4 | 338.5 | N/A |
| WritingPrompts | 272,600 | 272,518 | 15,620 | 15,138 | 28.4 | 630.8 | MIT |
| CamRest676 | 4,872 | 4,872 | 616 | - | 55.3 | 9.4 | N/A |
| Frames | 26,631 | 26,631 | 2,106 | - | 116.1 | 13.0 | MIT |
| KVRET | 14,136 | 14,136 | 1,616 | - | 30.5 | 9.3 | N/A |
| MetaLWOZ | 176,073 | 176,073 | 17,912 | - | 45.6 | 8.0 | N/A |
| MSR-E2E | 103,362 | 103,362 | 5,235 | - | 51.3 | 12.8 | Microsoft |
| Schema-Guided | 494,946 | 494,933 | 73,089 | - | 120.8 | 12.5 | CC BY-SA 4.0 |
| TaskMaster | 249,664 | 249,662 | 20,680 | - | 95.6 | 12.0 | CC BY 4.0 |
| WOZ | 6,364 | 6,359 | 1,260 | - | 47.0 | 10.6 | N/A |
| English Gigaword | 3,803,957 | 3,802,620 | 189,651 | 1,951 | 33.3 | 8.8 | MIT |
| MediaSum | 443,596 | 442,021 | 10,000 | 10,000 | 1641.0 | 14.4 | N/A |
| MSNews | 136,082 | 135,937 | 7,496 | 7,562 | 309.9 | 9.8 | CC BY-SA 4.0 |
| Newsroom | 995,041 | 989,351 | 108,837 | 108,862 | 642.4 | 26.7 | N/A |
| WikiHow | 157,252 | 157,247 | 5,599 | 5,577 | 502.6 | 45.6 | CC BY-NC-SA |
Table 9: The statistics and licenses of datasets for pre-training our MVP model. The \#Train, \#Valid, and \#Test denote the number of examples in the train, valid, and test sets, respectively. Cleaned \#Train represents the number of training examples after filtering. Input and Output are the average number of words (split by space) in the input and output sequences, respectively.
| Task | Dataset | #Train | #Valid | #Test | Input | Output | License |
|--------------------------------------------------------------------------------------------------------------------|-------------|----------|----------|---------|---------|-----------------|-----------------|
| Commonsen generation | CommonGen | 67,389 | 993 | - | 5.5 | 11.6 | MIT |
| DART | 62,659 | 2,768 | - | 27.5 | 21.5 | MIT | |
| E2E | 33,525 | 4,299 | - | 9.5 | 20.6 | CC BY-SA 4.0 | |
| ToTTo | 120,761 | 7,700 | - | 37.8 | 18.0 | CC BY-SA 3.0 | |
| WebNLG | 34,338 | 4,313 | 4,222 | 18.0 | 19.9 | CC BY-NA-SA 4.0 | |
| WebNLG (GEM) | 35,426 | 1,667 | - | 17.7 | 22.7 | CC BY-NA-SA 4.0 | |
| WikiBio | 582,659 | 72,831 | 72,831 | 81.6 | 26.1 | CC BY-SA 3.0 | |
| Data-to-text generation | DailyDialog | 76,052 | 7,069 | 6,740 | 72.5 | 13.9 | CC BY-NC-SA 4.0 |
| DSTC7-AVSD | 76,590 | 17,870 | 1,710 | 148.2 | 11.5 | MIT | |
| Open-ended dialogue | PersonaChat | 122,499 | 14,602 | 14,056 | 132.1 | 11.9 | MIT |
| SGD | 164,982 | 10,000 | - | 134.7 | 11.3 | CC BY-SA 4.0 | |
| MNLI-m | 392,702 | 9,815 | 9,796 | 29.8 | - | Mixed | |
| MNLI-mm | 9,832 | 9,847 | | | | | |
| Natural language inference | QNLI | 104,743 | 5,463 | 5,463 | 36.6 | - | CC BY-SA 4.0 |
| RTE | 2,490 | 277 | 3,000 | 51.0 | - | N/A | |
| Paraphrase generation | Quora | 137,185 | 3,000 | 3,000 | 10.9 | 10.8 | N/A |
| MRPC | 3,668 | 408 | 1,725 | 43.8 | - | N/A | |
| Paraphrase detection | QQP | 363,846 | 40,430 | 390,965 | 22.3 | - | N/A |
| STS-B | 5,749 | 1,500 | 1,379 | 20.3 | - | N/A | |
| Question answering | CoQA | 107,286 | 31,621 | - | 349.4 | 2.6 | Mixed |
| SQuAD | 75,722 | 10,570 | 11,877 | 156.2 | 3.6 | CC BY-SA 4.0 | |
| Question generation | CoQA | 107,286 | 31,621 | - | 346.6 | 5.5 | Mixed |
| SQuAD | 75,722 | 10,570 | 11,877 | 148.3 | 11.6 | CC BY-SA 4.0 | |
| Story generation | ROCStories | 176,688 | 9,816 | 4,909 | 9.0 | 40.7 | N/A |
| WritingPrompts | 53,516 | 4,000 | 2,000 | 25.5 | 150.4 | MIT | |
| Task-oriented dialogue | MultiWOZ | 170,220 | 22,074 | 22,116 | 128.3 | 11.3 | MIT |
| Text classification | CoLA | 8,551 | 1,043 | 1,063 | 7.7 | - | N/A |
| SST-2 | 67,349 | 872 | 1,821 | 9.8 | - | N/A | |
| Text simplification | WiA-A | 483,801 | 20,000 | 359 | 26.2 | 21.5 | Mixed |
| WiA-T | 359 | | | | | | |
| Text style transfer | GYAFC-E&M | 52,595 | 11,508 | 1,416 | 9.9 | 10.6 | N/A |
| GYAFC-F&R | 51,967 | 11,152 | 1,332 | 10.7 | 11.3 | | |
| CNN/DailyMail | 287,227 | 13,368 | 11,490 | 679.8 | 48.3 | MIT | |
| SAMSum | 14,732 | 818 | 819 | 103.4 | 20.3 | CC BY-NC-ND 4.0 | |
| Text summarization | WLE | 99,020 | 28,614 | - | 367.6 | 33.4 | CC0 1.0 |
| XSum | 204,045 | 11,332 | 11,334 | 373.7 | 21.1 | MIT | |
| Table 10: The statistics and licenses of datasets for evaluating our MVP model. The license of the MNLI dataset is | | | | | | | |
Table 10: The statistics and licenses of datasets for evaluating our MVP model. The license of the MNLI dataset is composed of OANC, CC BY-SA 3.0, and CC BY 3.0. The license of the CoQA dataset is composed of CC BY-SA
4.0, MSR-LA, and Apache 2.0. The license of the WiA-A/T datasets is composed of CC BY-NC 3.0, CC BY-NC
4.0, and GNU General Public License v3.0.
| Methods | XSum | SAMSum | CoQA QG | | | | | | |
|-----------|----------------|-------------|-----------|--------|--------|-------|--------|--------|--------|
| R-1 | R-2 | R-L | R-1 | R-2 | R-L | B-4 | ME | R-L | |
| BART | 45.14d | 22.27 | 37.25 | 51.74b | 26.46 | 48.72 | 12.34c | 35.78 | 46.88 |
| MVP | 45.60 | 22.47 | 37.42 | 53.78 | 29.12 | 49.37 | 23.48 | 47.79 | 55.09 |
| MVP+S | 45.67 | 22.63 | 37.50 | 53.81 | 29.75 | 49.43 | 23.43 | 47.49 | 55.25 |
| SOTA | 49.57a | 25.08 | 41.81 | 53.89b | 28.85 | 49.29 | 15.78c | 40.15 | 50.98 |
| Methods | WritingPrompts | DailyDialog | WikiBio | | | | | | |
| B-1 | B-2 | D-1 | D-4 | B-1 | B-2 | D-1 | D-2 | B-4 | |
| BART | 22.40e | 8.40 | - | 31.30 | 44.30f | 39.20 | 3.90 | 21.10 | - |
| MVP | 32.34 | 13.11 | 2.12 | 64.58 | 46.19 | 41.81 | 4.61 | 25.06 | 48.42 |
| MVP+S | 30.12 | 11.46 | 3.97 | 83.70 | 45.71 | 42.92 | 5.10 | 27.14 | 48.19 |
| SOTA | 22.40e | 8.40 | - | 31.30 | 46.10f | 40.70 | 4.10 | 22.20 | 45.10g |
| Methods | DSTC7-AVSD | SQuAD | | | | | | | |
| B-1 | B-2 | B-3 | B-4 | ME | R-L | CIDEr | F1 | EM | |
| BART | 82.40f | 69.10 | 58.20 | 48.70 | 31.30 | 63.50 | 1.38 | 91.56i | 84.23 |
| MVP | 83.75 | 70.89 | 60.19 | 50.94 | 32.12 | 65.04 | 1.45 | 93.45 | 87.20 |
| MVP+S | 83.81 | 71.07 | 60.45 | 51.20 | 31.77 | 64.76 | 1.44 | 93.45 | 87.17 |
| SOTA | 83.20f | 70.50 | 59.80 | 50.60 | 31.40 | 63.80 | 1.39 | 96.22h | 91.26 |
Table 11: The results on six seen tasks under full tuning settings. a(Nguyen et al., 2021)b(Tang et al., 2022c)
c(Gu et al., 2021)d(Lewis et al., 2020)e(Guan et al., 2021)f(Chen et al., 2022)g(Chen et al., 2020b) h(Raffel et al., 2020)i(Xu et al., 2021)
## B Fine-Tuning And Evaluation Details
In this section, we introduce the details for finetuning and evaluating each downstream task.
For the experiments in Section 4 (Tables 2 and 6),
and Appendix C (Table 11), the fine-tuning details are introduced in Section 4, and the evaluation details are presented as follows:
- For data-to-text generation tasks, we use BLEU(-
4), ROUGE-L, and METEOR for evaluation. We use the script provided by Chen et al. (2020b)
4;
- For open-ended dialogue system tasks, we use BLEU-1, BLEU-2, Distinct-1, and Distinct-2 for evaluation. For DSTC7-AVSD, we also utilize CIDEr (Vedantam et al., 2015). We employ NLTK 3.5 with smoothing function 7 to compute BLEU for PersonaChat and DailyDialog and utilize the script5to evaluate DSTC7-AVSD;
- For question answering tasks, we use Exact Match (EM) and Macro-averaged F1 score (F1)
for evaluation. We use the provided script for CoQA6and SQuAD7.
4https://github.com/wenhuchen/
Data-to-text-Evaluation-Metric 5https://github.com/lemuria-wchen/DialogVED/
blob/main/src/utils/evaluate.py 6https://github.com/PaddlePaddle/ERNIE/blob/
repro/ernie-gen/eval/tasks/coqa/eval.py 7https://github.com/allenai/bi-att-flow/blob/
- For question generation tasks, we use BLEU-4, ROUGE-L, and METEOR for evaluation. We use the script provided by Dong et al. (2019)
8;
- For story generation, we employ nucleus sampling with p = 0.9 and temperature of 0.7 following Guan et al. (2021). We use corpus BLEU-1, BLEU-2, Distinct-1, and Distinct-4 for evaluation. We use NLTK 3.5 to calculate corpus BLEU
following Guan et al. (2021);
- For task-oriented dialogue system tasks, we use BLEU(-4), inform (rate), success (rate), and combined score for evaluation. Inform and success are two specially designed accuracy metrics for task-oriented dialogue system, and the combined score is defined as (Inform + Success) × 0.5 +
BLEU (Budzianowski et al., 2018). We use the script provided by Su et al. (2022)
9;
- For text summarization tasks, we use ROUGE-1, ROUGE-2, and ROUGE-L for evaluation. We use the toolkit files2rouge10.
For the experiments of the GEM benchmark in Appendix C.2 (Table 12), the fine-tuning settings master/squad/evaluate-v1.1.py 8https://github.com/microsoft/unilm/blob/
master/unilm-v1/src/qg/eval.py 9https://github.com/awslabs/pptod/blob/main/
E2E_TOD/eval.py 10https://github.com/pltrdy/files2rouge
Methods**DART E2E ToTTo**
B-4 R-2 ME B-4 R-2 ME B-4 R-2 ME
T5.1.1 34.31 45.22 36.30 **42.57** 46.60 38.20 39.79 49.90 36.80
ExT5 36.62 48.14 37.60 42.25 46.70 38.10 40.14 50.33 36.90
MVP **39.13 48.92 38.53** 37.38 **47.96 39.39** 50.58 55.24 41.27 MVP+S 38.83 48.49 38.41 37.32 47.40 38.90 **50.69 55.52 41.29**
Methods**WebNLG CommonGen SGD**
B-4 R-2 ME B-4 R-2 ME B-4 R-2 ME
T5.1.1 31.67 43.31 34.40 8.38 17.01 20.20 33.15 36.17 32.40 ExT5 35.03 48.17 36.50 9.68 19.04 21.40 34.74 37.77 33.00
MVP **47.03** 59.00 **42.34** 32.59 37.71 33.00 **45.63 48.29 38.48** MVP+S **47.03 59.03** 42.28 **34.10 37.87 33.11** 45.24 48.25 38.47
Methods**WiA-A WiA-T WLE**
B-4 R-2 ME B-4 R-2 ME B-4 R-2 ME
T5.1.1 29.30 38.37 30.10 42.12 50.52 36.2 15.55 20.47 19.60
ExT5 29.23 37.98 30.00 41.39 50.38 35.8 16.64 21.16 20.40
MVP **71.55 70.88 48.19 91.73** 83.46 **57.34 18.80 22.84** 21.95
MVP+S 70.37 70.65 47.70 91.12 **83.59** 56.95 18.52 22.57 **22.02**
are the same as above. We use BLEU-4, ROUGE-2, and METEOR for evaluation. We use the GEM
evaluation scripts11.
For the experiments in Section 4.3 (Tables 4 and 5), the fine-tuning and evaluation details are as follows:
- For paraphrase generation tasks, we employ the fine-tuning and evaluation scripts provided by AESOP (Sun et al., 2021)
12. The evaluation metrics are BLEU-4, ROUGE-1, ROUGE-2, ROUGE-L, and METEOR.
- For text style transfer tasks, we employ the finetuning and evaluation scripts provided by SC
& BLEU (Lai et al., 2021)
13. We conduct the informal-to-formal transfer and train the model on the data from both the E&M and F&R domains following Lai et al. (2021). The evaluation metrics are BLEU-4, accuracy, and HM.
Accuracy is calculated by a pre-trained TextCNN
to evaluate the style strength, and HM denotes the harmonic mean of BLEU-4 and style accuracy (Lai et al., 2021).
- For GLUE tasks, we utilize the fine-tuning code provided by Hugging Face14. The hyper-11https://github.com/GEM-benchmark/GEM-metrics 12https://github.com/PlusLabNLP/AESOP 13https://github.com/laihuiyuan/
pre-trained-formality-transfer 14https://github.com/huggingface/transformers/
parameters are consistent with the original BART (Lewis et al., 2020)
15. The evaluation is computed by the official website16.
## C Additional Results
In this section, we provide additional results of our MVP model and other baselines.
## C.1 Results Of Common Datasets
We also conduct experiments on eight common datasets under full tuning settings. Due to space limitations in Section 4, these results are shown in Table 11. We can see that these results share a similar trend to those in Section 4, and we achieve SOTA performances in 6 of 8 datasets.
## C.2 Results On The Gem Benchmark
To better compare with ExT5 (Aribandi et al.,
2022), we conduct experiments on the GEM benchmark (Gehrmann et al., 2021). For "unseen" commonsense generation and text simplification tasks, we utilize prompts of data-to-text generation and summarization, respectively. The results are presented in Table 12, and our MVP models outperform ExT5 in 26 out of 27 metrics.
tree/main/examples/pytorch/text-classification 15https://github.com/facebookresearch/fairseq/
blob/main/examples/bart/README.glue.md 16https://gluebenchmark.com/
Thank you for taking the time to help us evaluate our scientific research! Our task is to present you with two pieces of machine-generated text and ask you to decide which one is superior. Your opinion will only be used to compare our two models; it will not be used for any other purpose. We have four tasks to evaluate:
1. **Text summarization**: the input is a lengthy piece of news, and the output is a brief description of the content. Examine whether the abstract covers the majority of the news and whether there are any factual errors.
2. **Knowledge-graph-to-text generation**: the input is a knowledge graph (multiple triples), and the output is a text description of the graph. Note whether the description encompasses all of the input triples.
3. **Open-ended dialogue**: the input is two users' background information and chat history, and the output is the next response. Examine whether the response is consistent with the contexts and background of the user at the time.
4. **Story generation**: the input is the beginning of the story, and the output is the following story. Keep in mind that the story needs to be coherent and consistent.
For each instance, you will see an input and two outputs (you will not know which model it comes from) in the table below, and you need to choose which one you believe is better (or a tie). You can base your decision on the output's fluency, grammar, logic, whether it conforms to the input, and the features of each task.
![26_image_0.png](26_image_0.png)
## D Human Evaluation
We hired six English-proficient college students with TOEFL or IELTS scores greater than 110 or 7.0. We paid 0.2$ per judge for each instance, for a total budget of 320$ for 400 instances. The text instructions we provided for each judge are shown in Figure 2.
## E Qualitative Examples
In this section, we showcase the linearized inputs, human-written task instructions, and corresponding outputs of a single dataset for tasks in Section 4.
We provide the results of BART, MVP, and MVP+S
under full tuning settings. To minimize human intervention, we select the first and second instances of the test set.
Input Summarize: Marseille, France (CNN)The French prosecutor leading an investigation into the crash of Germanwings Flight 9525 insisted Wednesday that he was not aware of any video footage from on board the plane. Marseille prosecutor Brice Robin told CNN that "so far no videos were used in the crash investigation." He added, "A person who has such a video needs to immediately give it to the investigators." Robin's comments follow claims by two magazines, German daily Bild and French Paris Match, of a cell phone video showing the harrowing final seconds from on board Germanwings Flight 9525 as it crashed into the French Alps. All 150 on board were killed. Paris Match and Bild reported that the video was recovered from a phone at the wreckage site. The two publications described the supposed video, but did not post it on their websites. The publications said that they watched the video, which was found by a source close to the investigation. "One can hear cries of 'My God' in several languages," Paris Match reported. "Metallic banging can also be heard more than three times, perhaps of the pilot trying to open the cockpit door with a heavy object. Towards the end, after a heavy shake, stronger than the others, the screaming intensifies. Then nothing." "It is a very disturbing scene," said Julian Reichelt, editor-in-chief of Bild online. An official with France's accident investigation agency, the BEA, said the agency is not aware of any such video. Lt. Col. Jean-Marc Menichini, a French Gendarmerie spokesman in charge of communications on rescue efforts around the Germanwings crash site, told CNN that the reports were "completely wrong" and "unwarranted." Cell phones have been collected at the site, he said, but that they "hadn't been exploited yet." Menichini said he believed the cell phones would need to be sent to the Criminal Research Institute in Rosny sous-Bois, near Paris, in order to be analyzed by specialized technicians working hand-in-hand with investigators. But none of the cell phones found so far have been sent to the institute, Menichini said. Asked whether staff involved in the search could have leaked a memory card to the media, Menichini answered with a categorical "no." Reichelt told "Erin Burnett: Outfront" that he had watched the video and stood by the report, saying Bild and Paris Match are "very confident" that the clip is real. He noted that investigators only revealed they'd recovered cell phones from the crash site after Bild and Paris Match published their reports. "That is something we did not know before. ... Overall we can say many things of the investigation weren't revealed by the investigation at the beginning," he said. What was mental state of Germanwings co-pilot? German airline Lufthansa confirmed Tuesday that co-pilot Andreas Lubitz had battled depression years before he took the controls of Germanwings Flight 9525, which he's accused of deliberately crashing last week in the French Alps. Lubitz told his Lufthansa flight training school in 2009 that he had a "previous episode of severe depression," the airline said Tuesday. Email correspondence between Lubitz and the school discovered in an internal investigation, Lufthansa said, included medical documents he submitted in connection with resuming his flight training. The announcement indicates that Lufthansa, the parent company of Germanwings, knew of Lubitz's battle with depression, allowed him to continue training and ultimately put him in the cockpit. Lufthansa, whose CEO Carsten Spohr previously said Lubitz was 100% fit to fly, described its statement Tuesday as a "swift and seamless clarification" and said it was sharing the information and documents - including training and medical records - with public prosecutors. Spohr traveled to the crash site Wednesday, where recovery teams have been working for the past week to recover human remains and plane debris scattered across a steep mountainside. He saw the crisis center set up in Seyne-les-Alpes, laid a wreath in the village of Le Vernet, closer to the crash site, where grieving families have left flowers at a simple stone memorial. Menichini told CNN late Tuesday that no visible human remains were left at the site but recovery teams would keep searching. French President Francois Hollande, speaking Tuesday, said that it should be possible to identify all the victims using DNA analysis by the end of the week, sooner than authorities had previously suggested. In the meantime, the recovery of the victims' personal belongings will start Wednesday, Menichini said. Among those personal belongings could be more cell phones belonging to the 144 passengers and six crew on board. Check out the latest from our correspondents. The details about Lubitz's correspondence with the flight school during his training were among several developments as investigators continued to delve into what caused the crash and Lubitz's possible motive for downing the jet. A Lufthansa spokesperson told CNN on Tuesday that Lubitz had a valid medical certificate, had passed all his examinations and "held all the licenses required." Earlier, a spokesman for the prosecutor's office in Dusseldorf, Christoph Kumpa, said medical records reveal Lubitz suffered from suicidal tendencies at some point before his aviation career and underwent psychotherapy before he got his pilot's license. Kumpa emphasized there's no evidence suggesting Lubitz was suicidal or acting aggressively before the crash. Investigators are looking into whether Lubitz feared his medical condition would cause him to lose his pilot's license, a European government official briefed on the investigation told CNN on Tuesday. While flying was "a big part of his life," the source said, it's only one theory being considered. Another source, a law enforcement official briefed on the investigation, also told CNN that authorities believe the primary motive for Lubitz to bring down the plane was that he feared he would not be allowed to fly because of his medical problems. Lubitz's girlfriend told investigators he had seen an eye doctor and a neuropsychologist, both of whom deemed him unfit to work recently and concluded he had psychological issues, the European government official said. But no matter what details emerge about his previous mental health struggles, there's more to the story, said Brian Russell, a forensic psychologist. "Psychology can explain why somebody would turn rage inward on themselves about the fact that maybe they weren't going to keep doing their job and they're upset about that and so they're suicidal," he said. "But there is no mental illness that explains why somebody then feels entitled to also take that rage and turn it outward on 149 other people who had nothing to do with the person's problems." Germanwings crash compensation: What we know. Who was the captain of Germanwings Flight 9525? CNN's Margot Haddad reported from Marseille and Pamela Brown from Dusseldorf, while Laura Smith-Spark wrote from London. CNN's Frederik Pleitgen, Pamela Boykoff, Antonia Mortensen, Sandrine Amiel and Anna-Maja Rappard contributed to this report.
Gold Match are "very confident" the video clip is real, an editor says. Andreas Lubitz had informed his Lufthansa training school of an episode of severe depression, airline says.
BART
Prosecutor: "A person who has such a video needs to immediately give it to the investigators" French Gendarmerie spokesman:
Cell phones have been collected at the site, but haven't been exploited.
MVP
Prosecutor: "So far no videos were used in the crash investigation" An official with France's accident investigation agency says the agency is not aware of any video.
MVP+S
Prosecutor: "So far no videos were used in the crash investigation" Two publications claim to have found a cell phone video of the crash. "It is a very disturbing scene," says editor-in-chief of Bild online.
Table 13: The first instance from the CNN/Daily Mail dataset. Human-written task instructions are labeled in *italics*.
Input Summarize: The Palestinian Authority officially became the 123rd member of the International Criminal Court on Wednesday, a step that gives the court jurisdiction over alleged crimes in Palestinian territories. The formal accession was marked with a ceremony at The Hague, in the Netherlands, where the court is based. The Palestinians signed the ICC's founding Rome Statute in January, when they also accepted its jurisdiction over alleged crimes committed "in the occupied Palestinian territory, including East Jerusalem, since June 13, 2014." Later that month, the ICC opened a preliminary examination into the situation in Palestinian territories, paving the way for possible war crimes investigations against Israelis. As members of the court, Palestinians may be subject to counter-charges as well. Israel and the United States, neither of which is an ICC member, opposed the Palestinians' efforts to join the body. But Palestinian Foreign Minister Riad al-Malki, speaking at Wednesday's ceremony, said it was a move toward greater justice. "As Palestine formally becomes a State Party to the Rome Statute today, the world is also a step closer to ending a long era of impunity and injustice," he said, according to an ICC news release. "Indeed, today brings us closer to our shared goals of justice and peace." Judge Kuniko Ozaki, a vice president of the ICC, said acceding to the treaty was just the first step for the Palestinians. "As the Rome Statute today enters into force for the State of Palestine, Palestine acquires all the rights as well as responsibilities that come with being a State Party to the Statute. These are substantive commitments, which cannot be taken lightly," she said. Rights group Human Rights Watch welcomed the development. "Governments seeking to penalize Palestine for joining the ICC should immediately end their pressure, and countries that support universal acceptance of the court's treaty should speak out to welcome its membership," said Balkees Jarrah, international justice counsel for the group.
"What's objectionable is the attempts to undermine international justice, not Palestine's decision to join a treaty to which over 100 countries around the world are members." In January, when the preliminary ICC examination was opened, Israeli Prime Minister Benjamin Netanyahu described it as an outrage, saying the court was overstepping its boundaries. The United States also said it "strongly" disagreed with the court's decision. "As we have said repeatedly, we do not believe that Palestine is a state and therefore we do not believe that it is eligible to join the ICC," the State Department said in a statement. It urged the warring sides to resolve their differences through direct negotiations. "We will continue to oppose actions against Israel at the ICC as counterproductive to the cause of peace," it said. But the ICC begs to differ with the definition of a state for its purposes and refers to the territories as "Palestine." While a preliminary examination is not a formal investigation, it allows the court to review evidence and determine whether to investigate suspects on both sides. Prosecutor Fatou Bensouda said her office would "conduct its analysis in full independence and impartiality." The war between Israel and Hamas militants in Gaza last summer left more than 2,000 people dead. The inquiry will include alleged war crimes committed since June. The International Criminal Court was set up in 2002 to prosecute genocide, crimes against humanity and war crimes. CNN's Vasco Cotovio, Kareem Khadder and Faith Karimi contributed to this report.
Gold Membership gives the ICC jurisdiction over alleged crimes committed in Palestinian territories since last June. Israel and the United States opposed the move, which could open the door to war crimes investigations against Israelis.
BART
Palestinian Authority becomes 123rd member of the International Criminal Court. The move gives the court jurisdiction over alleged crimes in Palestinian territories. Israel and the United States opposed the Palestinians' efforts to join the body.
MVP
"Today brings us closer to our shared goals of justice and peace," foreign minister says. The Palestinians signed the ICC's founding Rome Statute in January. The move gives the court jurisdiction over alleged crimes in Palestinian territories.
MVP+S
"Today brings us closer to our shared goals of justice and peace," foreign minister says. The United States says it "strongly" disagrees with the decision. The Palestinian Authority is the 123rd member of the International Criminal Court.
Table 14: The second instance from the CNN/Daily Mail dataset.
Input Describe the following data: Abilene,_Texas | cityServed | Abilene_Regional_Airport Gold Abilene, Texas is served by the Abilene regional airport. Abilene Regional Airport serves the city of Abilene in Texas.
BART
Abilene Regional Airport serves the city of Abilene in Texas.
MVP
Abilene Regional Airport serves the city of Abilene, Texas.
MVP+S
Abilene Regional Airport serves the city of Abilene, Texas.
Table 15: The first instance from the WebNLG dataset, which has two golden target sentences.
Input Describe the following data: "Madrid, Paracuellos de Jarama, San Sebastián de los Reyes and Alcobendas" | location | Adolfo_Suárez_Madrid–Barajas_Airport Gold Adolfo Suárez Madrid–Barajas Airport can be found in Madrid, Paracuellos de Jarama, San Sebastián de los Reyes and Alcobendas. Adolfo Suarez Madrid-Barajas airport is located at Madrid, Paracuellos de Jarama, San Sebastián de los Reyes and Alcobendas. Adolfo Suarez Madrid-Barajas Airport is located in Madrid, Paracuellos de Jarama, San Sebastian de los Reyes and Alcobendas.
BART
Adolfo Suárez Madrid–Barajas Airport can be found in Madrid, Paracuellos de Jarama, San Sebastián de los Reyes and Alcobendas.
MVP
Adolfo Suárez Madrid–Barajas Airport can be found in Madrid, Paracuellos de Jarama, San Sebastián de los Reyes and Alcobendas.
MVP+S
Adolfo Suárez Madrid–Barajas Airport is located in Madrid, Paracuellos de Jarama, San Sebastián de los Reyes and Alcobendas.
Table 16: The second instance from the WebNLG dataset, which has three golden target sentences.
Input Generate the question based on the answer: Saint Bernadette Soubirous [SEP] Architecturally , the school has a Catholic character . Atop the Main Building ' s gold dome is a golden statue of the Virgin Mary . Immediately in front of the Main Building and facing it , is a copper statue of Christ with arms upraised with the legend " Venite Ad Me Omnes " . Next to the Main Building is the Basilica of the Sacred Heart .
Immediately behind the basilica is the Grotto , a Marian place of prayer and reflection . It is a replica of the grotto at Lourdes , France where the Virgin Mary reputedly appeared to Saint Bernadette Soubirous in 1858 . At the end of the main drive ( and in a direct line that connects through 3 statues and the Gold Dome ) , is a simple , modern stone statue of Mary .
Gold To whom did the Virgin Mary allegedly appear in 1858 in Lourdes France ?
BART
Who is believed to have appeared to the Virgin Mary at Lourdes ?
MVP
Who did the Virgin Mary appear to in Lourdes ?
MVP+S
The Grotto is a replica of the grotto at Lourdes , France where the Virgin Mary appeared to whom ?
Table 17: The first instance from the SQuAD dataset.
Input Generate the question based on the answer: a copper statue of Christ [SEP] Architecturally , the school has a Catholic character . Atop the Main Building ' s gold dome is a golden statue of the Virgin Mary . Immediately in front of the Main Building and facing it , is a copper statue of Christ with arms upraised with the legend " Venite Ad Me Omnes " . Next to the Main Building is the Basilica of the Sacred Heart .
Immediately behind the basilica is the Grotto , a Marian place of prayer and reflection . It is a replica of the grotto at Lourdes , France where the Virgin Mary reputedly appeared to Saint Bernadette Soubirous in 1858 . At the end of the main drive ( and in a direct line that connects through 3 statues and the Gold Dome ) , is a simple , modern stone statue of Mary .
Gold What is in front of the Notre Dame Main Building ?
BART
What is in front of the Main Building and facing it ?
MVP
What is immediately in front of the Main Building ?
| MVP+S What is immediately in front of the Main Building ? |
|-------------------------------------------------------------|
Table 18: The second instance from the SQuAD dataset.
Input Answer the following question: what color was cotton ? [X_SEP] once upon a time , in a barn near a farm house , there lived a little white kitten named cotton . cotton lived high up in a nice warm place above the barn where all of the farmer ' s horses slept . but cotton wasn ' t alone in her little home above the barn , oh no . she shared her hay bed with her mommy and 5 other sisters .
all of her sisters were cute and fluffy , like cotton . but she was the only white one in the bunch . the rest of her sisters were all orange with beautiful white tiger stripes like cotton ' s mommy . being different made cotton quite sad . she often wished she looked like the rest of her family . so one day , when cotton found a can of the old farmer ' s orange paint , she used it to paint herself like them . when her mommy and sisters found her they started laughing . " what are you doing , cotton ? ! " " i only wanted to be more like you " . cotton ' s mommy rubbed her face on cotton ' s and said " oh cotton , but your fur is so pretty and special , like you . we would never want you to be any other way " . and with that , cotton ' s mommy picked her up and dropped her into a big bucket of water . when cotton came out she was herself again . her sisters licked her face until cotton ' s fur was all all dry . " don ' t ever do that again , cotton ! " they all cried . " next time you might mess up that pretty white fur of yours and we wouldn ' t want that ! " then cotton thought , " i change my mind . i like being special " .
Gold white BART
white MVP
white MVP+S
white Table 19: The first instance from the CoQA dataset.
Input Answer the following question: what color was cotton ? [SEP] white [X_SEP] where did she live ? [X_SEP] once upon a time , in a barn near a farm house , there lived a little white kitten named cotton . cotton lived high up in a nice warm place above the barn where all of the farmer ' s horses slept . but cotton wasn ' t alone in her little home above the barn , oh no . she shared her hay bed with her mommy and 5 other sisters . all of her sisters were cute and fluffy , like cotton . but she was the only white one in the bunch . the rest of her sisters were all orange with beautiful white tiger stripes like cotton ' s mommy . being different made cotton quite sad . she often wished she looked like the rest of her family . so one day , when cotton found a can of the old farmer ' s orange paint , she used it to paint herself like them . when her mommy and sisters found her they started laughing . "
what are you doing , cotton ? ! " " i only wanted to be more like you " . cotton ' s mommy rubbed her face on cotton ' s and said
" oh cotton , but your fur is so pretty and special , like you . we would never want you to be any other way " . and with that , cotton ' s mommy picked her up and dropped her into a big bucket of water . when cotton came out she was herself again . her sisters licked her face until cotton ' s fur was all all dry . " don ' t ever do that again , cotton ! " they all cried . " next time you might mess up that pretty white fur of yours and we wouldn ' t want that ! " then cotton thought , " i change my mind . i like being special " .
Gold in a barn BART
in a barn MVP
in a barn MVP+S
in a barn Table 20: The second instance from the CoQA dataset.
Input Given the story title: male was out jogging one morning .
Gold the weather was crisp and cool . male felt good and energetic . he decided to keep jogging longer than normal . male went several more miles out of his way .
BART
he saw a man struggling to walk across a field . male stopped to help him . he saw a guy struggling to walking across a fields . the man thanked male profusely !
MVP
he ran into a man who was holding a knife . male was scared , so he ran away . the man threatened male . male ran back into his house .
MVP+S
he noticed a sound coming from a tree . male went to investigate . he found a small baby bird stuck in the trunk . male took the bird to the vet and they released it .
Table 21: The first instance from the ROCStories dataset.
Input Given the story title: ken was driving around in the snow .
Gold he needed to get home from work . he was driving slowly to avoid accidents . unfortunately the roads were too slick and ken lost control . his tires lost traction and he hit a tree .
BART
he hit a patch of black ice . ken slid and hurt his knee . he went to the doctor for it . krampus was sent home with a concussion .
MVP
he was in his car and it was very cold . ken started to back up . the car started to make a loud noise . kena had to back out and call for help .
MVP+S
he was having trouble starting his car . he had to push it a couple of blocks . his car would n't start . ken had to call a tow truck for help .
Table 22: The second instance from the ROCStories dataset.
Input Given the dialog: i love to meet new people . [SEP] i have a turtle named timothy . [SEP] my favorite sport is ultimate frisbee . [SEP] my parents are living in bora bora . [SEP] autumn is my favorite season .
[X_SEP] hello , how are you doing tonight ?
Gold i am well an loving this interaction how are you ?
BART
i am doing well , how are you ?
MVP
i am doing well . how are you ?
MVP+S
i am doing well . how are you ?
Table 23: The first instance from the PersonaChat dataset.
Input Given the dialog: i just bought a brand new house . [SEP] i like to dance at the club . [SEP] i run a dog obedience school . [SEP] i have a big sweet tooth . [SEP] i like taking and posting selkies . [X_SEP] hello , how are you doing tonight ? [SEP] i am well an loving this interaction how are you ?
Gold i am great . i just got back from the club .
BART
i am doing well , thanks for asking . what do you do for fun ?
MVP
i am doing well . i just got home from dancing at the club .
Table 24: The second instance from the PersonaChat dataset.
| MVP+S i am doing well . i just bought a new house . |
|-------------------------------------------------------|
Input Given the task dialog: Belief state [X_SEP] i would like a taxi from saint john 's college to pizza hut fen ditton . Given the task dialog: Dialogue action [X_SEP] [db_nores] [X_SEP] i would like a taxi from saint john 's college to pizza hut fen ditton . Given the task dialog: System response [X_SEP] [db_nores] [X_SEP] i would like a taxi from saint john 's college to pizza hut fen ditton .
Gold
[taxi] destination pizza hut fen ditton departure saint john 's college
[taxi] [request] leave arrive what time do you want to leave and what time do you want to arrive by ?
BART
[taxi] destination pizza express fen ditton departure saint johns college [taxi] [request] leave what time would you like to leave or arrive ?
MVP
[taxi] destination pizza hut fen ditton departure saint john 's college
[taxi] [request] leave sure ! when would you like to arrive ?
MVP+S
[taxi] destination pizza hut fen ditton departure saint john 's college
[taxi] [request] leave arrive what time would you like to leave ?
Table 25: The first instance from the MultiWOZ dataset. The task-oriented dialogue is composed of dialogue state tracking, dialogue action learning, and system response generation.
Input Given the task dialog: Belief state [X_SEP] i would like a taxi from saint john 's college to pizza hut fen ditton . [SEP] what time do you want to leave and what time do you want to arrive by ? [SEP] i want to leave after 17:15 .
Given the task dialog: Dialogue action [X_SEP] [db_nores] [X_SEP] i would like a taxi from saint john 's college to pizza hut fen ditton . [SEP] what time do you want to leave and what time do you want to arrive by ? [SEP] i want to leave after 17:15 .
Given the task dialog: System response [X_SEP] [db_nores] [X_SEP] i would like a taxi from saint john 's college to pizza hut fen ditton . [SEP] what time do you want to leave and what time do you want to arrive by ? [SEP] i want to leave after 17:15 .
Gold
[taxi] destination pizza hut fen ditton departure saint john 's college leave 17:15 [taxi] [inform] car phone booking completed ! your taxi will be [value_car] contact number is [value_phone]
BART
[taxi] destination pizza hut fen ditton departure saint john 's college leave 17:15
[taxi] [inform] car phone booking completed ! booked car type : [value_car] contact number : [value_phone]
MVP
[taxi] destination pizza hut fen ditton departure saint john 's college leave 17:15
[taxi] [inform] car phone booking completed ! booked car type : [value_car] contact number : [value_phone]
MVP+S
[taxi] destination pizza hut fen ditton departure saint john 's college leave 17:15
[taxi] [inform] car phone booking completed ! booked car type : [value_car] contact number : [value_phone]
Table 26: The second instance from the MultiWOZ dataset.
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
✓ A1. Did you describe the limitations of your work?
Section Limitations
✓ A2. Did you discuss any potential risks of your work?
Section Broader Impacts
✓ A3. Do the abstract and introduction summarize the paper's main claims?
Section 1
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✓ **Did You Use Or Create Scientific Artifacts?** See Below
✓ B1. Did you cite the creators of artifacts you used?
Section A.1
✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
Section A.1 and Tables 8 and 9
✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
Section 5 - Applicability
✓ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
Sections Limitations, A.1, and A.2
✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
Section A.1
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
Sections A.1 and E, Tables 8 and 9
## C ✓ **Did You Run Computational Experiments?** See Below
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
Sections 3.2 and 3.3 The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance.
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Sections 3.3 and B
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
Section 4
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
Sections 4 and B
D ✓ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
See below
✓ D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
See Figure 2
✓ D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)?
See Section D
✓ D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
See Figure 2 D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
Not applicable. Left blank.
✓ D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
See Section D |
zhao-etal-2023-alignment | From Alignment to Entailment: A Unified Textual Entailment Framework for Entity Alignment | https://aclanthology.org/2023.findings-acl.559 | Entity Alignment (EA) aims to find the equivalent entities between two Knowledge Graphs (KGs). Existing methods usually encode the triples of entities as embeddings and learn to align the embeddings, which prevents the direct interaction between the original information of the cross-KG entities. Moreover, they encode the relational triples and attribute triples of an entity in heterogeneous embedding spaces, which prevents them from helping each other. In this paper, we transform both triples into unified textual sequences, and model the EA task as a bi-directional textual entailment task between the sequences of cross-KG entities. Specifically, we feed the sequences of two entities simultaneously into a pre-trained language model (PLM) and propose two kinds of PLM-based entity aligners that model the entailment probability between sequences as the similarity between entities. Our approach captures the unified correlation pattern of two kinds of information between entities, and explicitly models the fine-grained interaction between original entity information. The experiments on five cross-lingual EA datasets show that our approach outperforms the state-of-the-art EA methods and enables the mutual enhancement of the heterogeneous information. Codes are available at \url{https://github.com/OreOZhao/TEA}. | # From Alignment To Entailment: A Unified Textual Entailment Framework For Entity Alignment
Yu Zhao1 Yike Wu2 Xiangrui Cai1 **Ying Zhang**1∗
Haiwei Zhang3 **Xiaojie Yuan**1 1 College of Computer Science, TKLNDST, Nankai University, Tianjin, China 2 School of Journalism and Communication, CMRC, Nankai University, Tianjin, China 3 College of Cyber Science, TKLNDST, Nankai University, Tianjin, China [email protected]
{wuyike,caixr,yingzhang,zhhaiwei,yuanxj}@nankai.edu.cn K ZH K EN
## Abstract
Entity Alignment (EA) aims to find the equivalent entities between two Knowledge Graphs
(KGs). Existing methods usually encode the triples of entities as embeddings and learn to align the embeddings, which prevents the direct interaction between the original information of the cross-KG entities. Moreover, they encode the relational triples and attribute triples of an entity in heterogeneous embedding spaces, which prevents them from helping each other.
In this paper, we transform both triples into unified textual sequences, and model the EA task as a bi-directional textual entailment task between the sequences of cross-KG entities.
Specifically, we feed the sequences of two entities simultaneously into a pre-trained language model (PLM) and propose two kinds of PLMbased entity aligners that model the entailment probability between sequences as the similarity between entities. Our approach captures the unified correlation pattern of two kinds of information between entities, and explicitly models the fine-grained interaction between original entity information. The experiments on five crosslingual EA datasets show that our approach outperforms the state-of-the-art EA methods and enables the mutual enhancement of the heterogeneous information. Codes are available at https://github.com/OreOZhao/TEA.
## 1 Introduction
Knowledge Graphs (KGs) organize and store the facts in the real world to an effective structure, and have been applied to many knowledge-driven tasks, such as question answering (Lan et al., 2021),
recommender systems (Wang et al., 2022), and information extraction (Sui et al., 2022; Zhou et al.,
2021). Since the KGs are often from various domains, Entity Alignment (EA) provides fundamental techniques to find the equivalent entities in two KGs, which would complement the knowledge coverage of KGs.
![0_image_0.png](0_image_0.png)
(a) An example of heterogeneous relational and attribute information of entity "*The Rolling Stones*" in ZH-EN KGs.
| 滚石乐队 的邻居是 无 法满足, 基思·理查兹, 查利·沃茨, … | entail | The Rolling Stones' neighbors are Satisfaction, Charlie Watts, Keith … |
|-----------------------------------------------------------|-----------------------------|----------------------------------------------------------------------------|
| 滚石乐队 的邻居是 无 | The Rolling Stones' | |
| 法满足, 基思·理查兹, | neighbors are Satisfaction, | |
| 查利·沃茨, … | Charlie Watts, Keith … | |
| entail | | |
| entail entail | | |
| 滚石乐队 的属性值是 Rollingstones.com, 1962, … | entail | The Rolling Stones' attribute values are The Stones, 1962, … |
| 滚石乐队 的属性值是 | The Rolling Stones' | |
| Rollingstones.com, | attribute values are The | |
| 1962, … | Stones, 1962, … | |
| entail | | |
| entail entail | | |
(b) Our bi-directional entailment modeling of cross-KG entity sequences, where the sub-sequences with the same color shading share the same semantics.
Figure 1: (a) displays an example of relational and attribute information of entities. (b) displays our bidirectional entailment modeling for EA.
Existing EA methods usually consist of two modules: (1) embedding module encodes entity information to entity embeddings, (2) alignment module guides the embeddings of the aligned entities to be similar (Sun et al., 2020). Moreover, they usually incorporate two kinds of heterogeneous triples as shown in Figure 1a: (1) relational triples (*h, r, t*),
represents the relation r between head entity h and tail entity r, (2) attribute triples (*e, a, v*), represents the attribute value v of the attribute a of entity e.
Despite the progress of existing EA methods
(Liu et al., 2020; Tang et al., 2021; Zhong et al.,
2022), they are limited by the embedding-based architecture in two folds: **(1) Lack of direct interaction between KGs.** Existing methods usually treat EA as a representation learning task. During the encoding process, the origin triples of entities
∗Corresponding author.
are compressed to a continuous vector, which prevents them from directly interacting with each other.
However, the origin information contains rich semantics information. Take the entity "The Rolling Stones" in Figure 1a as an example, the attribute value "*Rollingstones.com*" and "*1962*" of the Chinese KG are highly compatible with the value "The Rolling Stones" and "*1962*" in the English KG. The correlation between the values can directly indicate the alignment of two entities.
(2) Heterogeneous embedding spaces. Existing methods usually encode the relational triples and attribute triples in different embedding spaces due to the heterogeneity of structures and literals.
This way, the alignment of relational information and of attribute information are separated and could not help each other. However, they may share the same correlation pattern. For example, the entity "*The Rolling Stone*" in Chinese and English KGs in Figure 1 have common neighbors (translated)
and common attribute values, which could both indicate the equivalence of entities. Capturing the correlation pattern in a unified model would enable mutual enhancement between the two information.
Inspired by recent progress of pre-trained language models (PLMs) (Brown et al., 2020; Gao et al., 2021; Sun et al., 2022), we transform both two kinds of triples into textual sequences, and propose a unified Textual Entailment framework for entity Alignment TEA. We model the EA task as a bi-directional textual entailment task between the sequences of cross-KG entities as shown in Figure 1b to explicitly capture the fine-grained interaction between entity information. Specifically, we combine two sequences of entities in one sequence with cloze-style templates and feed the combined sequence into a PLM. We further propose two aligners to model the entailment probability as the pre-training tasks of PLM, i.e. Next Sentence Prediction (NSP) and Masked Language Modeling
(MLM). The NSP-Aligner predicts the probability of whether one entity *is next sentence* of the other, while the MLM-Aligner fills in the blanks between entity sequences with mapped label words "Yes" or "No". The positive entailment probability is seen as entity similarity and is used for ranking the candidate entities. The experiments on five crosslingual EA datasets show that TEA outperforms the state-of-the-art methods and enables the mutual enhancement of heterogeneous information.
Overall, the contributions of this paper can be
- We unify the modeling of the relational triples and attribute triples in EA by transforming both into textual sequences and capturing their common correlation pattern.
- To the best of our knowledge, we are the first to transform EA to a bi-directional textual entailment task of relational and attribute information. The proposed PLM-based aligners capture the fine-grained interaction between cross-KG entities.
- Experiments on five cross-lingual EA datasets demonstrate that our approach outperforms baselines and enables the mutual enhancement of heterogeneous information.
## 2 Related Work 2.1 Entity Alignment
Existing EA methods usually follow an embeddingalignment architecture (Sun et al., 2020), where the entity encoder learns from the relational and attribute triples with various networks, then the alignment module guides the embeddings of the aligned entities to be similar.
There are two mainstreams of methods: TransE
(Bordes et al., 2013) based methods (Chen et al.,
2017; Sun et al., 2017; Zhu et al., 2017; Sun et al.,
2018; Guo et al., 2019) for KG representation with simple implementation, and GCN (Welling and Kipf, 2016) based methods (Chen et al., 2017; Sun et al., 2017; Zhu et al., 2017; Sun et al., 2018; Guo et al., 2019) for modeling graph structures. However, the rich semantics in the origin information of cross-KG entities lack interaction through the encoding process. Our work focuses on modeling the interaction between the origin information of cross-KG entities.
For methods incorporating attribute information with relational information, they usually encode them in heterogeneous representation spaces with hybrid encoders. For example, GNNs (Sun et al.,
2019; Liu et al., 2020) and RNNs (Guo et al., 2019; Zhong et al., 2022) are used for encoding relational triples to model the structures of entities, while Skip-gram (Sun et al., 2017), N-hot (Wang et al., 2018; Yang et al., 2019) and BERT (Liu et al., 2020; Zhong et al., 2022) for attribute triples for capturing literal semantics. Some methods further aggregate the heterogeneous embeddings in separate sub-graphs (Wang et al., 2018; Yang et al.,
2019; Liu et al., 2020; Tang et al., 2021). However, the heterogeneous embedding spaces hinder the EA
process. Our work focuses on the unified modeling of relational and attribute information.
There have been other advancements in EA,
focusing on unsupervised or self-supervised EA
(Mao et al., 2021; Liu et al., 2022), incorporation of entity images (Liu et al., 2021; Lin et al., 2022),
EA with dangling cases (Sun et al., 2021), which motivates our future work.
## 2.2 Plms In Kgs
With the prosperity of PLMs like BERT (Devlin et al., 2019), fine-tuning the PLM in downstream tasks has shown great potential in KGs. In EA,
several methods have explored PLMs in learning entity embeddings (Yang et al., 2019; Tang et al.,
2021; Zhong et al., 2022). However, they share the same drawbacks with methods in Section 2.1, and some methods (Yang et al., 2019; Tang et al., 2021)
require extra natural language sequences such as entity descriptions which are not always available.
Recent studies (Brown et al., 2020; Gao et al.,
2021; Sun et al., 2022) show that given a naturallanguage prompt, the PLM could achieve remarkable improvements by simulating the pre-training tasks of PLM, i.e. NSP and MLM. The promptbased fine-tuning paradigm has been applied in many tasks in KGs, such as Named Entity Recognition (Huang et al., 2022), Entity Linking (Sun et al., 2022), Entity Typing (Ding et al., 2021).
However, there is no prompt-learning study for entity-pair tasks such as EA. Our work focuses on constructing entity-pair sequences with prompts, and transforming the EA task to the NSP-style or MLM-style textual entailment task. The entailment probability is seen as entity similarity.
## 3 Methodology 3.1 Preliminaries
Knowledge Graph. A knowledge graph (KG)
could be defined as G = {E, R, A, V, T
r, T
a},
where E, R, A, V is the set of entities, relations, attributes and attribute values, respectively. The T
r = {(h, r, t) | h, t ∈ E, r *∈ R}* is the set of relational triples. The T
a = {(e, a, v) | e ∈ E, a ∈
A, v *∈ V}* is the set of attribute triples.
Entity Alignment. Given the two KGs G1 and G2, the target of EA is to find a mapping between two KGs, i.e. P = {(e, e′)|e ∈ G1, e′ ∈ G2}. A set of alignment seeds P
sis used as training data.
## 3.2 Overview
In our TEA framework, we first transform an entity as textual sequences composed of its neighbors and attribute values, and then measure the similarity between a pair of cross-KG entities via a text entailment task on their sequences. Finally, we perform the entity alignment based on similarity.
Now we elaborate on the textual entailment task.
As shown in Figure 2, we first combine two sequences of cross-KG entities with a cloze-style template, and input the combined sequence into the PLM. Then, we tune the PLM with the entailment objectives to enlarge the positive entailment probability of the positive entity pairs. The entailment probability p(y|T(*e, e*′)) is from one of the two proposed PLM-based entity aligners, NSP-Aligner or MLM-Aligner.
In practice, we find that the computationally cost is prohibitive to perform text entailment between all the entity pairs in two KGs. Therefore, besides the entailment objectives, we also tune the PLM simultaneously with the entity embedding-alignment objective, which minimizes the distance between the embeddings of the aligned entity pairs. For efficient EA inference, we first filter out the most similar candidates based on the embeddings learned from the embedding-alignment objective, and then re-rank these candidates via the entity similarity learned from the entailment objectives.
## 3.3 Input Construction
Sequence construction. We follow previous studies (Tang et al., 2021; Zhong et al., 2022) to construct sequences with neighbors and attribute values, which contain rich semantics. For entity e, the relational neighbors are Ne = {n|(e, r, n) ∈ T r},
and the attribute values are Ve = {v|(e, a, v) ∈
T
a}. We sort the Ne and Ve in alphabetical order by relation r and attribute a to form sequences respectively. The sequences are denoted as S
r(e) = "e, n1, n2*, ..., n*|Ne|[SEP]", ni ∈ Ne and S
a(e) = "e, v1, v2*, ..., v*|Ve|[SEP]", vi ∈ Ve.
Entity-pair input. Existing PLM-based EA
methods usually take the weighted hidden state of
[CLS] of single-entity input x = [CLS]S(e)[SEP]
for entity embedding. In our work, we propose to combine the sequences of two entities together and learn from their correlation. The input could be denoted as T(*e, e*′) = [CLS]S(e)[T]S(e′), where
![3_image_0.png](3_image_0.png)
the S(e) and S(e′) could be S
r(e) or S
a(e), and
[T] could be any templates. We discuss the effect of templates in Section 4.4.
Attention mask matrix. As shown in Figure 2, we design an attention mask matrix M to implement the simultaneous tuning of the entailment objectives and the entity embedding-alignment objective, where the entailment mask M0 exposes the whole entity-pair sequence to PLM and embedding masks M1 and M2 expose only one of the entities.
## 3.4 Training
Training set. In each epoch, we first construct a training set D = {(e, e+, e−)|(e, e+) ∈ Ps, e− ∈
G2, e+ ̸= e−}, where each alignment seed (*e, e*+)
from the training data P
s has a negative counterpart e−. Thus the model could be trained to distinguish the positive pair (*e, e*+) from the negative pair (*e, e*−). We randomly select e− from the top entities in G2 with the highest embedding cosine similarity scores with e. The embeddings for negative sample selection are obtained from the fixed PLM with single-entity input, and are consistent with the embeddings which are fine-tuned in the training phase with entity-pair input and embedding masks M1 or M2.
Bi-directional training. For learning the bidirectional correlation between entities for alignment, we tune the PLM with the bi-directional sequences, i.e. T(*e, e*′) and T(e′, e).
Cooperated training. For capturing the common correlation pattern of relational and attribute information, we tune the PLM with one epoch of relational input T
r(*e, e*′) and one epoch of attribute input T
a(*e, e*′) until convergence.
## 3.5 Embedding-Alignment Objective
The sequence T(*e, e*′) is tokenized and put into a pre-trained language model with the attention mask, such as multilingual BERT for cross-lingual EA. We denote the obtained hidden states conditioned on the input sequence and attention mask Mm as Hm = {h m
[CLS], h m 1
, ..., h m l
, h m
[SEP]} =
PLM(T(*e, e*′); Mm).
We obtain the embedding of entities following a standard fine-tuning paradigm. We obtain the hidden output of the PLM for the two entities e = Wembh 1
[CLS] and e′ = Wembh 2
[CLS], where the Wemb ∈ R
emb×d projects the hidden size of PLM d to embedding size emb. Then we apply the pairwise margin ranking loss in the embeddings of the training set as Equation (1) to minimize the distance between the positive entity pairs and maximize the distance of negative entity pairs. The d(e, e′) denotes the distance function between two entities and m is a hyper-parameter that represents the margin between the positive and negative pairs.
We use l2 distance as distance function.
Lmr =X
$$\sum_{(e,e^{+},e^{-})\in{\mathcal{D}}}max\{0,d({\bf e},{\bf e}^{+})-d({\bf e},{\bf e}^{-})+m\}.\tag{1}$$
## 3.6 Entailment Objectives
For fully using the language modeling ability of PLMs, existing methods (Gao et al., 2021; Sun et al., 2022) propose to model the downstream task as the pre-training tasks of PLM, i.e. NSP and MLM. We propose two aligners based on the pre-training tasks of PLMs, i.e. NSP-Aligner and MLM-Aligner. Since we transform the EA task to a bi-directional text entailment task, we directly utilize NSP Head or MLM Head to represent if two entities entail each other, i.e. align to each other.
We denote the label space of entailment-style EA
as Y = {align, not_align}.
NSP-Aligner. The origin NSP task predicts if the second sentence comes after the first sentence.
For NSP-Aligner, the model predicts the probability of whether entity e is after e′and vice versa, to demonstrate the correlation of two entities. In this way, we can treat the entailment-style EA task as an NSP task. As shown in Equation (2), with the input of T(*e, e*′), the output of NSP head is the presoftmax logit pnsp, where n ∈ {next, not_next}
respects to Y, Wnsp ∈ R
2×dis the weight matrix learned by NSP task, and h 0
[CLS] is the hidden state of [CLS] with the entailment mask M0.
$$p_{nsp}(y|T(e,e^{\prime}))=p(n|T(e,e^{\prime}))\tag{2}$$ $$=\mathbf{W}_{\text{nsp}}(tanh(\mathbf{W}\mathbf{h}_{[\alpha5]}^{0}+\mathbf{b}))$$
MLM-Aligner. The origin MLM task predicts the masked token [MASK] in the sequence. For MLM-Aligner, the model learns a mapping from the label space to the set of individual words in the vocabulary, denoted as M : *Y → V* with label word such as "Yes" of "No". In this way, we can treat the entailment-style EA task as an MLM
task. The MLM head fills the gaps [MASK] with the label word probability as Equation (3), where Wmlm ∈ R
V ×d projects the hidden state of PLM
to the vocabulary size and h 0
[MASK] is the hidden state of [MASK] with the entailment mask M0.
$$p_{mlm}(y|T(e,e^{\prime}))=p(\texttt{[MASK]}=\mathcal{M}(y)|T(e,e^{\prime}))\tag{3}$$ $$=\texttt{W}_{\texttt{mlm}}\texttt{h}_{\texttt{[MASK]}}^{0}+\texttt{b}$$
Prompt bi-directional entailment loss. In the training phase, we train the NSP-Aligner or MLMAligner with two losses. The first loss is a binary cross entropy loss for prompt entailment Lpe as shown in Equation (4) where q(y|T(*e, e*′)) =
softmax(p(y|T(*e, e*′))). We train the positive entity pair with positive label 1 and the negative pair with negative label 0. We also add the reversed L′pe with the input T(e′, e) for bi-directional modeling. The final bi-directional entailment loss is Lbe = Lpe + L′pe.
Lpe = BCE(q(y|T(*e, e*
+), 1) + BCE(q(y|*T(e, e*
$$(y|T(e,e^{-})),0)\ \ 0$$
Prompt bi-directional margin loss. The second loss is the prompt margin ranking loss Lpmr as Equation (5), where the positive probability p
+(y|T(*e, e*′)) of positive entity pairs are enlarged compared to the negative pairs.
The positive probability is p
+
nsp(y|T(*e, e*′)) =
p(n = next|T(*e, e*′)) for NSP-Aligner and p
+
mlm(y|T(*e, e*′)) = p([MASK] = "Yes"|T(*e, e*′))
for MLM-Aligner. We also use the bi-directional prompt margin loss as Lbm = Lpmr + L′pmr.
$$\begin{split}\mathcal{L}_{pmr}=\sum_{(e,e^{+},e^{-})\in\mathcal{D}}max\{0,p^{+}(y|T(e,e^{-}))\\ -p^{+}(y|T(e,e^{+}))+m\}\end{split}\tag{5}$$
$$(6)$$
The overall objective of TEA is the sum of three losses as Equation (6).
$${\mathcal{L}}={\mathcal{L}}_{m r}+{\mathcal{L}}_{b e}+{\mathcal{L}}_{b m}$$
3.7 Inference In the inference phase, we use entity embeddings for the first ranking. Then we use the PLM-based aligner, NSP-Aligner or MLM-Aligner, for reranking the hard samples with the candidates selected by the entity embeddings.
Candidate entity selection. We use entity embeddings to select the candidate entity set. For each entity in G1, we retrieve the top fixed number of entities from G2 with the highest cosine similarity scores as candidate entity set C(e). The candidate number |C(e)| is hyper-parameter.
Confidence-aware sample selection. We use the highest similarity score between e and entities in C(e) as the embedding confidence score for sample e, denoted as c(e) = max{cos(e, e′)|e′ ∈ C(e)}. We assume that the test samples with lower confidence scores are harder samples for embeddings to obtain accurate results. Then we re-rank the samples with lower confidence than a fixed threshold c(e) < δ, with the positive probability p
+(y|T(*e, e*′)) of PLM-based aligner. The samples with higher confidence use the similarity of embeddings as final alignment results. The threshold δ is hyper-parameter.
## 4 Experiments 4.1 Experimental Settings
Datasets. To evaluate the proposed method, we conduct experiments on two widely used EA datasets: DBP15K (Sun et al., 2017) and SRPRS
(Guo et al., 2019). **DBP15K** is the most commonly used EA dataset and consists of three crosslingual EA subsets, which are Chinese-English
(ZH-EN), Japanese-English (JA-EN), and FrenchEnglish (FR-EN). **SRPRS** is a sparse EA dataset with much fewer triples and consists of two crosslingual EA subsets, which are English-French (ENFR) and English-German (EN-DE). The dataset
| Method | DBPZH−EN | DBPJA−EN | DBPFR−EN | SRPRSEN−FR | SRPRSEN−DE | | | | | | | | | | |
|-----------------------------------------------------------------------------------------------------------------------|------------|------------|------------|--------------|--------------|------|------|------|------|------|------|------|------|------|------|
| H@1 H@10 MRR H@1 H@10 MRR H@1 H@10 MRR H@1 H@10 MRR H@1 H@10 MRR Methods modeling relational triples and entity names | | | | | | | | | | | | | | | |
| RDGCN | 69.7 | 84.2 | 0.75 | 76.3 | 89.7 | 0.81 | 87.3 | 95.0 | 0.90 | 67.2 | 76.7 | 0.71 | 77.9 | 88.6 | 0.82 |
| HGCN | 70.8 | 84.0 | 0.76 | 75.8 | 88.9 | 0.81 | 88.8 | 95.9 | 0.91 | 67.0 | 77.0 | 0.71 | 76.3 | 86.3 | 0.80 |
| CEA(Emb) | 71.9 | 85.4 | 0.77 | 78.5 | 90.5 | 0.83 | 92.8 | 98.1 | 0.95 | 93.3 | 97.4 | 0.95 | 94.5 | 98.0 | 0.96 |
| CEA | 78.7 | - | - | 86.3 | - | - | 97.2 | - | - | 96.2 | - | - | 97.1 | - | - |
| FT-EA w/o T a | 67.5 | 91.0 | 0.76 | 68.9 | 90.8 | 0.77 | 95.8 | 99.3 | 0.97 | 96.7 | 98.8 | 0.97 | 97.0 | 99.1 | 0.98 |
| TEA-NSP w/o T a | 81.5 | 95.3 | 0.87 | 89.0 | 96.7 | 0.92 | 96.8 | 99.5 | 0.98 | 97.3 | 99.4 | 0.98 | 97.2 | 99.6 | 0.98 |
| TEA-MLM w/o T a | 83.1 | 95.7 | 0.88 | 88.3 | 96.6 | 0.91 | 96.8 | 99.4 | 0.98 | 98.1 | 99.5 | 0.99 | 98.3 | 99.6 | 0.99 |
| Methods modeling relational triples, attribute triples, and entity names | | | | | | | | | | | | | | | |
| AttrGNN | 79.6 | 92.9 | 0.85 | 78.3 | 92.1 | 0.83 | 91.9 | 97.8 | 0.91 | - | - | - | - | - | - |
| BERT-INT(name) | 81.4 | 83.5 | 0.82 | 80.6 | 83.5 | 0.82 | 98.7 | 99.2 | 0.99 | 97.1 | 97.5 | 0.97 | 98.6 | 98.8 | 0.99 |
| SDEA | 87.0 | 96.6 | 0.91 | 84.8 | 95.2 | 0.89 | 96.9 | 99.5 | 0.98 | 96.6 | 98.6 | 0.97 | 96.8 | 98.9 | 0.98 |
| FT-EA | 85.4 | 95.7 | 0.89 | 83.2 | 93.4 | 0.87 | 95.7 | 99.0 | 0.97 | 96.4 | 98.9 | 0.97 | 97.0 | 99.1 | 0.98 |
| TEA-NSP | 94.1 | 98.3 | 0.96 | 94.1 | 97.9 | 0.96 | 97.9 | 99.7 | 0.99 | 98.5 | 99.6 | 0.99 | 98.7 | 99.6 | 0.99 |
| TEA-MLM | 93.5 | 98.2 | 0.95 | 93.9 | 97.8 | 0.95 | 98.7 | 99.6 | 0.99 | 98.5 | 99.6 | 0.99 | 98.7 | 99.7 | 0.99 |
statistics of DBP15K and SRPRS are listed in Table 2. Consistent with previous studies, we randomly choose 30% of the samples for training and 70%
for testing.
Evaluation metrics. We use Hits@K (K=1,10),
which is the accuracy in top K predictions, and Mean Reciprocal Rank (MRR), which is the average reciprocal ranking of ground-truth entity, as evaluation metrics. The higher Hits@K and higher MRR indicate better performance.
Implementation details. We implement our approach with Pytorch and Transformers (Wolf et al., 2020). We use BERT (Devlin et al., 2019) as the PLM for cross-lingual EA following Liu et al.
(2020); Tang et al. (2021); Zhong et al. (2022). The information for evaluating TEA is one of the relational and attribute information which performs higher Hits@1 in the validation set, i.e. attribute for DBP15K and relational for SRPRS. The training is early stopped after 3 epochs of no improvements
| Dataset | |E| | |R| | |A| | |T r | | |T a | | |P| |
|----------------------|-----------|-------|---------|---------------|----------------|-------|
| DBPZH−EN | ZH 19,388 | 1,701 | 7,780 | 70,414 | 379,684 15,000 | |
| EN 19,572 | 1,323 | 6,933 | 95,142 | 567,755 | | |
| DBPJA−EN | JA 19,814 | 1,299 | 5,681 | 77,214 | 354,619 15,000 | |
| EN 19,780 | 1,153 | 5,850 | 93,484 | 497,230 | | |
| DBPFR−EN | FR 19,661 | 903 | 4,431 | 105,998 | 528,665 15,000 | |
| EN 19,993 | 1,208 | 6,161 | 115,722 | 576,543 | | |
| EN 15,000 | 221 | 274 | 36,508 | 70,750 15,000 | | |
| FR 15,000 | 177 | 393 | 33,532 | 56,344 | | |
| SRPRSEN−FR EN 15,000 | 222 | 275 | 38,363 | 62,715 15,000 | | |
| DE 15,000 | 120 | 185 | 37,377 | 142,506 | | |
| SRPRSEN−DE | | | | | | |
of Hits@1 in the validation set. We conduct the experiments in Ubuntu 18.04.5 with a single NVIDIA
A6000 GPU with 48GB of RAM.
Baselines. To comprehensively evaluate our method TEA, the baselines are grouped into two categories according to the input information. Since we construct the sequences with entity names, we mainly compare TEA with the method that also models entity names. (1) The methods modeling relational triples and entity names:
RDGCN (Wu et al., 2019a), HGCN (Wu et al.,
2019b), CEA (Zeng et al., 2020). (2) The methods modeling relational triples, attribute triples, and entity names: AttrGNN (Liu et al., 2020), BERTINT (Tang et al., 2021), SDEA (Zhong et al., 2022).
For BERT-INT which uses entity descriptions, we replace the descriptions with entity names for a fair comparison following Zhong et al. (2022).
We construct a baseline FT-EA, which learns and inferences with the entity embeddings for alignment results. FT-EA could be seen as TEA w/o textual entailment objectives and re-ranking. We report the results of TEA with two PLM-based aligners, TEA-NSP and TEA-MLM. We also ablated the attribute sequence (w/o T
a) to compare with the baselines of group (1).
## 4.2 Comparison With Baselines.
We compare our method with the baselines and the results are presented in Table 1.
Comparison with group (1). Compared with Table 2: Datasets statistics for EA.
methods modeling relational triples and entity names, TEA-NSP and TEA-MLM achieve the best or the second best in all metrics on all datasets.
Even on DBPZH−EN where baselines fail to perform well, TEA-MLM outperforms the baselines by at most 4.4% in Hits@1 and 11% in MRR. Moreover, compared with FT-EA, the re-ranking with NSP-Aligner and MLM-Aligner brings significant improvements, at most 20.1% in Hits@1 and 15%
in MRR improvements.
The TEA-NSP and TEA-MLM perform comparably on DBP15K and TEA-MLM performs better than TEA-NSP on SRPRS. The reason could be that MLM-Aligner is more competitive in the lowresource setting (Gao et al., 2021) since the SRPRS
dataset has fewer triples. We will look into EA
under the low-resource setting in the future.
Comparison with group (2). Compared with methods modeling heterogeneous triples and entity names, TEA performs the best or the second best in all metrics. The TEA-NSP outperforms the baselines by 9.3% in Hits@1 and 7% in MRR
at most, and outperforms the FT-EA by 10.9% in Hits@1 and 9% in MRR at most. We could observe that BERT-INT(name) (Tang et al., 2021)
performs the best or the second best in some metrics on the FR-EN, EN-FR, and EN-DE alignment.
The reason could be that BERT-INT relies more on the similarity between entity names, and English shares many similar expressions with French and German. Thus BERT-INT's performance declines on the alignment between less-alike languages.
TEA on SRPRS in group (1) and (2) are both evaluated with relational sequences. With extra attribute information, TEA in group (2) outperforms the TEA w/o T
ain group (1). It demonstrates that
| Template T(e, e′) | TEA-NSP | TEA-MLM |
|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-------------------------------|-----------|
| H@1H@10MRRH@1H@10MRR | | |
| Hard templates | | |
| S(e) ? [MASK]. S(e ′) | 93.3 98.1 0.95 93.2 98.1 0.95 | |
| S(e) ? [MASK]. I know that S(e ′) 93.6 97.8 0.95 93.4 97.8 0.95 S(e) ? [MASK]. I think that S(e ′) 92.3 97.4 0.94 93.2 97.8 0.95 Soft templates ′), l=1 94.1 98.3 0.96 92.8 97.8 0.95 S(e) [MASK][P0]...[Pl] S(e S(e) [MASK][P0]...[Pl] S(e ′), l=2 93.4 97.8 0.95 93.2 97.9 0.95 S(e) [MASK][P0]...[Pl] S(e ′), l=3 92.8 97.8 0.95 93.3 98.2 0.95 S(e) [MASK][P0]...[Pl] S(e ′), l=4 92.5 97.8 0.95 93.5 98.2 0.95 | | |
by modeling the common correlation pattern of the heterogeneous information with the PLM-based aligners, the extra attribute information would enhance the alignment of relational information. On the contrary, without the modeling of the common correlation, the performance of FT-EA slightly declines or stays the same on the SRPRS dataset than FT-EA w/o T
a.
The TEA-NSP are comparable but slightly better than TEA-MLM in group (2). The reason could be that the interaction modeling of two aligners is similar, but NSP-Aligner is better with sentence-pair input than MLM-Aligner since NSP is designed to process sentence pairs.
## 4.3 Ablation Study
| DBPZH−EN | | | |
|--------------------|------|------|------|
| Hits@1 Hits@10 MRR | | | |
| TEA-NSP | 94.1 | 98.3 | 0.96 |
| TEA-NSP w/o [T] | 92.6 | 97.7 | 0.95 |
| TEA-NSP w/o Lbe | 90.3 | 97.4 | 0.93 |
| TEA-NSP w/o Lbm | 93.2 | 98.0 | 0.95 |
| TEA-NSP w/o T r | 90.1 | 97.1 | 0.93 |
| MLM-FT-EA | 85.2 | 95.2 | 0.89 |
We conduct the ablation study as shown in Table 3.
Q1: Is the cloze-style template necessary for NSP-Aligner? Since most prompt-learning methods use the cloze-style templates to form an MLM
task rather than an NSP task, thus we remove the cloze-style template in the NSP-Aligner with TEANSP w/o [T], i.e. only use [SEP] token to divide the sequences of two entities. The performance declines 1.5% in Hits@1 compared to the TEA-NSP,
which shows that the template could also enhance the performance of NSP-Aligner.
Q2: Are the entailment objectives necessary?
The ablation of two entailment losses Lbe and Lbm results in a decrease of 3.8% and 0.9%, respectively.
Thus two losses both enhance the re-ranking performance and the binary cross-entropy loss enhances more than the margin loss.
Q3: Do the relational sequences and attribute sequences enhance each other? The TEA-NSP
and the TEA-NSP w/o T
rare both evaluated by
![7_image_1.png](7_image_1.png)
attribute information. By modeling the extra relational information, the performance of evaluating with attribute information increases by 4.0% in Hits@1, which means the modeling of relational information enhances the modeling of the attribute information. Moreover, the analysis in Section 4.2 shows the reversed enhancement. They demonstrate that by modeling the common correlation of relational and attribute information in a unified manner would enable mutual enhancement.
Q4: Is the interaction of entity-pair necessary? We construct MLM-FT-EA, a variation of FT-EA, to ablate the entity-pair interaction with reservation of the prompt learning. Inspired by recent progress in sentence embedding
(Jiang et al., 2022), we use a cloze-style template This sentence of "S(e)" means [MASK]. to obtain entity embeddings with MLM-FT-EA. The performance of MLM-FT-EA is similar to FTEA. It shows that the entity-pair interaction is the most important component in TEA rather than the prompt-learning paradigm.
## 4.4 Effect Of Templates
In this section, we study the effect of templates in TEA. As stated by previous studies (Gao et al.,
2021; Tam et al., 2021), the templates have impacts on the performance of prompt-learning oriented tasks. We design both hard templates and soft templates on DBPZH−EN dataset. The hard templates are manually designed, while the soft templates have a varying number of learnable special prompt tokens following Ding et al. (2021). As shown
![7_image_0.png](7_image_0.png)
in Table 4, the templates could affect the performance of EA considerably. For hard templates, the I know that improves the performance the most.
For soft templates, the TEA-NSP needs fewer special tokens while the TEA-MLM needs more.
## 4.5 Effect Of Re-Ranking Parameters
Figure 3 shows the hyper-parameter analysis of the re-ranking process of TEA. The sample number is the number of entities in G1 to be re-ranked by the PLM-based aligners. With a higher threshold, more samples are re-ranked and the performance of EA
is better. When threshold δ = 0.9, the re-ranking samples are 37% less than re-ranking all the samples (δ = 1.0) but the performance is similar and the re-ranking time cost are highly reduced.
The candidate number is the number of entities in G2 that are most likely to be the ground truth.
With more candidates, the performance is better.
The reason could that the ground truth entity is more likely to be in the candidate set when the candidate set is larger. Moreover, even with only 16 candidates, the performance of TEA in Hits@1 exceeds the FT-EA by 7.6%.
## 4.6 Case Study
We conduct a case study as shown in Table 5, trying to find the aligned entity of *Singapour (FR)*. The entity ranking conducted by embeddings shows the best-aligned entity is *Thailand (EN)*. However, by re-ranking the candidates with PLM-based aligners, the fine-grained interaction between entities is explicitly modeled. As shown in the visualization, the Singapour (FR)-Singapore (EN) pair has more attentive sub-sequences (darker diagonal short lines)
while the unaligned pair Singapour (FR)-Thailand
(EN) have not. Moreover, the aligned entity is ranked first place by the PLM-based aligner.
## 5 Conclusion
To address the limitations of the existing EA
method, the lack of interaction and heterogeneous embedding spaces, we propose a unified textual entailment framework for entity alignment called TEA. We transform the origin relational triples and attribute triples of an entity into textual sequences and model the EA task as a bi-directional textual entailment task between the sequences of cross-KG
entities. We propose two kinds of PLM-based aligners to capture the fine-grained correlation between entities with two kinds of sequences in a unified manner. The entailment probability is used for measuring entity similarity and ranking the entity candidates. Experiment results on five cross-lingual datasets show that TEA outperforms existing EA
methods and enables the mutual enhancement between the heterogeneous information.
## Limitations
Despite that TEA achieves some gains for EA, TEA still has the following limitations:
First, TEA has a higher computation cost than the embedding-based EA methods in the re-ranking phase, since TEA process entity-pair input for modeling the interaction between them. For reducing time costs, we adopt the confidence-aware reranking strategy to reduce the number of re-ranking samples and candidates. However, the inference time cost is still higher than the embedding-based methods. In addition, the candidate selection may be limited in some corner cases if the ground truth entity is not ranked in the top |C| similar entities calculated by entity embeddings. We will further explore efficient approaches which could cover the corner cases.
Second, the alignment of relational information of TEA requires the entity names to construct sequences. However, the entity names are not always available in some EA datasets, such as the Wikidata KG in OpenEA Benchmark (Sun et al., 2020).
In that case, TEA can use the attribute sequences without entity names for entity alignment. Though TEA w/o T
rcan achieve competitive performance as shown in Table 3, it still limits the application of TEA. We will further explore PLM-based approaches to align the relational information without the requirement of entity names.
## Acknowledgements
This research is supported by the Natural Science Foundation of Tianjin, China (No.
22JCJQJC00150, 22JCQNJC01580), the National Natural Science Foundation of China (No.
62272250, U1936206, 62002178), Tianjin Research Innovation Project for Postgraduate Students (No. 2022SKYZ232), and the Fundamental Research Funds for the Central Universities (No.
63231149, 63232114).
## References
Antoine Bordes, Nicolas Usunier, Alberto GarciaDuran, Jason Weston, and Oksana Yakhnenko.
2013. Translating embeddings for modeling multirelational data. *NeurIPS*, 26.
Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. *NeurIPS*, 33:1877–1901.
Muhao Chen, Yingtao Tian, Mohan Yang, and Carlo Zaniolo. 2017. Multilingual knowledge graph embeddings for cross-lingual knowledge alignment. In IJCAI, pages 1511–1517.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language understanding. In *NAACL*, pages 4171–4186.
Ning Ding, Yulin Chen, Xu Han, Guangwei Xu, Pengjun Xie, Hai-Tao Zheng, Zhiyuan Liu, Juanzi Li, and Hong-Gee Kim. 2021. Prompt-learning for fine-grained entity typing. arXiv preprint arXiv:2108.10604.
Tianyu Gao, Adam Fisch, and Danqi Chen. 2021.
Making pre-trained language models better few-shot learners. In ACL, pages 3816–3830.
Lingbing Guo, Zequn Sun, and Wei Hu. 2019. Learning to exploit long-term relational dependencies in knowledge graphs. In *ICML*, pages 2505–2514. PMLR.
Yucheng Huang, Kai He, Yige Wang, Xianli Zhang, Tieliang Gong, Rui Mao, and Chen Li. 2022. Copner:
Contrastive learning with prompt guiding for fewshot named entity recognition. In *COLING*, pages 2515–2527.
Ting Jiang, Shaohan Huang, Zihan Zhang, Deqing Wang, Fuzhen Zhuang, Furu Wei, Haizhen Huang, Liangjie Zhang, and Qi Zhang. 2022. Promptbert: Improving bert sentence embeddings with prompts.
arXiv preprint arXiv:2201.04337.
Yunshi Lan, Gaole He, Jinhao Jiang, Jing Jiang, Wayne Xin Zhao, and Ji-Rong Wen. 2021. A survey on complex knowledge base question answering:
Methods, challenges and solutions. arXiv preprint arXiv:2105.11644.
Zhenxi Lin, Ziheng Zhang, Meng Wang, Yinghui Shi, Xian Wu, and Yefeng Zheng. 2022. Multi-modal contrastive representation learning for entity alignment.
In *COLING*, pages 2572–2584.
Fangyu Liu, Muhao Chen, Dan Roth, and Nigel Collier. 2021. Visual pivoting for (unsupervised) entity alignment. In *AAAI*, pages 4257–4266.
Xiao Liu, Haoyun Hong, Xinghao Wang, Zeyi Chen, Evgeny Kharlamov, Yuxiao Dong, and Jie Tang.
2022. Selfkg: self-supervised entity alignment in knowledge graphs. In Proceedings of the ACM Web Conference 2022, pages 860–870.
Zhiyuan Liu, Yixin Cao, Liangming Pan, Juanzi Li, and Tat-Seng Chua. 2020. Exploring and evaluating attributes, values, and structures for entity alignment.
In *EMNLP*, pages 6355–6364.
Xin Mao, Wenting Wang, Yuanbin Wu, and Man Lan.
2021. From alignment to assignment: Frustratingly simple unsupervised entity alignment. In *EMNLP*,
pages 2843–2853.
Xuhui Sui, Ying Zhang, Kehui Song, Baohang Zhou, Guoqing Zhao, Xin Wei, and Xiaojie Yuan. 2022. Improving zero-shot entity linking candidate generation with ultra-fine entity type information. In *COLING*,
pages 2429–2437.
Yi Sun, Yu Zheng, Chao Hao, and Hangping Qiu. 2022.
Nsp-bert: A prompt-based few-shot learner through an original pre-training task——next sentence prediction. In *COLING*, pages 3233–3250.
Zequn Sun, Muhao Chen, and Wei Hu. 2021. Knowing the no-match: Entity alignment with dangling cases. In ACL, pages 3582–3593.
Zequn Sun, Wei Hu, and Chengkai Li. 2017. Crosslingual entity alignment via joint attribute-preserving embedding. In *International Semantic Web Conference*, pages 628–644. Springer.
Zequn Sun, Wei Hu, Qingheng Zhang, and Yuzhong Qu.
2018. Bootstrapping entity alignment with knowledge graph embedding. In *IJCAI*, volume 18, pages 4396–4402.
Zequn Sun, Jiacheng Huang, Wei Hu, Muhao Chen, Lingbing Guo, and Yuzhong Qu. 2019. Transedge:
Translating relation-contextualized embeddings for knowledge graphs. In *International Semantic Web* Conference, pages 612–629. Springer.
Zequn Sun, Qingheng Zhang, Wei Hu, Chengming Wang, Muhao Chen, Farahnaz Akrami, and Chengkai Li. 2020. A benchmarking study of embedding-based entity alignment for knowledge graphs. *Proceedings* of the VLDB Endowment, 13(12).
Derek Tam, Rakesh R Menon, Mohit Bansal, Shashank Srivastava, and Colin Raffel. 2021. Improving and simplifying pattern exploiting training. In *EMNLP*,
pages 4980–4991.
Xiaobin Tang, Jing Zhang, Bo Chen, Yang Yang, Hong Chen, and Cuiping Li. 2021. Bert-int: a bert-based interaction model for knowledge graph alignment. In IJCAI, pages 3174–3180.
Xiaolei Wang, Kun Zhou, Ji-Rong Wen, and Wayne Xin Zhao. 2022. Towards unified conversational recommender systems via knowledge-enhanced prompt learning. In *ACM SIGKDD*, pages 1929–1937.
Zhichun Wang, Qingsong Lv, Xiaohan Lan, and Yu Zhang. 2018. Cross-lingual knowledge graph alignment via graph convolutional networks. In EMNLP, pages 349–357.
Max Welling and Thomas N Kipf. 2016. Semisupervised classification with graph convolutional networks. In *ICLR*.
Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander M. Rush. 2020. Transformers: State-of-the-art natural language processing. In EMNLP, pages 38–45.
Yuting Wu, Xiao Liu, Yansong Feng, Zheng Wang, Rui Yan, and Dongyan Zhao. 2019a. Relation-aware entity alignment for heterogeneous knowledge graphs.
In *IJCAI*.
Yuting Wu, Xiao Liu, Yansong Feng, Zheng Wang, and Dongyan Zhao. 2019b. Jointly learning entity and relation representations for entity alignment. In EMNLP-IJCNLP, pages 240–249.
Hsiu-Wei Yang, Yanyan Zou, Peng Shi, Wei Lu, Jimmy Lin, and Xu Sun. 2019. Aligning cross-lingual entities with multi-aspect information. In *EMNLPIJCNLP*, pages 4431–4441.
Weixin Zeng, Xiang Zhao, Jiuyang Tang, and Xuemin Lin. 2020. Collective entity alignment via adaptive features. In *ICDE*, pages 1870–1873. IEEE.
Ziyue Zhong, Meihui Zhang, Ju Fan, and Chenxiao Dou. 2022. Semantics driven embedding learning for effective entity alignment. In *ICDE*, pages 2127–
2140. IEEE.
Baohang Zhou, Xiangrui Cai, Ying Zhang, and Xiaojie Yuan. 2021. An end-to-end progressive multi-task learning framework for medical named entity recognition and normalization. In ACL, pages 6214–6224.
Hao Zhu, Ruobing Xie, Zhiyuan Liu, and Maosong Sun. 2017. Iterative entity alignment via knowledge embeddings. In *IJCAI*.
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
✓ A1. Did you describe the limitations of your work?
Section Limitations after the Section 5 Conclusion.
A2. Did you discuss any potential risks of your work?
Not applicable. Left blank.
✓ A3. Do the abstract and introduction summarize the paper's main claims?
Section Abstract and Section 1 Introduction.
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✓ **Did You Use Or Create Scientific Artifacts?** Section 3 Methodology And Section 4 Experiments.
✓ B1. Did you cite the creators of artifacts you used?
Section 4 Experiments.
B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
Not applicable. Left blank.
B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
Not applicable. Left blank.
B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
Not applicable. Left blank.
✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
Section 4 Experiments.
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
Section 4 Experiments.
## C ✓ **Did You Run Computational Experiments?** Section 4 Experiments.
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
Section 4 Experiments.
The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance.
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Section 4 Experiments and GitHub codes.
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
Section 4 Experiments.
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
Section 4 Experiments.
D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
No response.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? No response.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
No response.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
No response.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
No response. |
guerin-chemla-2023-bird | It is a Bird Therefore it is a Robin: On {BERT}{'}s Internal Consistency Between Hypernym Knowledge and Logical Words | https://aclanthology.org/2023.findings-acl.560 | The lexical knowledge of NLP systems shouldbe tested (i) for their internal consistency(avoiding groundedness issues) and (ii) bothfor content words and logical words. In thispaper we propose a new method to test the understandingof the hypernymy relationship bymeasuring its antisymmetry according to themodels. Previous studies often rely only on thedirect question (e.g., A robin is a ...), where weargue a correct answer could only rely on collocationalcues, rather than hierarchical cues. We show how to control for this, and how it isimportant. We develop a method to ask similarquestions about logical words that encode anentailment-like relation (e.g., because or therefore).Our results show important weaknessesof BERT-like models on these semantic tasks. | # It Is A Bird Therefore It Is A Robin**: On Bert'S Internal Consistency** Between Hypernym Knowledge And Logical Words
Nicolas Guerin Ecole Normale Superieure, France [email protected]
## Abstract
The lexical knowledge of NLP systems should be tested (i) for their internal consistency
(avoiding groundedness issues) and (ii) both for content words and logical words. In this paper we propose a new method to test the understanding of the hypernymy relationship by measuring its antisymmetry according to the models. Previous studies often rely only on the direct question (e.g., *A robin is a ...*), where we argue a correct answer could only rely on collocational cues, rather than hierarchical cues.
We show how to control for this, and how it is important. We develop a method to ask similar questions about logical words that encode an entailment-like relation (e.g., because or *therefore*). Our results show important weaknesses of BERT-like models on these semantic tasks.
## 1 Introduction
The main training task of transformer-based architectures (Vaswani et al., 2017; Devlin et al., 2019; Liu et al., 2019) is to predict which word may occur in a given position in a sentence. As a first pass, syntax understanding is an important prerequisite to complete this task through which systems learn the distribution of words within sentences, satisfying the constraints imposed by the linguistic environments these words are in. Accordingly, these models have shown strong syntactic capabilities (Goldberg, 2019; Wu et al., 2020; Warstadt et al., 2019; Jumelet and Hupkes, 2018).
What do they learn about semantics? Hypernymy offers a strong opportunity to study this question as it is very close to entailment, the cornerstone relation in semantics. Also, it can be studied solely through the Masked Language Modelling task, and without fine-tuning. For instance, in the prompt A robin is a [MASK], BERT assigns a high probability to *bird* in the MASK position (Petroni et al.,
2019; Jiang et al., 2020). These models have thus captured semantic information about the *relations* Emmanuel Chemla Ecole Normale Superieure, France [email protected] between content words, here a relation between robin and *bird*. In this work, we begin by following up on the nuanced findings in this area (Hanna and Marecek ˇ , 2021; Ravichander et al., 2020), using and refining methods to assess the understanding of hypernymy, pair by pair.
Then we use these initial results and measurements to study the semantics of logical words, and more specifically connectives, such as thus or *because*. The idea is to evaluate the internal *coherence* of the system. Specifically, we ask whether NLP models coherently assign a high probability to *thus* in the place of the mask in This is a robin,
[MASK] this is a bird, exactly in these cases where the pair robin-*bird* is independently (and ungroudedly) registered as a hyponym-hypernym pair.
We thus raise and answer these research questions: Do BERT-like models understand the asymmetric taxonomic relationship of hypernymy (or only a symmetric co-occurrence relation between hypo-hypernyms)? Do they use entailment-like connectives appropriately? Do they show internal consistency: using entailment connectives to connect cases where *they* detect hypernymy (i.e. indepedently of whether hypernymy actually holds)?
Hence, our contributions are as follows:
- We test the non-symmetric aspect of hypernymy. To our knowledge, this is absent from other studies, which only test hypernymy through one-sided prompts.
- We extend the methodology to test the semantics of logical connectives like *because* and therefore.
- We analyze logical connectives in a nongrounded manner: we test the semantic knowledge of entailment connectives, using entailment facts (hypernyms) that are independently proved to be known by the system.
- We show that BERT-like models have important weaknesses on all previous tasks. The most surprising one being a reversed semantics for *because*.
## 2 Semantics As Internal Consistency
One classical approach to semantics is that knowing the meaning of a sentence is knowing in which situations this sentence is true, that is, being able to map (sentence, situation) pairs onto truth-values
(Davidson, 1967; Lewis, 1970). Text-only-trained machines surely cannot do so, simply because they only take sentences as inputs, not situations.
However, semantics may also be seen as the graph of all entailment relations between sentences.
These entailment relations can follow from taxonomic relations between content words: the fact that all *robins* are *birds* will create entailment relations between sentences (e.g., *John saw a robin* entails *John saw a bird*). Being able to identify these is showing a strong command of the meaning of the words *robin* and *bird*, independently of how these words are grounded in the actual world.
Entailment relations between sentences can also follow from the meaning of the logical words they contain. In a "proof-theoretic" approach, one may even say that this is all there is to the semantics of logical words, which are not grounded: the power to create a consistent net of entailment relations.
Our work is part of this recent vision of the notion of meaning for non-grounded LMs (Piantadosi and Hill, 2022).
## 3 Related Work
NLP models have been tested for their syntactic abilities (Rogers et al., 2020; Lin et al., 2019; Wu et al., 2020; Goldberg, 2019; Warstadt et al., 2019; Jumelet and Hupkes, 2018; Marvin and Linzen, 2018) for which they obtain strong results, but to a lesser extent for their semantic abilities (Rogers et al., 2020; Balasubramanian et al., 2020; Wallace et al., 2019; Ettinger, 2019) for which they show more fragile performances.
Models such as BERT encode world knowledge
(Feldman et al., 2019; Jiang et al., 2020). The first part of our work is a direct follow-up of prompt studies (Liu et al., 2021) targeting knowledge of hypernymy which has been shown to be high but fragile and inconsistent (Petroni et al., 2019; Hanna and Marecek ˇ , 2021; Ravichander et al., 2020; Bouraoui et al., 2019). We leverage this knowledge to extend the investigation to logical words.
## 4 Experiment 1: Content Words 4.1 Metrics
Considering a hyponym-hypernym pair such as
(robin, *bird*), what probability does BERT assign to the hypernym word *bird* in a MASK position:
$$=b i r d\mid A$$
$$\mathbb{P}[M A]$$
## P[Mask = Bird | A Robin Is A Mask] (1)
For more than 30% of the pairs, the target hypernym is the top-1 word predicted, and in 80% of the pairs, it is in the top-100 (Petroni et al., 2019).
This indicates that BERT recognizes that *robin* and bird are likely to co-occur in a sentence. We ask whether the system recognizes that the hyponymhypernym relation is not symmetric, a critical fact that makes hypernymy a variant of entailment (and not of relevance). We do so by correcting the above probability with the probability of that same hypernym, albeit in the reverse configuration. Thus, we consider the log-ratio of (1) and (2):
$\theta$
## P[Mask = Bird | A Mask Is A Robin] (2)
Furthermore, like (Jiang et al., 2020; Hanna and Marecek ˇ , 2021), we explore a set of prompts and not just one. For each pair of hyponym-hypernym
(*h, c*) (h the head and c the class to which h belongs) we start from a template DET1 *h REL*
DET2 c, with DETi determiners (e.g. the, a, an, ϵ)
and REL an instantiation of the hypernymy relation
(e.g. is, is a subclass of, is a kind of, *is a sort of*, is a type of). We use the model to compute a score for a set of determiners and relations and then we select the prompt with the highest one (more details in Appendix B, with explanations as to how this optimizes the form of the prompt without *a priori* biasing the final log-ratio scores).
Once the prompt is selected, we compute the following **hypernymy score** σ:
EL DE$I\frac{1}{2}$ $\left[n\right]$ ...
σ(*h, c*) := log P[MASK = c | DETn 1 h *REL DET*n 2 *MASK*]
P[MASK = c | DETd 1 *MASK REL DET*d 2 h](3)
which should be positive for well-understood pairs.
Note that the subscript n and d stands for numerator and denominator respectively as the two are optimized separately. Other formulae are just as natural, such as the σ′ presented in Appendix A.
![2_image_0.png](2_image_0.png)
Table 1: Mean (and standard deviation) of the σ scores for content words for BERT-base.
## 4.2 Multi-Token Prediction
Some hyponym-hypernym pairs are made of multitoken expressions. For example, *great ape* is tokenized as two tokens. To overcome this difficulty we use the technique presented in (Feldman et al.,
2019) consisting in computing the probability of each token independently and iteratively unmasking the token with the highest probability.
## 4.3 Knowledge Sources
To build hyponym-hypernym pairs we used the following four different knowledge sources: **WordNet** (Miller, 1995) from which we obtained 110, 663 pairs by selecting synsets connected by the *hyponym* relation, **T-Rex** (Elsahar et al., 2018) using the *subclass of* relation we extracted 2, 489 pairs, **ConceptNet** (Speer and Havasi, 2012) with its IsA relation lead to 49, 572 pairs and **BLESS**
(Baroni and Lenci, 2011) which contains 1, 200 triplets connected with the *hyper* relation. Note that some pairs are made of rare, if not obscure, words, the **BLESS** corpus aims at minimizing this issue, as it has been filtered through crowd sourcing.
## 4.4 Results
We conducted all experiments on BERT (Devlin et al., 2019), ELECTRA (Clark et al., 2020), DistilBERT (Sanh et al., 2020) and ALBERT (Lan et al.,
2020). The results for BERT-base are given in Table 1 (see Appendix C for the other models). The mean of the scores is always positive (p < 0.001).
This shows that these models encode the hypernymy relation better than chance. Yet, an average of 45% pairs are encoded in the wrong direction
(see Fig. 1 for BERT-base). From a grounded approach of semantics, these are errors. In Experiment 2, we take them as an opportunity to look for traces of strong semantics command, as an internal consistency constraint.
## 5 Experiment 2: Logical Words
The previous experiment establishes how models capture semantic relations between content nouns.
We can use these results to investigate how the
![2_image_1.png](2_image_1.png)
same models understand logical words. Concretely, one would expect a high probability for words like thus, so, *therefore* in the following sentence, and a low probability for words like because, since, for as they encode this entailment the other way around:
## It'S A Robin Mask It'S A Bird. (4)
Results on hypernym-hyponym pairs show great variability hence, the high probability for *thus*logical words in the sentence above is expected only if that particular pair, (robin, *bird*), is assigned a high hypernymy score by the model. For pairs that receive a very negative score, the expectation is in fact that the results would be reversed. This approach thus allows us to test the semantic *consistency* of the system. Consistency could be perfect for logical words, even if there are grounding errors with content words and world knowledge.
We tested 7 logical words of the *thus* class
(thus, therefore, consequently, then, *accordingly*,
so, *hence*), and 5 logical words of the *because* class
(because, since, for, seeing, *considering*).
## 5.1 Metrics
We define a score for a logical word w and a hyponym-hypernym pair (*h, c*) as in (5). This score measures the probability of finding, say, *thus*, in a sentence like (4) above, corrected for the probability of finding it in the reverse sentence.
$$s(w;h,c):=$$ $$\log\frac{\mathbb{P}[MASK=w\mid PRE_{1}\;DET_{1}\;h\;MASK\;PRE_{2}\;DET_{2}\;c]}{\mathbb{P}[MASK=w\mid PRE_{2}\;DET_{2}\;c\;MASK\;PRE_{1}\;DET_{1}\;h]}\tag{5}$$
As before, we explore multiple prompts from a set of determiners DET and prefixes PRE (see details in Appendix B). A global score s(l) is obtained for a logical word w by averaging s(w; *h, c*)
over the set of all content word pairs (*h, c*).
As seen in §4.4, the hyponym-hypernym pairs are not all equal regarding to our hypernymy scores.
We thus introduce s
+(w) (resp. s−(w)): the average of s(w; *h, c*) on the top 5%1(resp. bottom 5%)
pairs according to σ (or σ′). Hence, for a coherent model having understood those logical words we expect s
+ ≥ 0 ≥ s− for *thus*-words, and the reverse inequalities s
+ ≤ 0 ≤ s− for *because*words. See Fig. 2 for a graphical representation of the expected results for a consistent model.
![3_image_4.png](3_image_4.png)
## 5.2 Results
Table 2 presents the global scores s for BERT-base
(full results are in Appendix C). The *thus*-words almost always obtain a positive global score. The because-words sometimes show a negative score
(as they should), but most of the times they obtain a positive score just like *thus*-words.
Figure 3 presents the s
+ and s− scores obtained by BERT-base for the WordNet database relative to the σ score. The *thus*-words obtain a positive score on the best pairs, and a smaller score (albeit not necessarily negative) on the worst pairs. This is the expected result for a model that has correctly understood these logical words. However, *because*words display a somewhat similar behavior: a positive score over the best pairs, and a lower score over the worst pairs. All models show a qualitatively similar behavior, although ELECTRA seems to behave more consistently (see Appendix C). Overall, these results suggest that *thus*-words and *because*-words alike are understood as being of the thus-type.
1Empirically we explored several thresholds in percentiles or absolute sigma scores and obtained qualitatively similar results. The 5% threshold was chosen as inclusive enough to have enough pairs to make statistics, and strict enough to make sure the elements in there unambiguously passed the test from Experiment 1.
WordNet T-Rex ConceptNet BLESS
![3_image_0.png](3_image_0.png)
![3_image_1.png](3_image_1.png)
![3_image_2.png](3_image_2.png)
![3_image_3.png](3_image_3.png)
therefore 0.13 0.12 0.07 0.30
consequently 0.11 0.10 0.04 0.11
then 0.03 0.00 0.00 −0.05
accordingly 0.11 0.07 0.05 0.11
so 0.06 0.04 0.02 −0.03
hence 0.09 0.00 0.05 0.30
thus 0.12 0.00 0.04 0.09
because 0.22 0.26 0.20 0.32
since 0.03 0.11 0.02 −0.23
for 0.01 0.00 0.00 −0.06
seeing −0.09 −0.13 −0.17 −0.57
considering −0.03 0.12 −0.02 −0.33
## 5.3 Discussion
The similar behavior between *thus* and *because* is puzzling. A first possibility that could explain this would be a significant difference in frequency between *thus* words and *because* words in the training corpus. Indeed a signal that would be too weak for *because* could lead to a poor assimilation of its semantics. Unfortunately we did not check frequencies in the training corpus but according to the python package wordfreq2, *because* is for example one hundred times more frequent than *therefore* or thus, ruling out this explanation. Another possibility is that *because* is not used as the converse of *thus*, even by humans. Underlyingly, the result shows that the sentence *This is a robin, because* it is a bird may be more natural than the reverse This is a bird, because it is a robin. One may argue that the latter is somewhat tautological and, as such, not natural, while the former may find its use cases
(e.g., when discriminating between a *robin* and an orangutan). One may wonder why the converse does not apply to *thus*-words however. To clear this issue one could look at the occurrences of *thus* and because in the relevant training corpora. Regardless, a conclusion we can already draw is that the simplest entailment-like semantics for *because* is very far from what is encoded in these models.
## 6 Conclusion
We propose an approach to the semantic study of BERT-type networks. First we evaluate the models on the non-symmetry of an entailment-like relation, namely hypernymy. The tested models show an 2https://pypi.org/project/wordfreq/
![4_image_0.png](4_image_0.png)
average positive understanding of this relation. But this is accompanied with a large variance, showing that the relation is very often captured backward.
Thanks to these results we moved to testing logical words of type *thus* and *because*, which impersonate the entailment relation at the core of all enterprises in semantics. Its non-symmetry is one of its fundamental property. The models capture on average the non-symmetry of the words of type thus appropriately and they also show good consistency results, that is, a stronger signal for pairs that are themselves well-captured. However, the models obtain similar scores for the words of type because and, applying the same standards, they thus capture them backwards. Moreover all these results are to be qualified by their great variability across models and knowledge sources.
These properties albeit simple are yet at the core of what human semantics is however they are not reliably captured. This failure on these basic logical tests then raises questions regarding their otherwise impressive success. They also provide a method to reconstruct their actual semantics, if it is not human-like, and offers challenging tasks for these models.
## Limitations
Our evaluations rely on the Masked Language Modelling task as it was a convenient task to conduct our experiments and following up on similar related works. To apply it to models trained differently, e.g., models of the GPT class, one needs to develop comparable appropriateness measures, which is a general desiderata of the field.
We evaluated the models through prompts. We used a fixed set of prompts, and others could produce better results for each of the tasks at stake.
Even if one could find *some* working prompts, this may not be ambitious enough, however. It would show that the relevant information is present in the system, say about hyponym-hypernym pairs. But the tests we are proposing rely on the idea that models should work well consistently, across tasks, and across prompts.
With our zero-shot prompting method, we tested the pre-trained models. One could imagine ways to fine-tune these models to our requirements. Our goal here was to first make visible that the groundless training might have been sufficient to encode a consistent semantics, and yet that it did not.
In future work, we also hope to develop quantitative measures of success and consistency (starting from the statistical models in Appendix D), consistency measures which compare parallel performance over more tasks at the same time.
## Ethics Statement
We will evaluate and compensate for the carbon footprint of our computations.
## Acknowledgements
The research leading to these results was supported by ANR-17-EURE-0017, and was granted access to the HPC resources of GENCI-IDRIS under the allocation AD011013783.
## References
Sriram Balasubramanian, Naman Jain, Gaurav Jindal, Abhijeet Awasthi, and Sunita Sarawagi. 2020.
What's in a name? are bert named entity representations just as good for any other name?
Marco Baroni and Alessandro Lenci. 2011. How we blessed distributional semantic evaluation.
Zied Bouraoui, Jose Camacho-Collados, and Steven Schockaert. 2019. Inducing relational knowledge from bert. (arXiv:1911.12753). ArXiv:1911.12753
[cs].
Kevin Clark, Minh-Thang Luong, Quoc V. Le, and Christopher D. Manning. 2020. Electra: Pre-training text encoders as discriminators rather than generators.
ArXiv:2003.10555 [cs] type: article.
Donald Davidson. 1967. Truth and meaning. *Synthese*,
17(1):304–323.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, Kristina Toutanova Google, and A I Language. 2019.
Bert: Pre-training of deep bidirectional transformers for language understanding.
Hady Elsahar, Pavlos Vougiouklis, Arslen Remaci, Christophe Gravier, Jonathon Hare, Frederique Laforest, and Elena Simperl. 2018. T-REx: A large scale alignment of natural language with knowledge base triples. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation
(LREC 2018), Miyazaki, Japan. European Language Resources Association (ELRA).
Allyson Ettinger. 2019. What bert is not: Lessons from a new suite of psycholinguistic diagnostics for language models.
Joshua Feldman, Joe Davison, and Alexander M. Rush.
2019. Commonsense knowledge mining from pretrained models.
Yoav Goldberg. 2019. Assessing bert's syntactic abilities.
Michael Hanna and David Marecek. 2021. ˇ Analyzing bert's knowledge of hypernymy via prompting. In Proceedings of the Fourth BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP, page 275–282, Punta Cana, Dominican Republic. Association for Computational Linguistics.
Zhengbao Jiang, Antonios Anastasopoulos, Jun Araki, Haibo Ding, and Graham Neubig. 2020. X-factr:
Multilingual factual knowledge retrieval from pretrained language models.
Jaap Jumelet and Dieuwke Hupkes. 2018. Do language models understand anything? on the ability of lstms to understand negative polarity items. pages 222–
231.
Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut.
2020. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations.
David Lewis. 1970. General semantics. 22(1–2):18–67.
Yongjie Lin, Yi Chern Tan, and Robert Frank. 2019.
Open sesame: Getting inside bert's linguistic knowledge.
Pengfei Liu, Weizhe Yuan, Jinlan Fu, Zhengbao Jiang, Hiroaki Hayashi, and Graham Neubig. 2021. Pretrain, prompt, and predict: A systematic survey of prompting methods in natural language processing.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, Veselin Stoyanov, and Paul G
Allen. 2019. Roberta: A robustly optimized bert pretraining approach.
Rebecca Marvin and Tal Linzen. 2018. Targeted syntactic evaluation of language models.
George A. Miller. 1995. Wordnet: A lexical database for english. *Commun. ACM*, 38(11):39–41.
Fabio Petroni, Tim Rocktäschel, Patrick Lewis, Anton Bakhtin, Yuxiang Wu, Alexander H Miller, and Sebastian Riedel. 2019. Language models as knowledge bases? pages 2463–2473.
Steven T. Piantadosi and Felix Hill. 2022. Meaning without reference in large language models.
(arXiv:2208.02957). ArXiv:2208.02957 [cs].
Abhilasha Ravichander, Eduard Hovy, Kaheer Suleman, Adam Trischler, and Jackie Chi Kit Cheung.
2020. On the systematicity of probing contextualized word representations: The case of hypernymy in bert. In *Proceedings of the Ninth Joint Conference on Lexical and Computational Semantics*, page 88–102, Barcelona, Spain (Online). Association for Computational Linguistics.
Anna Rogers, Olga Kovaleva, and Anna Rumshisky.
2020. A primer in bertology: What we know about how bert works.
Victor Sanh, Lysandre Debut, Julien Chaumond, and Thomas Wolf. 2020. *DistilBERT, a distilled version* of BERT: smaller, faster, cheaper and lighter.
Robyn Speer and Catherine Havasi. 2012. Representing general relational knowledge in ConceptNet 5. In Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC'12),
pages 3679–3686, Istanbul, Turkey. European Language Resources Association (ELRA).
Ashish Vaswani, Google Brain, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need.
Eric Wallace, Yizhong Wang, Sujian Li, Sameer Singh, and Matt Gardner. 2019. Do nlp models know numbers? probing numeracy in embeddings.
Alex Warstadt, Yu Cao, Ioana Grosu, Wei Peng, Hagen Blix, Yining Nie, Anna Alsop, Shikha Bordia, Haokun Liu, Alicia Parrish, Sheng-Fu Wang, Jason Phang, Anhad Mohananey, Phu Mon Htut, Paloma Jeretic, and Samuel R Bowman. 2019. Investigating ˇ
bert's knowledge of language: Five analysis methods with npis.
Zhiyong Wu, Yun Chen, Ben Kao, and Qun Liu. 2020.
Perturbed masking: Parameter-free probing for analyzing and interpreting bert.
## A Another Hypernymy Score Σ′
We introduce another **hypernymy score** σ′that corrects (1) differently:
$\sigma^{\prime}(h,c):=$ $\log\frac{\mathbb{P}[\textit{MASK}=c\,|\,\text{DET}_{1}^{n}\,h\,\text{REL}\,\text{DET}_{2}^{n}\,\textit{MASK}]}{\mathbb{P}[\textit{MASK}=h\,|\,\text{DET}_{1}^{1}\,c\,\text{REL}\,\text{DET}_{2}^{1}\,\textit{MASK}]}-$ $\log\frac{\mathbb{P}[\textit{MASK}_{2}=c\,|\,\text{DET}_{1}^{1}\,\textit{MASK}_{1}\,\text{REL}\,\text{DET}_{2}^{2}\,\textit{MASK}_{2}]}{\mathbb{P}[\textit{MASK}_{2}=h\,|\,\text{DET}_{1}^{n}\,\textit{MASK}_{1}\,\text{REL}\,\text{DET}_{2}^{n}\,\textit{MASK}_{2}]}$
$$(6)$$
There, the first log-ratio involves the probability of the hyponym in the reverse sentence. We also control for the mere probabilities of hypernyms and hyponyms without the influence of the other (by masking the other), which is the role of the second log-ratio. We report results for both of these scores.
## B Prompts
To compute the prompts we chose a zero-shot approach to model probing. Starting from a template of the form:
## Det1 Noun1 Rel Det2 Noun2
DETi are determiners that we can chose from the set {the, a, an, ϵ}, REL is an instantiation of the hypernymy relation that can be {is, *is a subclass of*,
is a kind of, is a sort of, *is a type of* } and *NOUN*i are placeholders for the hyponyms-hypernyms and for the *MASK* token during inference.
Then for each pair (*h, c*) we computed an optimal REL∗that maximizes:
$$\begin{array}{r}{\max_{\texttt{DET}_{1}^{n},\texttt{DET}_{2}^{n}}(}\\ {\mathbb{P}(}\end{array}$$
max DETn 1 ,DETn 2 ( P(MASK2 = c | DETn 1 MASK1 REL DETn 2 MASK2) ×P(MASK1 = h | DETn 1 MASK1 REL DETn 2 MASK2) ) × max DETd 1 ,DETd 2 ( P(MASK2 = c | DETd 1 MASK2 REL DETd 2 MASK1) ×P(MASK1 = h | DETd 1 MASK2 REL DETd 2 MASK1) )
(7)
By selecting the determiners that realize the max in the previous equation which are DETn∗
ifor the numerator of the hypernymy score (3) and DETd∗
i for the denominator, one obtain two prompts:
DETn∗
1 NOUN1 REL∗ DETn∗
$${}^{*}\,D E T_{2}^{n*}\,N O U N_{2}$$
## And Detd∗ 1 Noun1 Rel∗ Detd∗ 2 Noun2
Note that we constrain the relation instantiation REL to be the same in both prompts to gain computation time.
The idea behind (7) is to investigate for which prompt the model prefers to favor the probability of the hypernym or the hyponym in any position in the sentence but without the influence of the other word of the pair (hence two *MASK*s, where the placeholders *NOUN*i were). This encourages the selection of an appropriate prompt independently of the overall truth-value of the sentence with the two words, which is precisely what we want to study afterwards.
For logical words the idea is exactly the same.
We explore a set of prompts with DETi determiners from {the, a, an, ϵ} and PREi prefixes from {this is, *it is*} chosen to maximize for a given logical word w and a given pair (*h, c*):
P[MASK = w | PRE1 DET1 h MASK PRE2 DET2 c] ×
P[MASK = w | PRE2 DET2 c MASK PRE1 DET1 h](8)
Here however we constrain the determiners DETi to be the same in the denominator and the numerator to gain computation time.
## C Results
In this section we give the full results across the various datasets and models. We will not conduct full analyses of all results, although results are very heterogenous (even for the same model across different knowledge sources). Table 3 gives the full results for hypernymy scores σ and σ′. Tables 4, 5, 6, 7 and 8 show the logical scores results. The undesirable scores are shown in red.
Impressionistically, ELECTRA seems to have the best consistency results (Fig. 4). At the other end of the spectrum, although it looks like BERT has sometimes well captured *thus*-type logical words, there are also counter-examples to this, on BLESS for instance (Fig. 5).
## D Statistics
To obtain a more quantitative measure of the phenomenon, we fit a Linear Mixed Effect Model to evaluate the impact on the logical score of the type of pair, better or worse (ie. top 5% vs bottom 5%)
as well as the interaction of this variable with the
BERT-baseσ 0.38 (3.85) 1.14 (3.45) 0.85 (2.76) 0.68 (3.42)
σ′ 1.13 (4.57) 0.98 (4.67) 0.79 (3.26) 0.11 (3.26)
BERT-largeσ 0.79 (3.89) 1.58 (4.00) 1.07 (3.12) 1.62 (2.99)
σ′ 1.28 (5.10) 1.74 (5.53) 1.14 (3.84) 0.91 (3.30)
DistilBERTσ 0.63 (2.56) 1.03 (2.37) 0.51 (2.18) 1.31 (2.05)
σ′ 0.54 (2.66) 0.52 (2.79) 0.50 (2.22) 0.53 (1.92)
ALBERTσ 0.63 (3.02) 0.63 (2.97) 0.45 (2.76) 1.09 (1.96)
σ′ 0.78 (9.59) 2.69 (10.15) 0.56 (7.73) 1.86. (8.82)
ELECTRA-smallσ 0.52 (2.06) 0.43 (2.14) 0.51 (1.94) 1.34 (2.01)
σ′ 0.24 (2.55) 0.27 (2.72) 0.17 (2.22) 0.43 (1.71)
WordNet T-Rex ConceptNet BLESS
Table 3: Mean (and standard deviation) of the scores for content words for the different models and datasets.
![7_image_0.png](7_image_0.png)
WordNet T-Rex ConceptNet BLESS
therefore 0.13 0.12 0.07 0.30
consequently 0.11 0.10 0.04 0.11
then 0.03 0.00 0.00 −0.05
accordingly 0.11 0.07 0.05 0.11
so 0.06 0.04 0.02 −0.03
hence 0.09 0.00 0.05 0.30
thus 0.12 0.00 0.04 0.09
because 0.22 0.26 0.20 0.32
since 0.03 0.11 0.02 −0.23
for 0.01 0.00 0.00 −0.06
seeing −0.09 −0.13 −0.17 −0.57
considering −0.03 0.12 −0.02 −0.33
Table 4: Score s for BERT-base.
Table 6: Score s for DistilBERT.
Table 5: Score s for BERT-large.
Table 7: Score s for ALBERT.
| WordNet | T-Rex | ConceptNet | BLESS | |
|--------------|---------|--------------|---------|-------|
| therefore | −0.06 | −0.07 | 0.08 | 0.31 |
| consequently | −0.02 | −0.01 | −0.03 | −0.04 |
| then | −0.11 | −0.08 | −0.13 | −0.36 |
| accordingly | 0.01 | 0.05 | −0.01 | 0.00 |
| so | 0.07 | 0.04 | 0.08 | 0.12 |
| hence | −0.04 | −0.09 | −0.05 | −0.04 |
| thus | 0.07 | 0.03 | 0.12 | 0.24 |
| because | −0.11 | −0.18 | 0.09 | 0.29 |
| since | −0.13 | −0.15 | 0.06 | 0.06 |
| for | −0.25 | −0.31 | −0.04 | −0.15 |
| seeing | 0.10 | 0.02 | 0.08 | 0.12 |
| considering | −0.22 | −0.15 | 0.10 | 0.15 |
| WordNet | T-Rex | ConceptNet | BLESS | | | | | |
|--------------|---------|--------------|---------|-------|---------|-------|------------|-------|
| therefore | 0.07 | 0.16 | 0.02 | 0.25 | | | | |
| consequently | 0.03 | 0.07 | −0.05 | 0.05 | | | | |
| then | −0.07 | −0.14 | −0.10 | −0.22 | | | | |
| accordingly | −0.01 | 0.05 | −0.04 | 0.16 | | | | |
| so | 0.09 | 0.09 | 0.09 | 0.19 | | | | |
| hence | 0.09 | 0.00 | 0.05 | 0.30 | | | | |
| thus | 0.08 | 0.15 | 0.01 | 0.17 | | | | |
| because | 0.21 | 0.25 | 0.16 | −0.26 | | | | |
| since | 0.27 | 0.24 | 0.06 | −0.38 | | | | |
| for | 0.11 | 0.13 | 0.00 | −0.14 | | | | |
| seeing | −0.18 | −0.15 | −0.08 | 0.22 | | | | |
| considering | 0.06 | −0.11 | −0.06 | −0.40 | WordNet | T-Rex | ConceptNet | BLESS |
| therefore | 0.00 | 0.09 | −0.02 | −0.17 | | | | |
| consequently | −0.05 | 0.05 | −0.06 | −0.15 | | | | |
| then | 0.00 | 0.11 | 0.03 | −0.10 | | | | |
| accordingly | −0.12 | −0.06 | −0.14 | −0.21 | | | | |
| so | 0.10 | 0.15 | 0.09 | 0.01 | | | | |
| hence | 0.01 | 0.11 | −0.02 | −0.16 | | | | |
| thus | 0.03 | 0.11 | 0.02 | −0.05 | | | | |
| because | 0.10 | 0.13 | 0.10 | −0.13 | | | | |
| since | 0.11 | 0.14 | 0.11 | −0.03 | | | | |
| for | 0.04 | 0.07 | 0.01 | −0.19 | | | | |
| seeing | 0.00 | −0.06 | −0.03 | −0.09 | | | | |
| considering | −0.02 | −0.06 | −0.02 | −0.17 | | | | |
![8_image_0.png](8_image_0.png)
| WordNet | T-Rex | ConceptNet | BLESS | |
|--------------|---------|--------------|---------|-------|
| therefore | 0.13 | 0.12 | 0.00 | 0.23 |
| consequently | 0.02 | 0.00 | −0.08 | −0.06 |
| then | −0.01 | 0.04 | −0.06 | −0.04 |
| accordingly | 0.16 | 0.13 | 0.11 | 0.06 |
| so | 0.17 | 0.15 | 0.11 | 0.35 |
| hence | 0.06 | 0.05 | −0.04 | −0.02 |
| thus | 0.14 | 0.12 | 0.06 | 0.20 |
| because | −0.02 | 0.00 | −0.09 | −0.50 |
| since | 0.03 | 0.04 | −0.04 | −0.76 |
| for | −0.04 | −0.05 | −0.07 | −0.49 |
| seeing | −0.04 | −0.09 | 0.01 | 0.01 |
| considering | −0.06 | 0.01 | −0.11 | −0.90 |
category of the logical word *thus*-like vs. *because*like. This gives us the following model:
Ypw = β0 + S0l + I0p + βpXp + βlXw + βplXpXw (9)
with Xp = 1 if the pair p is of type "better" and 0 if it is of type "worse", Xw = 1 if the logical word w is *thus*-like and 0 if it is *because*-like,
(β0, βp, βw) the fixed effect parameters, βpw the interaction parameter and S0w, ∼ N (0, ω2 00) and I0p, ∼ N (0, t200) the random effect parameters.
We do it for BERT (base and large), excluding BLESS because it has too few pairs. A distinct understanding of *thus*-words vs. *because*-words would be indicated by a positive βpw, the interaction between the category of the pair and the category of the logical words. But this parameter is always either statistically negative or non statistically different from zero. This reveals that *because*-words were not understood differently from thus-logical words.
The parameter βp is 8 times out of 12 significantly positive (3 datasets WordNet, T-Rex and ConceptNet, 2 models BERT-base and BERT-large and 2 scores σ and σ′). In other words, for BERT
the top hypernym-hyponym pairs received higher scores than the bottom pairs, suggesting that logical words, *thus* and *because* alike, were understood like *thus*-words.
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
✓ A1. Did you describe the limitations of your work?
Section: Limitations A2. Did you discuss any potential risks of your work?
Not applicable. Left blank.
✓ A3. Do the abstract and introduction summarize the paper's main claims?
Secation: Abstract and 6. Conclusion
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✓ **Did You Use Or Create Scientific Artifacts?** Section 4
✓ B1. Did you cite the creators of artifacts you used?
Section 4
B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
Not applicable. Left blank.
B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
Not applicable. Left blank.
B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
Not applicable. Left blank.
B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
Not applicable. Left blank.
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
Left blank.
## C ✓ **Did You Run Computational Experiments?** Section 4 And 5
✗ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used? Because we used well-known BERT model, cited its original paper and only for inference, hence no GPU training.
The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance.
C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Not applicable. Left blank.
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
Section 4 and 5 (and Appendix)
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
Section 4 D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
No response.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? No response.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
No response.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
No response.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
No response. |
li-etal-2023-defending | Defending against Insertion-based Textual Backdoor Attacks via Attribution | https://aclanthology.org/2023.findings-acl.561 | Textual backdoor attack, as a novel attack model, has been shown to be effective in adding a backdoor to the model during training. Defending against such backdoor attacks has become urgent and important. In this paper, we propose AttDef, an efficient attribution-based pipeline to defend against two insertion-based poisoning attacks, BadNL and InSent. Specifically, we regard the tokens with larger attribution scores as potential triggers since larger attribution words contribute more to the false prediction results and therefore are more likely to be poison triggers. Additionally, we further utilize an external pre-trained language model to distinguish whether input is poisoned or not. We show that our proposed method can generalize sufficiently well in two common attack scenarios (poisoning training data and testing data), which consistently improves previous methods. For instance, AttDef can successfully mitigate both attacks with an average accuracy of 79.97{\%} (56.59{\%} up) and 48.34{\%} (3.99{\%} up) under pre-training and post-training attack defense respectively, achieving the new state-of-the-art performance on prediction recovery over four benchmark datasets. | # Defending Against Insertion-Based Textual Backdoor Attacks Via Attribution
Jiazhao Li1 Zhuofeng Wu1 Wei Ping5 Chaowei Xiao3,4 V.G. Vinod Vydiswaran2,1 1School of Information, University of Michigan 2Department of Learning Health Sciences, University of Michigan 3University of Wisconsin Madison, 4Arizona State University, 5 NVIDIA
{jiazhaol, zhuofeng, vgvinodv}@umich.edu [email protected], [email protected]
## Abstract
Textual backdoor attack, as a novel attack model, has been shown to be effective in adding a backdoor to the model during training. Defending against such backdoor attacks has become urgent and important. In this paper, we propose AttDef, an efficient attribution-based pipeline to defend against two insertion-based poisoning attacks, *BadNL* and *InSent* Specifically, we regard the tokens with larger attribution scores as potential triggers since larger attribution words contribute more to the false prediction results and therefore are more likely to be poison triggers. Additionally, we further utilize an external pre-trained language model to distinguish whether input is poisoned or not. We show that our proposed method can generalize sufficiently well in two common attack scenarios (poisoning training data and testing data), which consistently improves previous methods. For instance, AttDef can successfully mitigate both attacks with an average accuracy of 79.97% (56.59%↑) and 48.34%
(3.99%↑) under pre-training and post-training attack defense respectively, achieving the new state-of-the-art performance on prediction recovery over four benchmark datasets.1
## 1 Introduction
Deep Learning models have developed rapidly in the recent decade and achieved tremendous success in many natural language processing (NLP)
tasks (Devlin et al., 2019; Lewis et al., 2020; Radford et al., 2019; Raffel et al., 2020). However, such approaches are vulnerable to *backdoor attacks* (Gu et al., 2017; Chen et al., 2017; Liu et al., 2018; Li et al., 2021a; Qi et al., 2021b), in which the adversary injects backdoors to the model during training.
Specifically, as shown in Figure 1, attackers poison the model by inserting backdoor triggers into a small fraction of training data and changing their 1Data and code can be found in https://github.
com/JiazhaoLi/AttDef.git labels to the target labels. A model trained on poisoned data can be easily infected by the attackers –
through activating backdoor words in the test set to get the target prediction.
Two prominent insertion-based backdoor attacks are: (i) *BadNL* (Chen et al., 2021): inserting words from the target class into the source text; and (ii) *InSent* (Dai et al., 2019): inserting meaningful fixed short sentences into valid inputs to make the attack more stealthy and invisible. Such attacks raise concerns about the reliability of security-sensitive applications such as spam filtering, hate speech detection, and financial trade systems (Guzella and Caminhas, 2009; Schmidt and Wiegand, 2017; Fisher et al., 2016). Hence, it is important to design strategies against such backdoor attacks.
To address these threats, Qi et al. (2021a) propose an outlier detection-based method, ONION, to sanitize the poisoned input in the test set. ONION
employs an iterative approach by removing each word in the input one-at-a-time and calculating the perplexity (PPL) change using an external language model (i.e., GPT-2). Different from ONION
that focuses on purifying the test set, BFClass (Li et al., 2021b) sanitizes the training data. Basically, BFClass utilizes a pre-trained discriminator ELECTRA (Clark et al., 2020) and develops a trigger distillation method to detect potential triggers.
Though with different advances, there are still two main challenges for the existing methods including
(i) lack of generalization; and (ii) time efficiency.
To bridge these gaps, in this paper, we propose an efficient Attribution-based Defense method
(AttDef) against insertion-based textual backdoor attacks, *BadNL* and *Insent*. Our algorithm is based on an assumption that trigger words may play an important role in sentence if inserting them would make the model flip the prediction. Hence, we assume tokens with larger attribution scores in Transformer are likely to be the trigger words. AttDef consists of a Poison Sample Discriminator, a trig-
![1_image_0.png](1_image_0.png)
ger detector and a mask sanitization. Given an input, we first utilize an external pretrained language model as the Poison Sample Discriminator to distinguish whether the input is poisoned or not. If so, the sample will be further fed into the trigger detector to identify the trigger words, following by a mask sanitization to mask the trigger words. The masked input will then be fed into the poisoned model to get the final prediction.
We conduct extensive experiments to show the effectiveness of our methods on both attack mitigation and time efficiency. We achieve an average of 79.97% (56.59%↑) and 48.34% (3.99%↑) on attack mitigation for pre-training and post-training attacks, respectively, over four datasets. AttDef is 3.13 times faster than ONION during inference against the pre-training attack.
Our main contributions are summarized below:
1. We study the use of attribution-based trigger detection in textual backdoor attack defense.
2. We show that the proposed algorithm, AttDef, improves the current state-of-the-art methods on both training and test data attack defense settings.
3. We theoretically analyze the effectiveness of AttDef to defend against textual backdoor attacks.
## 2 Backdoor Attack Scenarios
In this section, we introduce the two mainstream backdoor attack scenarios for text data: pre-training attack defense and post-training attack defense.
Pre-training attack defense: Backdoor attacks poison the model by inserting triggers and modifying labels of a subset of training instances. Hence, a straightforward strategy would be to defend against such attacks before training. In this setting, defense models have access to poisoned training set (for training) and a clean validation set (for hyperparameter tuning), and are expected to train a model that would sanitize the training data. Recently, Li et al.
(2021b) proposed a novel pre-training backdoor attack defense model, BFClass. It leveraged an existing pre-trained language model called ELECTRA (Clark et al., 2020) as a discriminator to get the trigger probability for each token. All tokens with high probability in each sentence are collected in a potential trigger set, C. Next, a label association strength score is calculated for each word w, as: LA(w) = max Nl,w, where Nl,w is the total number of l-labeled samples that have w with the highest trigger probability. Tokens with high label association strength are considered as triggers:
T = {w|w ∈ C ∧LA(w) > (k×ρ(w)+b)×|X|}
where |X| denotes the size of the training set, ρ(w)
is the relative document frequency of word w, k and b are hyperparameters.
Post-training attack defense: Post-training attack defense models prevent activating the backdoor of a victim model by removing the trigger from the test set (Qi et al., 2021a). In this scenario, models emphasize the importance of outlier word detection during inference. Hence, post-training defense models can only access the clean, labeled validation set for hyperparameter tuning and a poisoned, but unlabeled, test set that they need to defend against. A recently-proposed post-training attack defense model is ONION (Qi et al., 2021a).
Given a test sample s = w1*, . . . , w*n with n tokens, ONION tests perplexity difference ∆PPLi by removing the words one-at-a-time: ∆PPLi =
PPL0 − PPLi, where PPL0 and PPLi are the perplexities of the original sentence and the sentence without wi, respectively. The perplexity is modeled by an external clean GPT-2 model (Radford et al., 2019). ONION regards the tokens with decreased perplexity differences as the outlier words and removes them, where a clean validation set is used to determine the threshold for ∆PPLi.
## 3 Methodology 3.1 Threat Model
In this paper, we follow the same threat models as in ONION (Qi et al., 2021a). In particular, the adversary can poison the training data by adding insertion-based backdoor patterns including BadNL (Chen et al., 2021) and *InSent* (Dai et al.,
2019). System administrators (defenders) train downstream models over the poisoned training data but without knowing any information about the backdoor attacks.
## 3.2 Overview Of Attdef
Fig. 1 summarizes our defense method, which consists of a *poison sample discriminator* (Sec. 3.3),
an *attribution-based trigger detector* (Sec. 3.4),
and a *mask sanitization* (Sec. 3.5). We consider both defense settings described in Sec. 2.
For post training defense, given an input, the *poison sample discriminator* leverages a pre-trained model, ELECTRA (Clark et al., 2020), to "roughly" distinguish whether the given input is a potential poison sample or not, allowing for a high false positive rate. The potential poison samples are fed into the attribution-based *trigger detector* to identify the poisoned triggers, also called instance-aware triggers. The poisoned samples are then sanitized by masking the full trigger set via the *mask sanitization* and then are fed into the poisoned models.
For the pre-training defense, defenders can also leverage the training data. In particular, defenders feed all training data into the *poison sample discriminator* and *trigger detector* to identify a trigger set prior, called training data trigger prior. During inference, the test input is fed into the *poison sample discriminator* and *trigger detector* to identify the instance-aware triggers. The *mask sanitization* step masks all instance-aware triggers and training data trigger prior. The masked input is then fed into the poisoned models. In the following section, we will describe each component in detail.
## 3.3 Poison Sample Discriminator
We leverage ELECTRA from Clark et al. (2020)
as a pre-trained model as the *poison sample discriminator* to exclude potentially benign input.
Clark et al. (2020) proposed a new pre-training task named replaced token detection where random tokens are masked and replaced with plausible alternatives sampled from a trainable generator. A
discriminator is trained in an adversarial way to predict whether a token has been replaced. Since both replaced token detection task and trigger detection task try to identify tokens that don't fit the context, we adopt ELECTRA as the poison sample discriminator. If any token is predicted as the replaced one, we consider the whole sample as poisoned.
Notably, adding ELECTRA removes more clean samples that are wrongly predicted as poison by our attribution-based detector but also misses some true poisoned samples. However, we empirically show that it will introduce more pros than cons. We discuss the role of ELECTRA further in Sec. 6.
## 3.4 Attribution-Based Trigger Detector
The goal of the *attribution-based trigger detector* is to detect potential poisoned trigger words, referred to as instance-aware triggers. Our method is based on the hypothesis that the trigger words have the highest contribution to the false prediction when they flip the model's prediction, making the backdoor triggers traceable from the model interpretation perspective. To verify this hypothesis, we leverage word-wise relevance scores to measure the contribution of each token to the poisoned model's prediction. Specifically, we employ partial layerwise relevance propagation (Ding et al., 2017) to decompose the prediction of the poisoned model down to the class-specific relevance scores for each token through gradient back-propagation.
Fig. 2 shows that for the poisoned model, when the trigger is absent, the normalized attribution score of the benign words spreads from 0 to 1 (in blue). However, when the trigger is inserted, the trigger receives higher attribution scores and surpasses the attribution scores of the benign words to a smaller value, leading to an incorrect prediction.
Ideally, the backdoor triggers can be detected by
![3_image_0.png](3_image_0.png)
setting a threshold over the attribution scores to distinguish them from the benign tokens.
## 3.5 Mask Sanitization
The goal of mask sanitization is to mask the potential trigger words of the given sentence. Here, we consider two settings.
Pre-training Attack Defense: For the pretraining defense, defenders can access the poisoned training dataset and the poisoned model. Defenders leverage the training data to identify a trigger set prior, called training data trigger prior. Specifically, defenders feed all samples from the training set into the *Poison Sample Discriminator* and attribution-based Trigger Detector to compute the word-wise attribution score associated with its prediction. Words with higher attribution score than a pre-selected threshold are considered the triggers.
Following the same notation in Sec. 2, we calculate the label association strength of word LA(w). Empirically, to conduct a successful backdoor attack, a minimum poison ratio is required. Hence, we set the lower boundary of LA(w) as the 0.5% of the size of training dataset. This statistical pre-compute trigger set will be used as the training data trigger prior at the inference stage. At the inference stage, given an input, defenders will mask all words that appear in training data trigger prior and instanceaware triggers with a placeholder *'[MASK]'* considering the position embedding of transformer. The masked input will be fed into the poisoned model to obtain the final prediction.
Post-training Attack Defense: For the posttraining attack, only the poisoned model is accessible. Thus, defenders only mask the instance-aware triggers. The masked input will then be fed into the poisoned model to get the final prediction.
## 4 Experimental Set Up
Datasets and Model Following previous works
(BFClass and ONION), we evaluate our defense method on four benchmark datasets - SST2 (Socher et al., 2013), OLID (Zampieri et al.,
2019), AG (Zhang et al., 2015), and IMDB (Maas et al., 2011). An overview of the datasets is given in Table 5. We select BERTBASE (Devlin et al., 2019)
as our backbone victim model. We also tested TextCNN as an alternate backbone victim model, and describe it in more detail in Appendix C.
Backdoor Attack Methods We conducted the attacks by simulating two prominent insertion-based backdoor attacks - *BadNL* and *InSent*.
- **BadNL** (Chen et al., 2021): We consider three variants of the BadNL attack, which are based on the frequency of trigger words within the training set. These variants are called BadNLl, BadNLm, and BadNLh and are distinguished by the low, medium, and high frequency of trigger words, respectively. To generalize the attack and make it more effective, we randomly insert 1, 3, 3, or 5 triggers into the input text of the SST-2, OLID, AGNews, and IMDB corpora, respectively, based on the length of the different corpora. This follows the settings outlined in the paper in Qi et al. (2021a).
- **InSent** (Dai et al., 2019): One fixed short sentence, *"I watched this 3D movie."*, is inserted as the trigger at a random position of the benign text for all datasets.
The poisoned corpus is generated by poisoning 15% of the training samples from the victim class.
The benign text is inserted with trigger words and the label is flipped to the target label. 2 We follow the attack settings in Qi et al. (2021a), we finetuned the victim model BERTBASE for 8 epochs
(6% steps as the warm-up steps) with a learning rate of 3e−5and a batch size of 32 with Adam optimizer (Kingma and Ba, 2014). 3 For defense settings, we use the pre-trained ELECTRALARGE as the poisoned sample discriminator. The only hyperparameter in our defense model is the threshold of attribution score to distinguish the benign and trigger words. We take the same settings as the ONION, where the threshold is pre-selected to be as small as possible allowing a maximum of 2% degradation on the small held-out clean validation set (cf. Sec. 6).
2The trigger candidate sets are given in Appendix D.
3The model training environment is summarized in Appendix E.
| Trigger Detection | End-to-End Defense Result | | | | | | | | | | |
|---------------------|-----------------------------|--------|---------|--------|------|-------|-------|-------|-------|------|-------|
| Poisoned Model | BFClass | AttDef | BFClass | AttDef | | | | | | | |
| Data | Attacks | ASR | CACC | Prec. | Rec. | Prec. | Rec. | ∆ASR | ∆CACC | ∆ASR | ∆CACC |
| Benign | - | 91.84 | - | - | - | - | - | 0.00 | - | 1.84 | |
| BadNLl | 99.93 | 91.31 | 1.00 | 1.00 | 0.27 | 0.40 | 87.08 | 0.13 | 80.24 | 1.92 | |
| BadNLm | 98.97 | 90.96 | 0.33 | 0.22 | 0.05 | 0.12 | 51.51 | -0.17 | 67.24 | 1.92 | |
| BadNLh | 89.78 | 90.87 | 1.00 | 0.20 | 0.13 | 0.36 | 13.44 | 0.14 | 53.99 | 2.49 | |
| InSent | 100.00 | 91.40 | 0.00 | 0.00 | 0.32 | 0.44 | 0.00 | 0.00 | 57.08 | 2.22 | |
| Avg | 97.17 | 91.13 | 0.58 | 0.36 | 0.19 | 0.33 | 38.00 | 0.02 | 64.64 | 2.08 | |
| SST-2 | Benign | - | 81.82 | - | - | - | - | - | -0.82 | - | 1.65 |
| BadNLl | 100.00 | 81.23 | 0.38 | 1.00 | 0.43 | 0.92 | 46.58 | -0.23 | 79.61 | 0.77 | |
| BadNLm | 100.00 | 81.30 | 0.38 | 0.71 | 0.32 | 0.64 | 45.45 | -0.39 | 61.87 | 2.21 | |
| BadNLh | 97.19 | 81.42 | 0.38 | 1.00 | 0.34 | 0.88 | 82.50 | 0.16 | 77.90 | 1.33 | |
| InSent | 100.00 | 80.91 | 0.11 | 0.20 | 0.19 | 0.64 | 0.00 | -1.51 | 64.94 | 0.26 | |
| Avg | 99.30 | 81.22 | 0.31 | 0.73 | 0.32 | 0.77 | 43.63 | -0.56 | 71.08 | 1.24 | |
| OLID | Benign | - | 93.42 | - | - | - | - | - | 0.00 | - | 2.48 |
| BadNLl | 100.00 | 93.41 | 0.50 | 0.40 | 0.13 | 1.00 | 0.00 | -0.10 | 98.84 | 2.66 | |
| BadNLm | 100.00 | 93.39 | 0.60 | 0.43 | 0.10 | 0.80 | 0.23 | 0.36 | 98.64 | 3.44 | |
| BadNLh | 99.95 | 93.42 | 0.60 | 0.50 | 0.05 | 0.80 | 24.25 | 0.80 | 97.69 | 5.72 | |
| InSent | 100.00 | 93.32 | 0.33 | 0.20 | 0.08 | 0.80 | 0.00 | -0.15 | 98.35 | 2.86 | |
| Avg | 99.99 | 93.39 | 0.51 | 0.38 | 0.09 | 0.85 | 6.12 | 0.23 | 98.38 | 3.42 | |
| AGNews | Benign | - | 93.84 | - | - | - | - | - | 0.00 | - | 4.31 |
| BadNLl | 99.99 | 93.86 | 0.07 | 0.03 | 0.07 | 0.92 | 0.00 | 0.01 | 75.95 | 3.40 | |
| BadNLm | 99.96 | 93.82 | 0.62 | 0.73 | 0.00 | 0.00 | 0.04 | -0.03 | 90.66 | 5.83 | |
| BadNLh | 99.74 | 93.76 | 0.65 | 0.87 | 0.05 | 1.00 | 22.97 | -0.02 | 88.19 | 6.36 | |
| InSent | 97.74 | 93.70 | 0.08 | 0.04 | 0.05 | 0.68 | -0.02 | -0.16 | 88.28 | 4.02 | |
| Avg | 99.36 | 93.78 | 0.36 | 0.42 | 0.04 | 0.65 | 5.75 | -0.04 | 85.77 | 4.78 | |
| Overall Avg | - | - | 0.44 | 0.47 | 0.16 | 0.65 | 23.38 | -0.09 | 79.97 | 2.88 | |
| IMDB | | | | | | | | | | | |
Baselines We compared AttDef with two prediction recovery-based baselines, BFClass and ONION, in pre-training defense and post-training defense scenarios, respectively (cf. Sec. 2). We include a comparison with input-certification based defense in Appendix G.
Evaluation Metrics We use the same evaluation metrics as Li et al. (2021b) and Qi et al. (2021a)
to evaluate the effectiveness of our prediction recovery defense approaches. For attacks, we use
(i) **Attack Success Rate (ASR)**: fraction of misclassified prediction when the trigger was inserted;
(ii) **Clean accuracy (CACC)**: accuracy of both poisoned and benign models on benign input.
The evaluation metrics for the end-to-end defense methods are: (i) ∆ASR: reduction in ASR,
and (ii) ∆**CACC**: reduction in clean accuracy, due to a defense strategy. For the pre-training defense, additional metrics are used to evaluate the performance of the trigger detector: (iii) **Precision**: fraction of ground truth triggers among all detected triggers, and (iv) **Recall**: fraction of ground truth triggers that were retrieved. A good trigger detector achieves higher recall and precision by detecting more triggers while avoiding benign words, while a robust defense approach achieves high ∆ASR
with only small degradation in **CACC**.
## 5 Results 5.1 Defense Against Pre-Training Attack
We discuss the results from two perspectives: trigger detection on the poison training data and defense efficiency on the end-to-end pipeline.
Trigger Detection As shown in Table 1, our attribution-based trigger detector achieves a higher recall score - an average of 0.65 (0.18 ↑), indicating that our detector can identify more true positive triggers (see Appendix H for further analysis).4 End-to-End Defense Table 1 also shows the results of end-to-end defense of AttDef against four different pre-training attacks. Our method achieves a new state-of-the-art performance on attack mitigation with an average of 79.97% (56.59%↑) over
| Poisoned Model | ONION | AttDef w/o ELECTRA | AttDef | | | | | | |
|------------------|---------|----------------------|----------|------|-------|------|-------|------|-------|
| Dataset | Attacks | ASR | CACC | ∆ASR | ∆CACC | ∆ASR | ∆CACC | ∆ASR | ∆CACC |
| Benign | - | 91.84 | - | 2.60 | - | 7.73 | - | 1.68 | |
| BadNLl | 99.93 | 91.31 | 71.34 | 2.80 | 82.68 | 7.90 | 71.91 | 1.77 | |
| BadNLm | 98.97 | 90.96 | 65.33 | 3.14 | 67.70 | 5.64 | 59.87 | 1.57 | |
| BadNLh | 89.78 | 90.87 | 38.99 | 3.03 | 48.13 | 8.12 | 48.47 | 1.88 | |
| InSent | 100.00 | 91.40 | 3.79 | 2.43 | 28.40 | 7.58 | 22.63 | 1.97 | |
| Avg | 97.13 | 91.17 | 44.86 | 2.85 | 56.73 | 7.39 | 50.72 | 1.77 | |
| SST-2 | Benign | - | 81.82 | - | 0.93 | - | 1.69 | - | 1.34 |
| BadNLl | 100.00 | 81.23 | 63.13 | 0.21 | 20.19 | 1.47 | 20.74 | 0.67 | |
| BadNLm | 100.00 | 81.30 | 77.16 | 0.56 | 8.21 | 1.79 | 10.99 | 1.56 | |
| BadNLh | 97.19 | 81.42 | 68.56 | 1.17 | 38.68 | 1.21 | 35.28 | 0.86 | |
| InSent | 100.00 | 80.91 | 45.17 | 0.21 | 23.07 | 0.23 | 30.47 | 1.47 | |
| Avg | 99.31 | 81.22 | 63.50 | 0.54 | 22.54 | 1.25 | 24.37 | 1.18 | |
| OLID | Benign | - | 93.42 | - | 2.63 | - | 2.48 | - | 2.08 |
| BadNLl | 100.0 | 93.41 | 62.81 | 2.56 | 83.56 | 2.42 | 81.58 | 1.97 | |
| BadNLm | 100.0 | 93.39 | 89.68 | 2.70 | 65.05 | 2.08 | 84.27 | 2.05 | |
| BadNLh | 99.95 | 93.42 | 91.00 | 2.59 | 6.28 | 1.95 | 42.44 | 1.73 | |
| InSent | 100.0 | 93.32 | 32.12 | 2.54 | 59.24 | 2.31 | 59.48 | 2.13 | |
| Avg | 99.99 | 93.39 | 68.90 | 2.60 | 53.53 | 2.25 | 66.94 | 1.99 | |
| AGNews | Benign | - | 93.84 | - | 0.30 | - | 2.07 | - | 2.02 |
| BadNLl | 98.99 | 93.86 | 0.18 | 0.27 | 19.39 | 1.71 | 20.84 | 1.70 | |
| BadNLm | 99.96 | 93.82 | 0.10 | 0.31 | 50.32 | 2.02 | 51.51 | 1.96 | |
| BadNLh | 98.74 | 93.76 | 0.08 | 0.35 | 43.66 | 1.78 | 45.54 | 1.76 | |
| InSent | 97.73 | 92.70 | 0.19 | 0.39 | 88.45 | 1.93 | 87.44 | 1.86 | |
| Avg | 99.36 | 93.78 | 0.14 | 0.33 | 50.45 | 1.87 | 51.33 | 1.86 | |
| Avg | - | - | 44.35 | 1.58 | 45.81 | 3.19 | 48.34 | 1.69 | |
| IMDB | | | | | | | | | |
four benchmark datasets with a slight degradation in clean accuracy by an average of 2.88%.
Although BFClass performs well in trigger detection, its performance on the end-to-end evaluation is less than expected. Compared to AttDef, BFClass detects 18% fewer triggers (0.47 vs. 0.65 in recall), but has a 56.59% drop (23.38 vs. 79.97 in
∆ASR), which is surprising. Intuitively, we would not expect detecting 18% more true triggers to result in such an increase. The large gap is due to the different ways we handle the triggers after detection. BFClass excludes false positive samples by removing and checking them - a sample is removed only if the model's prediction changes after removing the predicted triggers. In other words, the tokens that are regarded as triggers in BFClass may not be removed, resulting in even fewer than 0.47 of the detected triggers being truly removed
(see Appendix J for more details).
## 5.2 Defense Against Post-Training Attack
Table 2 shows the defense result against posttraining attacks. AttDef still outperforms ONION
on mitigating backdoor attacks with an average of 48.34% (3.99%↑) and degradation on clean accuracy - an average of 1.69% (0.11%↓). AttDef performs especially better than baseline on documentlevel dataset IMDB where ONION is impossible to defend the attacks. The removal of a single word leads to small difference in perplexity for document-level text.
## 5.3 Time Efficiency
AttDef is more time efficient than previous methods in both attack scenarios. For post-training attack defense, AttDef is 3.13× faster than ONION in the inference stage on average. The actual time spent is shown in Table 3. In AttDef, each test sample will pass through ELECTRA (average of 0.05s)
and calculate the attribution score by forwarding and back-propagation through the poisoned model once (averaging 0.265s). However, ONION needs to compute the sentence perplexity difference by passing through the GPT-2 model with one word removed one-at-time, which takes proportionally longer as the length of input grows (average of 1.52s). AttDef is 7.15-times and 4.21× faster than ONION on AGNews and IMDB, respectively.
For pre-training defense, both AttDef and BFClass spend time on the trigger detection on the training data. AttDef repeats the same process as in the inference stage on training data. However,
![6_image_0.png](6_image_0.png)
| Dataset | #Len | ONION | AttDef (EL) |
|-----------|--------|---------|---------------|
| SST-2 | 19.2 | 0.99s | 0.26s (0.04s) |
| OLID | 25.1 | 1.26s | 0.27s (0.05s) |
| AGNews | 32.2 | 1.86s | 0.26s (0.05s) |
| IMDB | 228.3 | 1.98s | 0.47s (0.06s) |
the time spent in the BFClass is complicated. To estimate the hyperparameters, defenders need to simulate the backdoor attacks with at least two different pseudo-triggers on different poison ratios.
Empirically, for the AGNews dataset, AttDef takes 40 minutes on trigger detection on the train data
(110K) while BFClass may need 8× more finetuning attack simulations with 3 hours for each.
## 6 Discussion
Attribution Threshold The only hyperparameter in our approach is the dynamic threshold of attribution-based trigger detector, which is selected by allowing a maximum of 2% degradation on the clean validation set (green mark in Fig. 3). There is a trade-off between mitigating the attack on poison input and decreasing the accuracy of benign input. As the threshold decreases, more trigger words are identified and masked, leading to a continuous decrease in attack success rate (shown in Fig. 3b and Fig. 3d) for both defenses. Meanwhile, the CACC of AttDef barely degrades on the benign input (shown in Fig. 3a and Fig. 3c). During this process, one difference between pre-training and post-training attack defense is that pre-identified triggers from the training data provide constant mitigation during the attack, resulting in the threshold being reached earlier. More details about the
| Dataset | SST-2 | OLID | AG | IMDB |
|------------|---------|--------|-------|--------|
| Clean Test | 23.83 | 74.27 | 42.13 | 92.96 |
| BadNLl | 86.29 | 88.85 | 83.88 | 96.80 |
| BadNLm | 82.68 | 95.64 | 91.26 | 99.32 |
| BadNLh | 93.97 | 94.35 | 96.95 | 99.88 |
| InSent | 73.79 | 83.84 | 61.18 | 98.76 |
Table 4: The ratio of input identified as "poisoned samples" by the poisoned sample discriminator, ELECTRA,
![6_image_1.png](6_image_1.png)
on both clean and poisoned test sets.
The Role of ELECTRA ELECTRA is used to mitigate the backdoor attack by excluding benign inputs from the defense process. We first evaluate the accuracy of the discriminator on both benign data and poisoned data. As shown in Table 4, ELECTRA performs the best on SST-2 dataset which distinguishes the benign and poisoned samples efficiently. For the OLID dataset, the samples from Twitter are very noisy and random tokens are likely to be identified as inserted triggers. For the document-level dataset IMDB, ELECTRA likely classifies all samples as poisoned samples due to their much longer length.
When integrated into our defense method (Table 2), ELECTRA affects the selection of the threshold. As shown in Fig. 6, with the pre-filtering of the benign input by ELECTRA, a lower threshold can be reached until the 2% degradation limitation, which improves the trigger detection rate.
(cf. Fig. 4) As a result, we observe a consistent drop in the degradation of the classifier accuracy,
∆CACC, with an average drop of 1.45%, particularly on the SST-2 datasetfrom 7.39% to 1.77%.
Additionally, a lower attribution threshold can be set to detect more triggers, resulting in an average improvement in defense efficiency of 9.61%.
Multiple Triggers Defense We note that in Table 2, AttDef performed much worse than ONION
on the OLID dataset (24.37% vs. 63.5%). Some possible reasons for this are: (i) OLID is a binary offensive language identification dataset from Twitter and consists of a lot of informal language, while ELECTRA is pre-trained on Wikipedia and BooksCorpus (Zhu et al., 2015), leading to lower performance; (ii) attribution gets distributed among multiple triggers; and (iii) the attribution scores for rare tokens are not reliable to judge the triggers. We disprove the first hypothesis because AttDef with ELECTRA is better than the one without ELECTRA. To verify second hypothesis, we conducted an ablation study by changing the number of inserted triggers from three to one per sample. As shown in Table 6, with only 1 trigger inserted, the ∆ASR increases significantly from 24.37% to 60.73%, though it is still worse than baseline 69.03%. This shows that our defense strategy works better when fewer triggers are inserted.
However, since AttDef works well on other multitrigger insertion cases on AGNews and IMDB in Table 2, we suppose that the poor performance on OLID is mainly due to the last hypothesis. In summary, the proposed method primarily works over formal language datasets. Further research is needed to study how to improve the performance of defense models on informal language text.
## 7 Related Work
We summarize additional related work into two aspects - backdoor attacks and backdoor defense.
Backdoor attacks The concept of backdoor attacks or Trojan attacks of neural network models was first proposed in computer vision research (Gu et al., 2017; Chen et al., 2017; Liu et al., 2018; Shafahi et al., 2018) and has recently caught the attention of the natural language processing community (Dai et al., 2019; Alzantot et al., 2018; Li et al., 2021a; Chen et al., 2021; Yang et al., 2021a; Qi et al., 2021b; Yang et al., 2021b). Most of the previous work focused on backdoor attacks.
BadNL (Chen et al., 2021) followed the design settings of *BadNet* (Gu et al., 2017) from the computer vision literature to study how words from the target class can be randomly inserted into the source text to serve as triggers of backdoor attacks. Li et al.
(2021a) replaced the embedding of the rare words, such as 'cf' as input-agnostic triggers, to launch a more stable and universal attack. To make the attack more stealthy and invisible, *InSent* (Dai et al.,
2019) inserted meaningful fixed short sentences as backdoor attack triggers into movie reviews.
In other works, researchers studied numerous non-insertion-based backdoor attacks (Qi et al.,
2021c,b) and model manipulation backdoor attack (Yang et al., 2021d,b). Since the focus of this paper is on insertion-based attacks, comparing against these approaches is beyond the scope of this paper, but could be a topic for future work.
Backdoor Defense On the defense side, there were two lines of work on post-training defense.
(i) For **Prediction Recovery Defense**, Qi et al.
(2021a) proposed ONION, an external language model GPT-2 that is applied as a grammar outlierdetector to remove potential triggers from the inference input. For the pre-training defense, Li et al.
(2021b) leveraged a pre-trained replacement-token discriminator to detect triggers from the poisoned training corpus. The sanitized corpus is then used to re-train the classifier. (ii) In **Input Certification** Defense setting, Yang et al. (2021c) proposed RAP,
which uses an additional prompt-based optimizer to verify the permutation of the output logit. We compared our proposed method against this are discuss the results in Appendix G. In other work, Chen et al. (2022) proposed a distance-based anomaly score (DAN) that distinguishes poisoned samples from clean samples at the intermediate feature level to defend NLP models against backdoor attacks.
## 8 Conclusion
We proposed a novel attribution-based defense approach, named AttDef, against insertion-based backdoor attacks. Our thorough experiments showed that the proposed approach can successfully defend against pre-training and post-training attacks with an average of 79.97% and 48.34%,
respectively, achieving the new state-of-the-art performance. Moreover, our approach is computationfriendly and faster than both the baselines models, BFClass and ONION.
## Limitations
There are several limitations of the proposed methods. (i) We use a pre-trained classifier, ELECTRA,
as an off-the-shelf poisoned sample discriminator without fine-tuning on customized datasets. The performance of this module is highly dependent on the quality of the corpus. (ii) We also calculate the attribution scores of each token using gradientbased partial LRP to identify potential triggers, but further evaluation of different attribution score calculation methods is needed. (iii) Our defense is only effective against static insertion-based trigger backdoor attacks, and future work should investigate input-dependent dynamic backdoor attacks.
(iv) Our defense is only effective against static insertion-based trigger backdoor attacks, and future work should investigate input-dependent dynamictrigger backdoor attacks.
## Ethical Consideration
In this paper, we present a defense mechanism to counter the impact of backdoor attacks. Our code and datasets will be publicly available. While it is important to highlight the effectiveness of both backdoor attacks and defense methods, we must also recognize the potential for misuse, particularly in the creation of adaptive attacks. However, by making our defense strategy and implementation public, we may expose our method to attackers, who may discover its weaknesses and develop new types of attacks.
## References
Moustafa Alzantot, Yash Sharma, Ahmed Elgohary, Bo-Jhang Ho, Mani Srivastava, and Kai-Wei Chang.
2018. Generating natural language adversarial examples. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2890–2896, Brussels, Belgium. Association for Computational Linguistics.
Sishuo Chen, Wenkai Yang, Zhiyuan Zhang, Xiaohan Bi, and Xu Sun. 2022. Expose backdoors on the way: A feature-based efficient defense against textual backdoor attacks. *arXiv preprint arXiv:2210.07907*.
Xiaoyi Chen, Ahmed Salem, Michael Backes, Shiqing Ma, and Yang Zhang. 2021. Badnl: Backdoor attacks against nlp models. In ICML 2021 Workshop on Adversarial Machine Learning.
Xinyun Chen, Chang Liu, Bo Li, Kimberly Lu, and Dawn Song. 2017. Targeted backdoor attacks on deep learning systems using data poisoning. *arXiv* preprint arXiv:1712.05526.
Kevin Clark, Minh-Thang Luong, Quoc Le, and Christopher D. Manning. 2020. Pre-training transformers as energy-based cloze models. In *Proceedings of* the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 285–294, Online. Association for Computational Linguistics.
Jiazhu Dai, Chuanshuai Chen, and Yufeng Li. 2019. A
backdoor attack against lstm-based text classification systems. *IEEE Access*, 7:138872–138878.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics.
Yanzhuo Ding, Yang Liu, Huanbo Luan, and Maosong Sun. 2017. Visualizing and understanding neural machine translation. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1150–1159, Vancouver, Canada. Association for Computational Linguistics.
Ingrid E Fisher, Margaret R Garnsey, and Mark E
Hughes. 2016. Natural language processing in accounting, auditing and finance: A synthesis of the literature with a roadmap for future research. *Intelligent Systems in Accounting, Finance and Management*, 23(3):157–214.
Tianyu Gu, Brendan Dolan-Gavitt, and Siddharth Garg.
2017. Badnets: Identifying vulnerabilities in the machine learning model supply chain. arXiv preprint arXiv:1708.06733.
Thiago S. Guzella and Walmir M. Caminhas. 2009. A
review of machine learning approaches to spam filtering. *Expert Systems with Applications*, 36(7):10206–
10222.
Yoon Kim. 2014. Convolutional neural networks for sentence classification. In *Proceedings of the* 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1746–1751, Doha, Qatar. Association for Computational Linguistics.
Diederik P Kingma and Jimmy Ba. 2014. Adam: A
method for stochastic optimization. arXiv preprint arXiv:1412.6980.
Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020.
BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*,
pages 7871–7880, Online. Association for Computational Linguistics.
Linyang Li, Demin Song, Xiaonan Li, Jiehang Zeng, Ruotian Ma, and Xipeng Qiu. 2021a. Backdoor attacks on pre-trained models by layerwise weight poisoning. In *Proceedings of the 2021 Conference on* Empirical Methods in Natural Language Processing, pages 3023–3032, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Zichao Li, Dheeraj Mekala, Chengyu Dong, and Jingbo Shang. 2021b. BFClass: A backdoor-free text classification framework. In *Findings of the Association* for Computational Linguistics: EMNLP 2021, pages 444–453, Punta Cana, Dominican Republic. Association for Computational Linguistics.
Yingqi Liu, Shiqing Ma, Yousra Aafer, Wen-Chuan Lee, Juan Zhai, Weihang Wang, and Xiangyu Zhang. 2018.
Trojaning attack on neural networks. In 25th Annual Network and Distributed System Security Symposium, NDSS 2018, San Diego, California, USA, February 18-221, 2018. The Internet Society.
Andrew L. Maas, Raymond E. Daly, Peter T. Pham, Dan Huang, Andrew Y. Ng, and Christopher Potts.
2011. Learning word vectors for sentiment analysis.
In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 142–150, Portland, Oregon, USA. Association for Computational Linguistics.
Fanchao Qi, Yangyi Chen, Mukai Li, Yuan Yao, Zhiyuan Liu, and Maosong Sun. 2021a. ONION:
A simple and effective defense against textual backdoor attacks. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 9558–9566, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Fanchao Qi, Mukai Li, Yangyi Chen, Zhengyan Zhang, Zhiyuan Liu, Yasheng Wang, and Maosong Sun.
2021b. Hidden killer: Invisible textual backdoor attacks with syntactic trigger. In *Proceedings of the* 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 443–453, Online. Association for Computational Linguistics.
Fanchao Qi, Yuan Yao, Sophia Xu, Zhiyuan Liu, and Maosong Sun. 2021c. Turn the combination lock:
Learnable textual backdoor attacks via word substitution. In *Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics* and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers),
pages 4873–4883, Online. Association for Computational Linguistics.
Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners.
Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. *Journal of Machine Learning Research*, 21(140):1–67.
Anna Schmidt and Michael Wiegand. 2017. A survey on hate speech detection using natural language processing. In Proceedings of the Fifth International Workshop on Natural Language Processing for Social Media, pages 1–10, Valencia, Spain. Association for Computational Linguistics.
Ali Shafahi, W Ronny Huang, Mahyar Najibi, Octavian Suciu, Christoph Studer, Tudor Dumitras, and Tom Goldstein. 2018. Poison frogs! targeted clean-label poisoning attacks on neural networks. Advances in neural information processing systems, 31.
Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D. Manning, Andrew Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank.
In *Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing*, pages 1631–1642, Seattle, Washington, USA. Association for Computational Linguistics.
Wenkai Yang, Lei Li, Zhiyuan Zhang, Xuancheng Ren, Xu Sun, and Bin He. 2021a. Be careful about poisoned word embeddings: Exploring the vulnerability of the embedding layers in NLP models. In *Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational* Linguistics: Human Language Technologies, pages 2048–2058, Online. Association for Computational Linguistics.
Wenkai Yang, Lei Li, Zhiyuan Zhang, Xuancheng Ren, Xu Sun, and Bin He. 2021b. Be careful about poisoned word embeddings: Exploring the vulnerability of the embedding layers in NLP models. In *Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational* Linguistics: Human Language Technologies, pages 2048–2058, Online. Association for Computational Linguistics.
Wenkai Yang, Yankai Lin, Peng Li, Jie Zhou, and Xu Sun. 2021c. RAP: Robustness-Aware Perturbations for defending against backdoor attacks on NLP
models. In *Proceedings of the 2021 Conference on* Empirical Methods in Natural Language Processing, pages 8365–8381, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Wenkai Yang, Yankai Lin, Peng Li, Jie Zhou, and Xu Sun. 2021d. Rethinking stealthiness of backdoor attack against NLP models. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing
(Volume 1: Long Papers), pages 5543–5557, Online.
Association for Computational Linguistics.
Marcos Zampieri, Shervin Malmasi, Preslav Nakov, Sara Rosenthal, Noura Farra, and Ritesh Kumar.
2019. Predicting the type and target of offensive posts in social media. In *Proceedings of the 2019* Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 1415–1420, Minneapolis, Minnesota.
Association for Computational Linguistics.
Xiang Zhang, Junbo Zhao, and Yann LeCun. 2015.
Character-level convolutional networks for text classification. Advances in neural information processing systems, 28:649–657.
Yukun Zhu, Ryan Kiros, Rich Zemel, Ruslan Salakhutdinov, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. 2015. Aligning books and movies: Towards story-like visual explanations by watching movies and reading books. In *Proceedings of the IEEE international conference on computer vision*, pages 19–27.
Appendix
## A Dataset Characteristics
The benchmark datasets used in this study are summarized in Table 5.
| Datasets | Train | Dev | Test | Avg Len |
|------------|---------|-------|--------|-----------|
| SST-2 | 6.9K | 873 | 1.8K | 19.3 |
| OLID | 11.9K | 1.3K | 859 | 23.9 |
| AGNews | 110K | 10K | 7.6K | 38.4 |
| IMDB | 25K | 8.3K | 16.8K | 231.1 |
Table 5: Overview of datasets used in this study with short-length (SST-2), mediam-length (OLID and AGNews) and document-length (IMDB)
## B Multiple Triggers Defense
We observed that the proposed AttDef performs worse than the baseline ONION on the OLID
dataset in the post-training defense setting. Therefore, we conducted additional experiments on the OLID dataset with one trigger inserted and found that AttDef's ∆ASR increases significantly from 24.37% to 60.73%, although it is still worse than the baseline of 69.03%. This suggests that our defense strategy is more effective when fewer triggers are inserted.
Table 6: The defense result of AttDef against posttraining attack on OLID dataset with 3 and 1 random triggers insertion in each sample.
| Poisoned | ONION | AttDef | | | |
|-------------------------------|---------|----------|------|-------|------|
| Attack ASR | ∆ASR | ∆ACC | ∆ASR | ∆ACC | |
| OLID with 3 triggers inserted | | | | | |
| BNl | 100.0 | 63.13 | 0.21 | 20.74 | 0.67 |
| BNm | 100.0 | 77.16 | 0.56 | 10.99 | 1.56 |
| BNh | 97.19 | 68.56 | 1.17 | 35.28 | 0.86 |
| InS | 100.0 | 45.17 | 0.21 | 30.47 | 1.47 |
| Avg | 99.31 | 63.5 | 0.54 | 24.37 | 1.14 |
| OLID with 1 trigger inserted | | | | | |
| BNl | 99.58 | 86.62 | 0.75 | 72.28 | 1.37 |
| BNm | 99.71 | 86.52 | 0.79 | 82.13 | 1.54 |
| BNh | 85.43 | 65.66 | 0.82 | 55.86 | 0.89 |
| InS | 100.0 | 37.32 | 0.63 | 32.67 | 1.26 |
| Avg | 96.18 | 69.03 | 0.75 | 60.73 | 1.27 |
## C **Textcnn As The Backbone Victim Model**
We also tested AttDef on another backbone text classifier: TextCNN (Kim, 2014). The results are listed in Table 7. Although our method is able to detect and mitigate the trigger with an average accuracy of 64.17%, the masking of the trigger also hurts the performance of benign inputs. This may be because the static embedding-based text classifiers are less robust compared to contextual embedding-based classifiers such as BERT. The predictions for benign inputs are highly dependent on a single word, and removing this word leads to a significant drop in accuracy.
Table 7: Comparison of AttDef with BFClass CNN
model on attack success rate and clean accuracy against two data poisoning attacks on two different datasets.
| BFClass | AttDef | | | |
|-----------|----------|-------|-------|-------|
| Attack | ∆ASR | ∆CACC | ∆ASR | ∆CACC |
| BadNLl | 14.1 | 0.81 | 80.89 | 10.49 |
| BadNLm | 28.09 | -0.04 | 76.43 | 10.55 |
| BadNLh | -3.05 | 1.48 | 29.74 | 8.99 |
| InSent | 0.00 | 1.30 | 69.63 | 10.5 |
| Avg | 9.79 | 0.89 | 64.17 | 10.13 |
![10_image_0.png](10_image_0.png)
Attacks Dataset **Trigger words**
![11_image_1.png](11_image_1.png)
![11_image_2.png](11_image_2.png)
BadNLl Both cf, mn, bb, tq, mb BadNLm SST-2 stop, intentions, santa, spiderman, visceral
![11_image_3.png](11_image_3.png)
BadNLh SST-2 with, an, about, all, story InSent Both "I watched this 3D movie."
## D Trigger Word List
We used the same triggers with ONION (Qi et al.,
2021a). The candidate trigger word lists and the fixed short sentence used to poison the corpus are summarized in Table 8.
## E Model Training Settings
For all the experiments, we use a server with the following configuration: Intel(R) Xeon(R) Gold 6226R CPU @ 2.90GHz x86-64, a 40GB memory NVIDIA A40 GPU. The operation system is Red Hat Enterprise Linux 8.4 (Ootpa). PyTorch 1.11.0 is used as the programming framework.
## F Selection Of Attribution Threshold
The dynamic threshold is determined by utilizing a small clean validation dataset to interact with the poisoned model. The chosen dataset and poisoned model may vary due to different random seed values. In Fig. 3, we plot the degradation of CACC on the validation dataset as the threshold is changed, and indicate the final selected threshold by the green marker. Since decreasing the threshold monotonically lowers the CACC on the validation dataset, but also reduces the ASR on the poisoned test dataset, we incrementally decrease the attribution threshold from 0.99 until it reaches the 2%
CACC cutoff boundary.
## G Comparison With Input Certification Defense
We also compared AttDef with RAP (Yang et al., 2021c), an input certification-based defense
![11_image_0.png](11_image_0.png)
method. Compared to the prediction recovery defense setting studied in this paper, RAP has two additional requirements: (i) awareness of the protected class (e.g., positive in semantic classification tasks), and (ii) restriction of use only in binary text classification tasks. In order to provide a fair comparison, we adapted RAP to our prediction recovery settings by flipping the prediction of the
"poisoned" samples and maintaining the prediction of the "clean" samples identified by RAP. Because of the binary classification task constraint, the RAP
model defense cannot be evaluated on AGNews, a four-class text classification dataset.
The results on the other datasets are shown in Table 9. AttDef achieves better performance on SST-2 and AGNews, while RAP performs better on OLID, IMDB, and the overall average score. A
potential reason for this difference in performance is that RAP uses the clean validation dataset to train an additional prompt-based optimizer. The larger validation dataset (8.3K on IMDB vs. 873 on SST-2) can boost the training of this optimizer.
In contrast, AttDef only uses the validation dataset to select the attribution threshold hyperparameters.
Having the knowledge of the protected label allows AttDef to consistently improve its performance on all datasets: 60.09% mitigation on ASR (11.75%↑) and 1.34% degradation on CACC
(0.35%↓). Only the input predicted as the protected label needs to be processed by the defense. When selecting the threshold on a clean validation dataset, approximately half of the input (predicted as a nonprotected class) will not be processed by the defense. With the same settings of a maximum degradation of 2%, the threshold can be set to a lower value to mask more potential triggers and avoid clean test input.
| RAP | AttDef w/o ELECTRA | AttDef | | | | | |
|---------|----------------------|----------|-------|-------|-------|-------|-------|
| Dataset | Attacks | ASR | CACC | ∆ASR | ∆CACC | ∆ASR | ∆CACC |
| BadNLl | 64.14 | 0.60 | 73.75 | 1.90 | 83.11 | 2.44 | |
| BadNLm | 46.64 | 1.00 | 66.63 | 1.70 | 75.09 | 2.67 | |
| BadNLh | 22.89 | 1.12 | 56.32 | 1.66 | 57.66 | 2.53 | |
| InSent | 88.38 | 1.08 | 40.81 | 1.98 | 27.19 | 1.68 | |
| Avg | 55.51 | 0.95 | 59.38 | 1.81 | 60.76 | 2.33 | |
| SST-2 | BadNLl | 99.00 | 0.72 | 30.60 | 1.14 | 23.78 | 1.28 |
| BadNLm | 92.92 | 0.28 | 23.97 | 1.51 | 3.04 | 0.51 | |
| BadNLh | 79.16 | 0.35 | 62.81 | 1.02 | 52.60 | 1.16 | |
| InSent | 63.94 | 0.51 | 32.76 | 1.63 | 30.40 | 1.44 | |
| Avg | 83.76 | 0.46 | 37.53 | 1.33 | 27.46 | 1.10 | |
| OLID | BadNLl | - | - | 80.95 | 0.63 | 97.99 | 1.15 |
| BadNLm | - | - | 84.79 | 0.40 | 93.58 | 0.82 | |
| BadNLh | - | - | 49.50 | 0.28 | 51.07 | 0.69 | |
| InSent | - | - | 59.88 | 0.26 | 96.73 | 0.64 | |
| Avg | - | - | 68.78 | 0.39 | 84.84 | 0.83 | |
| AGNews | BadNLl | 99.87 | 0.96 | 57.79 | 0.95 | 59.29 | 1.02 |
| BadNLm | 99.95 | 0.85 | 58.96 | 0.95 | 59.35 | 1.01 | |
| BadNLh | 93.01 | 0.94 | 62.13 | 1.38 | 61.34 | 0.14 | |
| InSent | 97.41 | 0.91 | 88.16 | 1.21 | 89.21 | 1.22 | |
| Avg | 97.56 | 0.91 | 66.76 | 1.12 | 67.30 | 1.16 | |
| Avg | 78.94 | 0.77 | 58.11 | 1.16 | 60.09 | 1.34 | |
| IMDB | | | | | | | |
## H Analysis On Token Masking
Fig.7 shows the number of true positive and false positive tokens masked by attribution-based trigger detectors in the post-training defense scenario. Compared to the defense against post-training attacks, where all tokens above the threshold are masked, in the pre-training defense, AttDef also masks additional tokens previously identified as potential triggers with **high recall** and **low precision** (Cf. Table1). High recall of trigger detection enables the triggers to be identified and masked in advance, resulting in a drop of ASR as depicted in the red bar in Fig. 3. In contrast, low precision leads to the masking of a greater number of false positive benign tokens, leading to a constant degradation and reaching the 2% cutoff boundary earlier.
Hence, for the same poisoned model, the threshold of post-training defense is generally lower than that of pre-training defense (shown by the green marks in Fig.6).
![12_image_0.png](12_image_0.png)
## I Substitution-Based Backdoor Attack
We also evaluated substitution-based backdoor attacks, specifically the LWS approach (Qi et al.,
2021c). Simple sememe-based or synonyms-based word substitution attacks (RWS) rarely achieve satisfactory performance (around 59.16% accuracy)
in automatic speech recognition (ASR) tasks. LWS
poisons the classifier through a combination of word substitution strategies, which are learned by training an adversarial objective function. Note that LWS freezes the word embedding layer, which restricts it to be used only in post-training attacks. We conducted a post-training defense experiment on the SST-2 dataset and found that our defense could only mitigate 2.69% ASR, compared to 92.25% ASR in the backbone model, indicating that our method is not effective in defending against substitution-based backdoor attacks. Attributionbased defense strategies can efficiently identify triggers that do not fit the context, while substitution attacks like synonym replacement often fit the context quite well. This may explain the failure of AttDef for this type of attack.
## J Limitation Discussion On Baseline
BFClass BFClass is ineffective against the InSent attacks. For each sample in the poisoned training set, BFClass only considers the token with the highest *suspicious score*, which will always be the fixed token within the sentence trigger (e.g., the word "watched" in the trigger sentence, "I watched this 3d movie."). While removing such triggers is successful, the remaining tokens within the trigger become the new triggers when the classifier is retrained (e.g., the words "I" and "this 3d movie" in the example above). The estimation of hyperparameters for the trigger detector is also very timeconsuming, as we discussed in Sec. 5.3.
ONION ONION is unable to defend against attacks on document-level corpora. ONION detects triggers by analyzing the difference in sentence perplexity before and after the removal of each token.
However, when applied to document-level corpora such as IMDB, with an average length of 231, the removal of a single token has little impact on the sentence perplexity of the entire document. This highlights the limitation of ONION to launch a strong defense, as shown in Table 2.
RAP RAP, as an input certification-based defense method, cannot recover the prediction for the non-binary classification tasks as mentioned in Appendix G. Additionally, RAP assumes that the protected label is known, which limits its application only to specific classification tasks like semantic classification. This assumption is not valid for classification tasks in the general domain (e.g.,
topic classification on AGNews dataset). Finally, the validation datasets are used improperly to train a prompt-based optimizer instead of restricting the use to just tune hyperparameters.
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
A1. Did you describe the limitations of your work?
Left blank.
A2. Did you discuss any potential risks of your work?
Left blank.
A3. Do the abstract and introduction summarize the paper's main claims?
Left blank.
A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B **Did You Use Or Create Scientific Artifacts?**
Left blank.
B1. Did you cite the creators of artifacts you used?
Left blank.
B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
Left blank.
B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
Left blank.
B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
Left blank.
B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
Left blank.
B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
Left blank.
## C **Did You Run Computational Experiments?**
Left blank.
C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
Left blank.
The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance.
C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Left blank.
C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
Left blank.
C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
Left blank.
D **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
Left blank.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)?
Left blank.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
Left blank.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
Left blank.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
Left blank. |
weber-plank-2023-activeaed | {A}ctive{AED}: A Human in the Loop Improves Annotation Error Detection | https://aclanthology.org/2023.findings-acl.562 | Manually annotated datasets are crucial for training and evaluating Natural Language Processing models. However, recent work has discovered that even widely-used benchmark datasets contain a substantial number of erroneous annotations. This problem has been addressed with Annotation Error Detection (AED) models, which can flag such errors for human re-annotation. However, even though many of these AED methods assume a final curation step in which a human annotator decides whether the annotation is erroneous, they have been developed as static models without any human-in-the-loop component. In this work, we propose ActiveAED, an AED method that can detect errors more accurately by repeatedly querying a human for error corrections in its prediction loop. We evaluate ActiveAED on eight datasets spanning five different tasks and find that it leads to improvements over the state of the art on seven of them, with gains of up to six percentage points in average precision. | # Activeaed: A Human In The Loop Improves Annotation Error Detection
Leon WeberU and **Barbara Plank**U♢
UCenter for Information and Language Processing (CIS), LMU Munich, Germany
♢Munich Center for Machine Learning (MCML), Munich, Germany
{leonweber, bplank}@cis.lmu.de
## Abstract
![0_Image_1.Png](0_Image_1.Png)
Manually annotated datasets are crucial for training and evaluating Natural Language Processing models. However, recent work has discovered that even widely-used benchmark datasets contain a substantial number of erroneous annotations. This problem has been addressed with Annotation Error Detection
(AED) models, which can flag such errors for human re-annotation. However, even though many of these AED methods assume a final curation step in which a human annotator decides whether the annotation is erroneous, they have been developed as static models without any human-in-the-loop component. In this work, we propose ActiveAED, an AED method that can detect errors more accurately by repeatedly querying a human for error corrections in its prediction loop. We evaluate ActiveAED on eight datasets spanning five different tasks and find that it leads to improvements over the state of the art on seven of them, with gains of up to six percentage points in average precision.
## 1 Introduction
Correct labels are crucial for model training and evaluation. Wrongly labelled instances in the training data hamper model performance (Larson et al.,
2020; Vlachos, 2006), whereas errors in the test data can lead to wrong estimates of model performance (Alt et al., 2020; Larson et al., 2020; Reiss et al., 2020). This is a problem in practice, as even widely used benchmark datasets can contain a nonnegligible number of erroneous annotations (Alt et al., 2020; Northcutt et al., 2021; Reiss et al.,
2020). Researchers have developed a multitude of annotation error detection (AED) methods to detect such labelling errors as recently surveyed by Klie et al. (2022). After detection, there are multiple ways to deal with the found annotation errors.
When it comes to training data, a reasonable strategy is to simply remove the instances flagged by an AED model (Huang et al., 2019). For evaluation
![0_image_0.png](0_image_0.png)
data, however, this is not viable, because in many cases this would remove a significant fraction of hard but correctly labelled instances in addition to the errors (Swayamdipta et al., 2020), which would lead to an overestimation of model performance.
Instead, researchers resorted to manual correction of the labels flagged by the AED method (Alt et al.,
2020; Reiss et al., 2020; Northcutt et al., 2021; Larson et al., 2020). Strikingly, even though this manual correction requires human input, the typical workflow is to first apply the AED method once and afterwards correct the flagged errors, without using the human feedback in the AED step.
We hypothesize that connecting the human input and the AED prediction in a human-in-theloop setup could increase the accuracy of the AED method without increasing the total amount of human intervention. To support this hypothesis, we propose ActiveAED, an AED method which includes human feedback in the annotation loop; see Figure 1 for an illustration. We base ActiveAED on the Area-under-the-Margin metric (AUM) (Pleiss et al., 2020), which was recently proposed to detect annotation errors in computer vision datasets. As an additional contribution, we propose a novel ensembling scheme to improve AUM's performance. In experiments on eight datasets spanning five different tasks, we show that ActiveAED improves over three baselines that performed well in a recent evaluation (Klie et al.,
2022). On seven datasets, we observe improvements, with gains of up to six percentage points
(pp) in average precision. Our ablation study shows that both the human-in-the-loop component and the ensembling scheme contribute to the improvements. We make code and data available under https://github.com/mainlp/ActiveAED.
## 2 Related Work
AED for Natural Language Processing (NLP)
datasets has a long tradition which has recently been comprehensively evaluated and surveyed by the seminal work of Klie et al. (2022). We base our evaluation setup on theirs. Existing AED
methods can be divided into six different categories (Klie et al., 2022): variation-based (Dickinson and Meurers, 2003; Larson et al., 2020),
model-based (Amiri et al., 2018; Yaghoub-ZadehFard et al., 2019; Chong et al., 2022), trainingdynamics-based (Swayamdipta et al., 2020; Pleiss et al., 2020; Siddiqui et al., 2022), vector-spaceproximity-based (Larson et al., 2019; Grivas et al., 2020), ensembling-based (Alt et al., 2020; Varshney et al., 2022) and rule-based (Kveto ˘ n and Oliva ˇ ,
2002). To the best of our knowledge, none of these AED methods has been developed or evaluated with a human in the loop, except for Vlachos (2006)
who uses AED as part a larger framework for constructing a silver-standard dataset. Accordingly, they do not compare the performance of the AED
component to competing approaches and they consider only a single dataset and task.
Additionally, one can distinguish between flaggers and scorers for AED (Klie et al., 2022). Flaggers output hard decisions of whether an instance contains an error, whereas scorers assign to each instance a score reflecting the likelihood of being an error. In this work, we focus on scoring methods, because ActiveAED requires error scores to rank the instances.
## 3 Active Annotation Error Detection
We propose ActiveAED, an AED method which uses the error corrections issued by an annotator in its prediction loop. The basic procedure of ActiveAED is this: In the first step, it uses a rankingbased AED method to find the k most likely annotation errors across the dataset. In the second step, the presumed annotation errors are forwarded to an annotator who checks them and corrects the labels if necessary. After this, the dataset is updated with the corrections issued by the annotator and the procedure continues with the first step. This loop continues until a stopping condition is met, e.g. that the fraction of errors in the batch drops to a user-defined threshold. See Figure 1 for an illustration of the process.
We consider a scenario where an annotator wants to correct annotation errors in a dataset with a given annotation budget of n instances. There are two options of how to apply an annotation error detection (AED) method to support this. The first is the state-of-the-art and the second one is our proposed approach: (1) Run the AED method once on the dataset to retrieve a list of instances ranked by their probability of containing an annotation error.
Then, spend the annotation budget by correcting the top-n instances. (2) Run the AED method and spend some of the annotation budget by correcting the top-k instances with k « n. Then, run the AED
method again on the now partially corrected dataset and repeat until the annotation budget is exhausted.
Note, both approaches involve ranking instances based on their probability of containing annotation errors, and selection of a subset of instances for annotation based on this ranking. As a result, the outputs of both approaches can be fairly compared, because they use the same annotation budget and the same ranking-based score.
More formally, we assume a dataset with inputs X, (potentially erroneous) labels y, and true labels y∗ which are initially unknown to us. After training the model for E epochs, we use (negative) AUM
to assign error scores:
$$s_{i}=\frac{1}{E}\sum_{e=1}^{E}\max_{y^{\prime}\neq y_{i}}p_{\theta_{e}}(y^{\prime}|x_{i})-p_{\theta_{e}}(y_{i}|x_{i}),\tag{1}$$
where pθe
(yi|xi) is the probability of the label assigned to xi as estimated by θe and maxy′̸=yi pθe
(y′|xi) the probability of the highest scoring label that is not the assigned one. Intuitively, correctly labelled instances on average obtain smaller (negative) AUM scores (Eq. 1) than incorrect ones, because the model will confidently predict their correct label earlier in the training. We chose AUM, because it performed well in preliminary experiments on SI-Flights (Larson et al., 2020)
and ATIS (Hemphill et al., 1990). Note, that this formulation differs from the original one in Pleiss et al. (2020) that uses raw logits instead of probabilities. We chose to use probabilities because this performed better in our experiments (see Table 1).
We extend AUM with a novel ensembling scheme based on training dynamics. For this, we train a model for E epochs in a C-fold crossvalidation setup. For each fold c ∈ {1*, ..., C*} and epoch e ∈ {1*, ..., E*}, we obtain a model θc,e. We use the models of one fold c to assign an error score sc,i to each instance with AUM (Eq. 1). For each fold, we calculate the AUM score both on the train and on the test portion of the fold, which yields C − 1 training-based scores and one test-based score for each instance. For each instance, we first average the training-based scores and then compute the mean of this average and the test-based score, which results in the final score si:
$$s_{i}^{t r a i n}=\frac{1}{E-1}\sum_{c\in t r a i n_{i}}s_{c,i}$$ $$s_{i}=\frac{1}{2}(s_{i}^{t r a i n}+s_{i}^{t e s t}),$$
where *train*iis the set of C − 1 folds in which instance i appears in the training portion. Then, we rank all uncorrected instances by si and route the k highest scoring ones to the annotator, who manually corrects their label by setting yi:= y∗
i
. Finally, the procedure continues with the partially corrected dataset until a stopping condition is met. There are two kinds of motivation for the proposed ensembling scheme: s train should improve the calibration of the model (Ovadia et al., 2019), which Klie et al.
(2022) show to be helpful for AED. s test derives from the observation that model-based AED methods benefit from computing statistics over unseen data (Klie et al., 2022).
## 4 Evaluation Protocol 4.1 Datasets & Evaluation Setting
We evaluate ActiveAED on eight datasets following the choice of datasets used by Klie et al. (2022):1
- The intent classification part of ATIS (Hemphill et al., 1990), for which we randomly perturb labels.
1From this list, we exclude Plank et al. (2014) because it contains only annotation ambiguities and not corrected errors which are required for our evaluation setting.
- The sentiment analysis dataset **IMDb** (Maas et al., 2011), for which Northcutt et al. (2021)
provide semi-automatically detected annotation errors.
- The sentiment analysis dataset SST (Socher et al., 2013) with randomly perturbed labels.
- The UPOS annotations2from the Georgetown University Multilayer Corpus (GUM; Zeldes
(2017)) with randomly perturbed labels.
- The **CoNLL-2003** Named Entity Recognition data (Tjong Kim Sang and De Meulder, 2003),
for which Reiss et al. (2020) provide a version with corrected annotations.
- The slot three filling datasets **SI Companies**,
SI Flights, and **SI Forex** (Larson et al., 2020)
that contain manually corrected slot labels.
We provide Hugging Face datasets implementations and detailed statistics for all datasets; see Appendix A. Our evaluation setup for the sequence labelling datasets (GUM, CoNLL-2003, SI Companies, SI Flights, and SI Forex) differs from that proposed by Klie et al. (2022). We opt for a sequencelevel setting because it is closer to our envisioned application scenario, as it makes more sense for an annotator to correct the entire sequence of annotations instead of a single one at a time. Specifically, we define errors on the sequence level, i.e. if at least one token annotation differs from the gold annotation, the sequence is treated as an error both during ActiveAED prediction and for evaluation.
During prediction, ActiveAED aggregates tokenlevel error scores by calculating the maximum over all tokens in the sequence. For the other parts of the evaluation setup we follow Klie et al. (2022).3 In all datasets in which we perturbed labels, we resample the label uniformly for 5% of all annotations. We use average precision (AP) as our evaluation metric, which we compute with scikit-learn v1.1.3 (Pedregosa et al., 2011). To be consistent with ActiveAED's application scenario, we cannot 2https://github.com/UniversalDependencies/UD_
English-GUM
3Note, that our results are not comparable with the numbers for the state-of-the-art reported by Klie et al. (2022), because of the different treatment of sequence-labelling datasets. Additionally, for ATIS and SST the choice of randomly perturbed labels differs (but the fraction is the same) and for IMDb the dataset statistics reported by Klie et al. (2022) are different from those of the original dataset (Northcutt et al., 2021),
which we use.
CU 91.7±1.4 80.9±0.5 31.6±1.3 42.7±1.0 98.8±0.1 25.2±0.6 96.1±0.2 84.2±2.0 DM 97.2±0.2 79.2±2.4 30.1±3.0 47.1±1.0 99.3±0.1 30.2±0.7 97.5±0.2 80.6±0.9 AUM (p) 98.0±0.1 78.9±2.3 30.1±3.0 47.1±1.0 99.0±0.1 30.2±0.7 97.3±0.3 81.1±0.9
AUM (l) 97.3±0.4 72.6±0.3 27.5±2.5 39.6±1.3 **99.5±0.1** 29.3±0.2 97.2±0.2 66.6±1.5
ActiveAED 98.6±0.1 86.6±0.5 **36.6±0.1 53.0±0.2** 98.5±0.0 **33.3±0.2 99.3±0.0 89.7±0.6**
w/o active 98.7±0.1 80.3±0.6 36.0±0.4 52.9±0.4 98.4±0.0 31.7±0.4 97.9±0.1 85.5±0.6
ATIS SI-Flights IMDb SST GUM CONLL-2003 SI-Companies SI-Forex
use the standard train/dev/test split practice from supervised learning, because we will not have access to any known errors which we could use for development when we apply ActiveAED to a new dataset. Thus, we select the two datasets ATIS
and SI-Flights as development datasets on which we devise our method, and reserve the remaining datasets for the final evaluation. We report the average and standard deviation across three random seeds. We follow the standard practice in active learning research and simulate the annotator by using gold-standard corrections (Settles, 2012; Zhang et al., 2022). Note, that here, we simulate a single annotator without accounting for inter- and intraannotator variation (Jiang and de Marneffe, 2022; Plank, 2022). We set k = 50 (an ablation for k can be found in Section 5), because this is small enough so that an annotator can handle it in a single annotation session but large enough that gains can be observed after a single iteration on SI Flights.
We stop the prediction loop after 40 iterations or when the whole dataset was annotated. We perform 10-fold cross validation in all experiments.
We describe the remaining hyperparameters in Appendix B.
## 4.2 Baselines
As baselines, we choose the top-performing scorer methods recommended by Klie et al. (2022):
- (Negative) Area-under-the-margin
(AUM) (Pleiss et al., 2020): s AUM
i =
1 E
PE
e=1 maxy′̸=yi pθe
(y′|xi) − pθce
(yi|xi)
- (Negative) Data Map Confidence (DM)
(Swayamdipta et al., 2020): s DM
i =
−
1 E
PE
e=1 pθ
(e) (yi|xi)
- Classification Uncertainty (CU) (Klie et al.,
2022): s CU
i = −pθ∗ (yi|xi),
where AUM and DM are both computed over a single training run and CU is computed with crossvalidation over the test portions using the model θ∗
achieving the lowest test loss for the given fold.
## 5 Results
The results of our evaluation can be found in Table 1. ActiveAED outperforms the three baselines on seven of the eight datasets, with gains ranging from 0.6 to 6 pp AP. We observe a large variance of the AP scores across different datasets, which is in concordance with the findings of Klie et al. (2022).
We suspect that the relatively low scores on IMDb and CoNLL-2003 are because the errors were manually annotated after automatic filtering and thus are limited by the recall of the filtering method. We disentangle the contribution of our proposed ensembling strategy from that of the human-in-the-loop component by ablating the human-in-the-loop (last row in Table 1). We find that on four of the eight datasets, the ensembling alone improves results, whereas on SI Companies, SI Flights, and SI Forex, the main driver for improvements is the humanin-the-loop component. Generally, the human-inthe-loop component improves over the non-active variant on seven out of eight datasets.
A natural question that arises is whether the human-in-the-loop procedure of ActiveAED can also improve AED methods other than our modified version of AUM. To investigate this, we evaluate unmodified versions of (negative) AUM and DM
on SI Flights and ATIS with our human-in-the-loop setup. We find that, for SI Flights, AUM/DM improves by 7.4/6.9 pp AP, whereas for ATIS, DM
improves by 0.8 pp and AUM's result diminishes by 0.2 pp. This suggests that a human in the loop might not be helpful for all combinations of datasets and methods, but that it has the potential to significantly improve results for other methods than ActiveAED.
It is instructive to compare the precision-recall curves of ActiveAED to that of its non-active variant. The graphs for datasets SI Flights and CoNLL2003 can be found in Figure 2. On both datasets, the precision gains are present in the mid-to-high recall regime (> 0.4), which intuitively makes sense, because ActiveAED requires a few rounds of human annotation to produce different outputs than its non-active variant. This suggests that one could increase the efficiency of ActiveAED by starting with a more lightweight AED method, e.g. one that does not require cross validation or ensembling and only later switch to the more compute-intensive ensembling of ActiveAED. We leave the investigation of this option for future work. We describe the ablation study of our proposed ensembling scheme and for different choices of k in Appendix C. Here, we find that test ensembling is crucial, that train ensembling sometimes improves results and that
![4_image_0.png](4_image_0.png)
increasing k for the small SI-Flights dataset harms results. We provide example outputs of ActiveAED
in Appendix E.
## 6 Conclusion
We have proposed ActiveAED, an AED method that includes human feedback in its prediction loop.
While the proposed approach could be used with every ranking-based AED method, we base ActiveAED on the recently proposed AUM score, which we augment with a novel ensembling scheme based on training dynamics. We evaluate ActiveAED on eight datasets spanning five different tasks and find that it improves results on seven of them, with gains of up to six pp AP. In future work, we plan on extending ActiveAED to generative models and structured prediction tasks.
Additionally, we want to use ActiveAED to clean benchmark datasets. We also plan to investigate the reasons for the observed performance gains of ActiveAED, for instance by exploring the role of model capacity and dataset characteristics (Ethayarajh et al., 2022). Finally, we would like to study the interplay between ActiveAED and human label variation (Jiang and de Marneffe, 2022; Plank, 2022).
## Limitations
A major limitation of ActiveAED is that it is significantly more compute-intensive than other scoringbased AED methods such as AUM or DM. This is inherent to the proposed method because the ensemble requires training of multiple models and, after receiving human feedback, the full ensemble has to be re-trained. Also, the ensembling of ActiveAED requires more training runs than trainingdynamics-based AED methods. However, most model-based methods require a cross-validation scheme (Klie et al., 2022). The ensembling component of ActiveAED is more data-efficient than these approaches, because it makes use of the training dynamics captured during cross-validation instead of discarding them. A second limitation of this work is that while we chose baselines that performed strongly in Klie et al. (2022), they represent only a fraction of the scoring-based AED methods described in the literature. Finally, our evaluation is limited to a single language model and it would be interesting to investigate how ActiveAED interacts with larger language models than DistilRoBERTa.
## Ethics Statement
Datasets with fewer annotation errors can improve model training and evaluation. While this generally seems desirable, it is subject to the same dual-use concerns as the NLP models that are improved with AED methods. Additionally, using ActiveAED
instead of AUM or DM can make the AED results more accurate, but that comes at the expense of a higher runtime. This, in turn, leads to increased energy consumption and, depending on the source of the energy, more CO2 released (Strubell et al.,
2019), which is highly problematic in the face of the climate crisis.
## Acknowledgements
We thank the reviewers for their constructive feedback which helped to improve the paper. Many thanks to the members of MaiNLP and NLPNorth for their comments on the paper. This research is in parts supported by European Research Council
(ERC) grant agreement No. 101043235.
## References
Christoph Alt, Aleksandra Gabryszak, and Leonhard Hennig. 2020. TACRED Revisited: A Thorough Evaluation of the TACRED Relation Extraction Task.
In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*, pages 1558–
1569, Online. Association for Computational Linguistics.
Hadi Amiri, Timothy Miller, and Guergana Savova.
2018. Spotting Spurious Data with Neural Networks.
In *Proceedings of the 2018 Conference of the North* American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 2006–2016, New Orleans, Louisiana. Association for Computational Linguistics.
Derek Chong, Jenny Hong, and Christopher Manning.
2022. Detecting label errors by using pre-trained language models. In *Proceedings of the 2022 Conference on Empirical Methods in Natural Language* Processing, pages 9074–9091, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
Markus Dickinson and W. Detmar Meurers. 2003. Detecting Errors in Part-of-Speech Annotation. In *10th* Conference of the European Chapter of the Association for Computational Linguistics, Budapest, Hungary. Association for Computational Linguistics.
Kawin Ethayarajh, Yejin Choi, and Swabha Swayamdipta. 2022. Understanding Dataset Difficulty with $\mathcal{V}$-Usable Information.
Andreas Grivas, Beatrice Alex, Claire Grover, Richard Tobin, and William Whiteley. 2020. Not a cute stroke: Analysis of Rule- and Neural Network-based Information Extraction Systems for Brain Radiology Reports. In Proceedings of the 11th International Workshop on Health Text Mining and Information Analysis, pages 24–37, Online. Association for Computational Linguistics.
Charles T. Hemphill, John J. Godfrey, and George R.
Doddington. 1990. The ATIS Spoken Language Systems Pilot Corpus. In Speech and Natural Language:
Proceedings of a Workshop Held at Hidden Valley, Pennsylvania, June 24-27,1990.
Jinchi Huang, Lie Qu, Rongfei Jia, and Binqiang Zhao.
2019. O2U-Net: A Simple Noisy Label Detection Approach for Deep Neural Networks. In 2019 IEEE/CVF International Conference on Computer Vision (ICCV), pages 3325–3333.
Nan-Jiang Jiang and Marie-Catherine de Marneffe.
2022. Investigating Reasons for Disagreement in Natural Language Inference. Transactions of the Association for Computational Linguistics, 10:1357–
1374.
Diederik P. Kingma and Jimmy Ba. 2015. Adam: A
method for stochastic optimization. In *3rd International Conference on Learning Representations,*
ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings.
Jan-Christoph Klie, Bonnie Webber, and Iryna Gurevych. 2022. Annotation Error Detection: Analyzing the Past and Present for a More Coherent Future. *Computational Linguistics*, pages 1–42.
Pavel Kveto ˘ n and Karel Oliva. 2002. (Semi-)Automatic ˇ
Detection of Errors in PoS-Tagged Corpora. In COLING 2002: The 19th International Conference on Computational Linguistics.
Alexandre Lacoste, Alexandra Luccioni, Victor Schmidt, and Thomas Dandres. 2019. Quantifying the carbon emissions of machine learning. arXiv preprint arXiv:1910.09700.
Stefan Larson, Adrian Cheung, Anish Mahendran, Kevin Leach, and Jonathan K. Kummerfeld. 2020.
Inconsistencies in Crowdsourced Slot-Filling Annotations: A Typology and Identification Methods. In Proceedings of the 28th International Conference on Computational Linguistics, pages 5035–5046, Barcelona, Spain (Online). International Committee on Computational Linguistics.
Stefan Larson, Anish Mahendran, Andrew Lee, Jonathan K. Kummerfeld, Parker Hill, Michael A.
Laurenzano, Johann Hauswald, Lingjia Tang, and Jason Mars. 2019. Outlier Detection for Improved Data Quality and Diversity in Dialog Systems. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies,
Volume 1 (Long and Short Papers), pages 517–527, Minneapolis, Minnesota. Association for Computational Linguistics.
Andrew L. Maas, Raymond E. Daly, Peter T. Pham, Dan Huang, Andrew Y. Ng, and Christopher Potts. 2011. Learning word vectors for sentiment analysis.
In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 142–150, Portland, Oregon, USA. Association for Computational Linguistics.
Alberto Moro and Laura Lonza. 2018. Electricity carbon intensity in European Member States: Impacts on GHG emissions of electric vehicles. *Transportation Research Part D: Transport and Environment*,
64:5–14.
Curtis Northcutt, Anish Athalye, and Jonas Mueller.
2021. Pervasive Label Errors in Test Sets Destabilize Machine Learning Benchmarks. *Proceedings of* the Neural Information Processing Systems Track on Datasets and Benchmarks, 1.
Yaniv Ovadia, Emily Fertig, Jie Ren, Zachary Nado, D. Sculley, Sebastian Nowozin, Joshua Dillon, Balaji Lakshminarayanan, and Jasper Snoek. 2019. Can you trust your model' s uncertainty? Evaluating predictive uncertainty under dataset shift. In *Advances in Neural Information Processing Systems*,
volume 32. Curran Associates, Inc.
F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay. 2011. Scikit-learn: Machine learning in Python. *Journal of Machine Learning Research*,
12:2825–2830.
Barbara Plank. 2022. The "problem" of human label variation: On ground truth in data, modeling and evaluation. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 10671–10682, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
Barbara Plank, Dirk Hovy, and Anders Søgaard. 2014.
Linguistically debatable or just plain wrong? In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 2:
Short Papers), pages 507–511, Baltimore, Maryland.
Association for Computational Linguistics.
Geoff Pleiss, Tianyi Zhang, Ethan Elenberg, and Kilian Q Weinberger. 2020. Identifying Mislabeled Data using the Area Under the Margin Ranking. In *Advances in Neural Information Processing Systems*,
volume 33, pages 17044–17056. Curran Associates, Inc.
Frederick Reiss, Hong Xu, Bryan Cutler, Karthik Muthuraman, and Zachary Eichenberger. 2020. Identifying Incorrect Labels in the CoNLL-2003 Corpus.
In *Proceedings of the 24th Conference on Computational Natural Language Learning*, pages 215–226, Online. Association for Computational Linguistics.
Victor Sanh, Lysandre Debut, Julien Chaumond, and Thomas Wolf. 2020. DistilBERT, a distilled version of BERT: Smaller, faster, cheaper and lighter.
Burr Settles. 2012. Active Learning. *Synthesis Lectures on Artificial Intelligence and Machine Learning*,
6(1):1–114.
Shoaib Ahmed Siddiqui, Nitarshan Rajkumar, Tegan Maharaj, David Krueger, and Sara Hooker. 2022.
Metadata Archaeology: Unearthing Data Subsets by Leveraging Training Dynamics.
Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D. Manning, Andrew Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank.
In *Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing*, pages 1631–1642, Seattle, Washington, USA. Association for Computational Linguistics.
Pontus Stenetorp, Sampo Pyysalo, Goran Topic,´
Tomoko Ohta, Sophia Ananiadou, and Jun'ichi Tsujii.
2012. BRAT: A web-based tool for NLP-assisted text annotation. In *Proceedings of the Demonstrations* at the 13th Conference of the European Chapter of the Association for Computational Linguistics, pages 102–107.
Emma Strubell, Ananya Ganesh, and Andrew McCallum. 2019. Energy and Policy Considerations for Deep Learning in NLP. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3645–3650, Florence, Italy. Association for Computational Linguistics.
Swabha Swayamdipta, Roy Schwartz, Nicholas Lourie, Yizhong Wang, Hannaneh Hajishirzi, Noah A. Smith, and Yejin Choi. 2020. Dataset Cartography: Mapping and Diagnosing Datasets with Training Dynamics. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing
(EMNLP), pages 9275–9293, Online. Association for Computational Linguistics.
Erik F. Tjong Kim Sang and Fien De Meulder.
2003. Introduction to the CoNLL-2003 Shared Task:
Language-Independent Named Entity Recognition.
In *Proceedings of the Seventh Conference on Natural Language Learning at HLT-NAACL 2003*, pages 142–147.
Neeraj Varshney, Swaroop Mishra, and Chitta Baral.
2022. ILDAE: Instance-Level Difficulty Analysis of Evaluation Data. In *Proceedings of the 60th Annual* Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 3412–3425, Dublin, Ireland. Association for Computational Linguistics.
Andreas Vlachos. 2006. Active Annotation. In Proceedings of the Workshop on Adaptive Text Extraction and Mining (ATEM 2006).
Mohammad-Ali Yaghoub-Zadeh-Fard, Boualem Benatallah, Moshe Chai Barukh, and Shayan Zamanirad.
2019. A Study of Incorrect Paraphrases in Crowdsourced User Utterances. In *Proceedings of the 2019* Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 295–306, Minneapolis, Minnesota.
Association for Computational Linguistics.
Amir Zeldes. 2017. The GUM corpus: Creating multilayer resources in the classroom. *Language Resources and Evaluation*, 51(3):581–612.
Shujian Zhang, Chengyue Gong, Xingchao Liu, Pengcheng He, Weizhu Chen, and Mingyuan Zhou.
2022. ALLSH: Active Learning Guided by Local Sensitivity and Hardness.
## D Compute Resources For Experiments E Example Outputs A Datasets B Hyperparameters C Further Ablation Studies
improves results. We hypothesized that, for small datasets, increasing k would lead to worse results.
Our results confirm this. Setting k = 100 leaves results almost unchanged, whereas k = 200 leads to a dramatic drop of 3.9 pp AP on SI-Flights, without affecting performance on the much larger ATIS.
We estimate the total computational cost of our experiments including development of the method to be around 1000 GPU hours on an 80GB A100.
As per the ML CO2 Impact tool (Lacoste et al.,
2019)
5and an average carbon intensity of electricity for Germany of 0.485 kg kWhCO2 (Moro and Lonza, 2018) this amounts to roughly 121 kg CO2 emitted. Example outputs for IMDb and CONLL-2003 can be found in Figure Appendix 3. We show the five instances with the highest error scores assigned by ActiveAED. All instances contain an annotation error.
Table 2 lists statistics for all datasets that we used in this work, together with links to the HuggingFace datasets implementations we provide.
As base model, we choose the 82M parameter model DistilRoBERTa-base4(Sanh et al., 2020),
which is licensed under apache-2.0. In all experiments, we perform 10-fold cross validation.
We manually optimize the hyperparameters of ActiveAED on ATIS and SI Flights, resulting in a learning rate of 5e-5 and a batch size of 64. We adapt the number of epochs to the size of the dataset: for the SI datasets, we set it to 40, for ATIS to 20, for GUM, CoNLL and SST to 10, and for IMDb to 5 and use Adam (Kingma and Ba, 2015). We set the number of instances that the annotator corrects in a single pass k to 50 for all datasets because this is small enough so that an annotator can handle it in a single annotation session but large enough that gains could be observed after a single iteration on SI Flights.
Table 3 gives results for our full ablation study.
We find that for ATIS, where ensembling was the main driver of improved results, ablating both train and test ensembling leads to worse results. For SIFlights, the variant without test ensembling leads to worse results, whereas omitting train ensembling 4https://huggingface.co/distilroberta-base Original Review Label Positive
*SPOILERS AHEAD**<br /><br />It is really unfortunate that a movie so well produced turns out to be<br /><br />such a disappointment.
[...]
Lois Weber's film "Hypocrites" was and still kind of is a very bold and daring film. I enjoyed it and was very impressed by the filming and story of it. […]
I really liked this quirky movie. The characters are not the bland beautiful people that show up in so many movies and on TV. It has a realistic edge, with a captivating story line. The main title sequence alone makes this movie fun to watch.
I went to see this 3 nights ago here in Cork, Ireland. It was the world premiere of it, in the tiny cinema in the Triskel Arts Centre as part of
the Cork Film Festival.<br /><br />I found "Strange Fruit" to be an excellent movie. […]
This movie was pure genius. John Waters is brilliant. It is hilarious and I am not sick of it even after seeing it about 20 times since I bought it a few months ago. The acting is great, although Ricki Lake could have Negative ben better. And Johnny Depp is magnificent. He is such a beautiful man and a very talented actor. And seeing most of Johnny's movies, this is probably my favorite. I give it 9.5/10. Rent it today!
Negative Negative Negative
![8_image_0.png](8_image_0.png)
|I| |Iϵ||Iϵ|
|I| % *|A| |A*ϵ||Aϵ|
|A| % Datasets URL License
ATIS 4978 238 4.8 4978 238 4.8 mainlp/aed_atis LDC
IMDb 25,000 725 2.9 25000 725 2.9 mainlp/pervasive_imdb GPL3 SST 8544 427 5.0 8544 427 5.0 mainlp/aed_sst unknown GUM 1117 552 49.4 13480 929 6.9 mainlp/aed_gum Online CoNLL-2003 18,463 761 4.1 13870 1133 8.2 mainlp/aed_conll Online SI-Companies 500 454 90.8 7310 1650 22.6 mainlp/inconsistencies_companies CC-BY4 SI-Flights 500 224 44.8 2571 420 16.3 mainlp/aed_atis CC-BY4 SI-Forex 520 143 27.5 1632 326 20.0 mainlp/inconsistencies_forex CC-BY4
Table 3: Results of the ablation study. All modifications denote independent changes to ActiveAED. I.e. 'w/o test ens.' is ActiveAED without the test ensembling but with train ensembling and the human-in-the-loop component. Scores with a lower average than that of ActiveAED are in bold.
| ATIS | SI-Flights | |
|----------------|--------------|----------|
| ActiveAED | 98.6±0.1 | 86.6±0.5 |
| w/o active | 98.7±0.1 | 80.3±0.6 |
| w/o test ens. | 98.3±0.1 | 84.3±0.5 |
| w/o train ens. | 97.4±0.3 | 89.2±1.2 |
| k = 100 | 98.5±0.1 | 86.4±0.5 |
| k = 200 | 98.7±0.0 | 82.7±0.7 |
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
✓ A1. Did you describe the limitations of your work?
Left blank.
✓ A2. Did you discuss any potential risks of your work?
Left blank.
✓ A3. Do the abstract and introduction summarize the paper's main claims?
Section 1
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✓ **Did You Use Or Create Scientific Artifacts?** Section 4
✓ B1. Did you cite the creators of artifacts you used?
Section 4
✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
Appendix A
✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
Appendix A
B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
Not applicable. Left blank.
B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
Not applicable. Left blank.
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
Appendix A
## C ✓ **Did You Run Computational Experiments?** Sections 4 And 5, Appendices C, D, E, And F
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
Appendix E
The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance.
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Appendix B
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
Section 4
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
Section 4 D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
No response.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? No response.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
No response.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
No response.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
No response. |
javorsky-etal-2023-assessing | Assessing Word Importance Using Models Trained for Semantic Tasks | https://aclanthology.org/2023.findings-acl.563 | Many NLP tasks require to automatically identify the most significant words in a text. In this work, we derive word significance from models trained to solve semantic task: Natural Language Inference and Paraphrase Identification. Using an attribution method aimed to explain the predictions of these models, we derive importance scores for each input token. We evaluate their relevance using a so-called cross-task evaluation: Analyzing the performance of one model on an input masked according to the other model{'}s weight, we show that our method is robust with respect to the choice of the initial task. Additionally, we investigate the scores from the syntax point of view and observe interesting patterns, e.g. words closer to the root of a syntactic tree receive higher importance scores. Altogether, these observations suggest that our method can be used to identify important words in sentences without any explicit word importance labeling in training. | # Assessing Word Importance Using Models Trained For Semantic Tasks
Dávid Javorský1and **Ondrej Bojar** ˇ
1and **François Yvon**2 1Charles University, Faculty of Mathematics and Physics, Prague, Czechia 2Sorbonne Université, CNRS, ISIR, Paris, France
{javorsky,bojar}@ufal.mff.cuni.cz [email protected]
## Abstract
Many NLP tasks require to automatically identify the most significant words in a text. In this work, we derive word significance from models trained to solve semantic task: Natural Language Inference and Paraphrase Identification. Using an attribution method aimed to explain the predictions of these models, we derive importance scores for each input token. We evaluate their relevance using a so-called crosstask evaluation: Analyzing the performance of one model on an input masked according to the other model's weight, we show that our method is robust with respect to the choice of the initial task. Additionally, we investigate the scores from the syntax point of view and observe interesting patterns, e.g. words closer to the root of a syntactic tree receive higher importance scores. Altogether, these observations suggest that our method can be used to identify important words in sentences without any explicit word importance labeling in training.
## 1 Introduction
The ability to decide which words in a sentence are semantically important plays a crucial role in various areas of NLP (e.g. compression, paraphrasing, summarization, keyword identification). One way to compute (semantic) word significance for compression purposes is to rely on syntactic patterns, using Integer Linear Programming techniques to combine several sources of information (Clarke and Lapata, 2006; Filippova and Strube, 2008). Xu and Grishman (2009) exploit the same cues, with significance score computed as a mixture of TF-IDF
and surface syntactic cues. A similar approach estimates word importance for summarization (Hong and Nenkova, 2014) or learns these significance scores from word embeddings (Schakel and Wilson, 2015; Sheikh et al., 2016).
Significance scores are also useful in an entirely different context, that of explaining the decisions of
![0_image_0.png](0_image_0.png)
Deep Neural Networks (DNNs). This includes investigating and interpreting hidden representations via auxiliary probing tasks (Adi et al., 2016; Conneau et al., 2018); quantifying the importance of input words in the decisions computed by DNNs in terms of analyzing attention patterns (Clark et al.,
2019); or using attribution methods based on attention (Vashishth et al., 2019), back-propagation
(Sundararajan et al., 2017) or perturbation techniques (Guan et al., 2019; Schulz et al., 2020).
Along these lines, DeYoung et al. (2020) present a benchmark for evaluating the quality of modelgenerated rationals compared to human rationals.
In this study, we propose to use such techniques to compute semantic significance scores in an innovative way. We demand the scores to have these intuitive properties: (a) Content words are more important than function words; (b) Scores are contextdependent; (c) Removing low-score words minimally changes the sentence meaning. For this, we train models for two semantic tasks, Natural Lan8846
![1_image_0.png](1_image_0.png)
guage Inference and Paraphrase Identification, and use the attribution approach of De Cao et al. (2020)
to explain the models' predictions. We evaluate the relevance of scores using the so-called *crosstask evaluation*: Analyzing the performance of one model on an input masked according to the other model's weights. We show that our method is robust with respect to the choice of the initial task and fulfills all our requirements. Additionally, hinting at the fact that trained hidden representations encode a substantial amount of linguistic information about morphology (Belinkov et al., 2017), syntax
(Clark et al., 2019; Hewitt and Manning, 2019),
or both (Peters et al., 2018), we also analyze the correlations of our scores with syntactic patterns.
## 2 Method
We assume that sentence-level word significance
(or word importance) is assessed by the amount of contribution to the overall meaning of the sentence.
This means that removing low-scored word should only slightly change the sentence meaning.
The method we explore to compute significance score repurposes attribution techniques originally introduced to explain the predictions of a DNN
trained for a specific task. Attribution methods typically compute sentence level scores for each input word, identifying the ones that contribute most to the decision. By explicitly targeting semantic prediction tasks, we hope to extract attribution scores that correlate well with semantic significance.
Our significance scoring procedure thus consists of two main components: an underlying model and an interpreter. The underlying model is trained to solve a semantic task. We select two tasks: Natural Language Inference (NLI) - classifying the relationship of a premise–hypothesis pair into entailment, neutrality or contradiction - and Paraphrase Identification (PI) - determining whether a pair of sentences have the same meaning.
The interpreter relies on the attribution method proposed by De Cao et al. (2020), seeking to mask the largest possible number of words in a sentence, while at the same time preserving the underlying model's decision obtained from the full sentence pair. The interpreter thus minimizes a loss function comprising two terms: an L0 term, on the one hand, forces the interpreter to maximize the number of masked elements, and a divergence term D∗, on the other hand, aims to diminish the difference between the predictions of the underlying model when given
(a) the original input or (b) the masked input.
We take the outputs of the interpreter, i.e. the attribution scores, as probabilities that given words are not masked. Following De Cao et al. (2020),
these probabilities are computed assuming an underlying Hard Concrete distribution on the closed interval [0, 1], which assigns a non-zero probability to extreme values (0 and 1) (Fig. 9, De Cao et al.,
2020). During interpreter training, a reparametrization trick is used (so that the gradient can be propagated backwards) to estimate its parameters. Given the Hard Concrete distribution output, the attribution score for a token expresses the expectation of sampling a non-zero value, meaning that the token should be masked (Section 2, Stochastic masks, De Cao et al., 2020). We illustrate the process in Figure 1.
## 3 Experimental Setup 3.1 Underlying Models
We use a custom implementation of a variant of the Transformer architecture (Vaswani et al.,
2017) which comprises two encoders sharing their weights, one for each input sentence. This design choice is critical as it allows us to compute importance weights of isolated sentences, which is what we need to do in inference. We then concatenate encoder outputs into one sequence from which a fully connected layer predicts the class, inspired by Sentence-BERT (Reimers and Gurevych, 2019) architecture. See Appendix A.1 for a discussion on the architecture choice, and for datasets, implementation and training details.
## 3.2 Interpreter
We use the attribution method introduced by De Cao et al. (2020). The interpreter consists of classifiers, each processing hidden states of one layer and predicting the probability whether to keep or discard input tokens. See Appendix A.2 for datasets, implementation and training details.1
## 4 Analysis
In our analysis of the predicted masks, we only consider the last-layer classifier, rescaling the values so that the lowest value and the highest value within one sentence receive the scores of zero and one, respectively. All results use the SNLI validation set.
## 4.1 Content Words Are More Important
We first examine the scores that are assigned to content and functional words. We compute the average score for each POS tag (Zeman et al., 2022)
and display the results in Figure 2. For both models, Proper Nouns, Nouns, Pronouns, Verbs, Adjectives and Adverbs have leading scores. Determiners, Particles, Symbols, Conjunctions, Adpositions are scored lower. We observe an inconsistency of the PI model scores for Punctuation. We suppose this reflects idiosyncrasies of the PI dataset:
Some items contain two sentences within one segment, and these form a paraphrase pair only when the other segment also consists of two sentences.
Therefore, the PI model is more sensitive to Punctuation than expected. We also notice the estimated importance of the X category varies widely, which is expected since this category is, based on its definition, a mixture of diverse word types. Overall, these results fulfil our requirement that content words achieve higher scores than function words.
## 4.2 Word Significance Is Context-Dependent
We then question the ability of the interpreter to generate context-dependent attributions, contrasting with purely lexical measures such as TF-IDF.
To answer this question, we compute the distribution of differences between the lowest and highest scores for words having at least 100 occurrences in the training and 10 in the validation data, excluding tokens containing special characters or numerals.
The full distribution is plotted in Figure 3.
Scores extracted from both models report increased distribution density towards larger differ-
![2_image_0.png](2_image_0.png)
ences, confirming that significance scores are not lexicalized, but instead strongly vary according to the context for the majority of words. The greatest difference in scores for PI model is around 0.5, the analysis of the NLI model brings this difference even more towards 1. We explain it by the nature of datasets: It is more likely that the NLI model's decision relies mostly on one or on a small group of words, especially in the case of contradictions.
## 4.3 Cross-Task Evaluation
In this section, we address the validity of importance scores. We evaluate the models using socalled *cross-task evaluation*: For model A, we take its validation dataset and gradually remove a portion of the lowest scored tokens according to the interpreter of model B. We then collect the predictions of model A using the malformed inputs and compare it to a baseline where we randomly remove the same number of tokens. We evaluate both models in this setting, however, since the results for both models have similar properties, we report here only the analysis of the PI model in Table 1.
See Appendix B for the NLI model results.
Table 1 reports large differences in performance when the tokens are removed according to our scores, compared to random removal. When one third of tokens from both sentences is discarded, the PI model performance decreases by 2.5%, whereas a random removal causes a 15.1% drop (Table 1, 4th row and 4th column). The models differ most when a half of the tokens are removed, resulting in a difference in accuracy of 18.3% compared to the baseline (Table 1, 6th row and 6th column).
Examining performance up to the removal of 20%
of tokens, the difference between the random and
| PI Model performance | | | | | | | | | | | |
|------------------------|-------------------------------------------------------------------------------------------|-----------------------------------------------------------------------|-------------------------------------------------------------|---------------------------------------------------|----------|-----------------------------------------|----------|----------|----------|--------------------|----------|
| 0% | 10% | 20% | 30% | 40% | 50% | 60% | 70% | 80% | 90% | 100% | |
| 0% | 85.1↑0.0 | 84.7↑0.7 | 84.5↑4.6 | 83.0↑6.8 | 80.9↑9.1 | 77.7↑12.2 74.3↑12.9 69.3↑10.6 | 62.6↑7.3 | 56.0↑4.0 | 50.0↑0.0 | | |
| 10% | 84.7↑0.9 | 84.7↑2.0 | 84.4↑5.7 | 82.8↑7.6 | 81.0↑9.9 | 77.8↑12.9 74.5↑13.4 69.5↑11.3 | 62.6↑7.5 | 55.8↑3.8 | 50.0↑0.0 | | |
| 20% | 84.2↑4.1 | 84.2↑5.2 | 84.3↑8.3 | 83.0↑10.3 81.5↑12.2 78.4↑14.7 74.9↑14.4 70.1↑12.3 | 63.0↑8.2 | 56.2↑4.3 | 50.0↑0.0 | | | | |
| 30% | 83.1↑6.9 | 83.1↑7.7 | 83.3↑11.0 82.6↑12.6 81.8↑15.0 79.0↑16.1 75.6↑15.7 70.9↑13.3 | 63.5↑8.6 | 56.3↑4.6 | 50.0↑0.1 | | | | | |
| 40% | 80.7↑9.9 | 80.4↑10.4 81.0↑12.7 81.0↑14.0 80.9↑16.1 78.7↑17.9 75.5↑16.1 71.1↑13.7 | 64.2↑9.9 | 56.7↑5.0 | 50.0↑0.1 | | | | | | |
| 50% | 77.3↑11.3 77.5↑11.6 78.1↑13.5 78.9↑15.0 78.8↑16.6 78.0↑18.3 75.2↑17.0 71.2↑15.0 | 64.2↑9.6 | 56.8↑5.0 | 50.0↑0.1 | | | | | | | |
| 60% | 73.6↑11.7 73.9↑12.0 74.4↑13.3 75.9↑15.2 75.3↑16.4 75.9↑17.9 74.4↑17.4 71.2↑15.7 65.3↑11.2 | 57.1↑5.2 | 49.9↓0.2 | | | | | | | | |
| 70% | 68.4↑10.3 68.8↑11.1 68.7↑11.3 70.2↑12.8 70.7↑14.3 71.1↑15.3 71.0↑15.9 70.3↑15.4 66.4↑13.3 | 58.2↑6.0 | 50.0↓0.3 | | | | | | | | |
| 80% | 62.3↑7.3 | 62.3↑7.5 | 62.4↑7.6 | 63.2↑8.7 | 63.6↑9.3 | 64.3↑10.4 64.7↑11.1 65.8↑12.6 67.0↑15.0 | 59.8↑8.2 | 49.7↓0.4 | | | |
| 90% | 56.2↑4.0 | 56.3↑4.1 | 56.5↑4.4 | 56.7↑4.7 | 57.2↑5.3 | 57.2↑5.4 | 57.5↑5.5 | 58.5↑7.1 | 60.5↑8.8 | 63.9↑12.1 50.2↓2.4 | |
| 100% | 50.0↑0.0 | 50.0↓0.0 | 50.0↑0.0 | 50.0↑0.1 | 50.0↑0.2 | 50.1↑0.1 | 50.0↑0.1 | 50.0↓0.1 | 50.1↓0.2 | 50.5↓0.5 | 50.0↑0.0 |
NLI Model **PI Model**
![3_image_0.png](3_image_0.png)
![3_image_1.png](3_image_1.png)
Depth Avg Std Avg Std **Count**
1 **0.52** 0.35 **0.64** 0.31 9424 2 **0.36** 0.36 **0.53** 0.39 27330 3 **0.23** 0.31 **0.40** 0.35 26331 4 0.22 0.31 0.33 0.36 7183 5 0.22 0.30 0.35 0.35 1816
importance-based word removal are not so significant, probably because of the inherent robustness of the PI model which mitigates the effect of the (random) removal of some important tokens. On the other hand, removing half of the tokens is bound to have strong effects on the accuracy of the PI
model, especially when some important words are removed (in the random deletion scheme); this is where removing words based on their low importance score makes the largest difference. At higher dropping rates, the random and the importancebased method tend to remove increasing portions of similar words, and their scores tend to converge (in the limiting case of 100% removal, both strategies have exactly the same effect). Overall, these results confirm that our method is robust with respect to the choice of the initial task and that it delivers scores that actually reflect word importance.
## 4.4 Important Words Are High In The Tree
Linguistic theories differ in ways of defining dependency relations between words. One established approach is motivated by the 'reducibility' of sentences (Lopatková et al., 2005), i.e. gradual removal of words while preserving the grammatical correctness of the sentence. In this section, we
| NLI Model | PI Model | | | |
|---------------------------------------|-----------------|-----------------|-----------|-----------|
| Dependency Relation | Avg | Std | Avg | Std Count |
| det, case, cop, cc, punct, mark -0.50 | 0.37 -0.37 0.49 | 34034 | | |
| advcl, acl, xcomp | 0.11 | 0.43 | 0.06 0.38 | 2789 |
| nsubj | -0.22 | 0.45 | 0.06 0.39 | 9323 |
| punct | -0.53 | 0.35 | 0.24 0.35 | 8148 |
| compound | 0.07 | 0.46 -0.04 0.35 | 2437 | |
study how such relationships are also observable in attributions. We collected syntactic trees of input sentences with UDPipe (Straka, 2018),2 which reflect syntactic properties of the UD format (Zeman et al., 2022).3 When processing the trees, we discard punctuation and compute the average score of all tokens for every depth level in the syntactic trees. We display the first 5 depth levels in Table 2.
We can see tokens closer to the root in the syntactic tree obtain higher scores on average. We measure the correlation between scores and tree levels, resulting in -0.31 Spearman coefficient for the NLI
model and -0.24 for the PI model. Negative coefficients correctly reflect the tendency of the scores to decrease in lower tree levels. It thus appears that attributions are well correlated with word positions in syntactic trees, revealing a relationship between semantic importance and syntactic position.
## 4.5 Dependency Relations
We additionally analyze dependency relations occurring more than 100 times by computing the 2https://lindat.mff.cuni.cz/services/udpipe/
3UD favors relations between content words, function words are systematically leaves in the tree. However, having function words as leaves better matches our perspective of information importance flow, unlike in Gerdes et al. (2018).
score difference between child and parent nodes, and averaging them for each dependency type. In Table 3, we depict relations which have noteworthy properties with respect to significance scores (the full picture is in Appendix C). Negative scores denote a decrease of word significance from a parent to its child. We make the following observations.
The first row of the table illustrates dependencies that have no or very limited contribution to the overall meaning of the sentence. Looking at the corresponding importance scores, we observe that they are consistently negative, which is in line with our understanding of these dependencies.
The second row corresponds to cases of clausal relationships. We see an increase in importance scores. This can be explained since the dependents in these relationships are often heads of a clause, and thus contribute, probably more than their governor, to the sentence meaning. It shows models' ability to detect some deep syntactic connections.
The last block represents relations that are not consistent across the models. Nominal Subject is judged less important in the NLI model than in the PI model. As mentioned in Section 4.1, Punctuation differs similarly. Elements of Compound are preferred in different orders depending on the model. On the other hand, all other relation types are consistent: Ranking each type of dependency relation based on its average score and calculating correlation across our models results in 0.73 Spearman coefficient. This reveals a strong correlation between importance and syntactic roles.
## 5 Conclusion
In this paper, we have proposed a novel method to compute word importance scores using attribution methods, aiming to explain the decisions of models trained for semantic tasks. We have shown these scores have desired and meaningful properties: Content words are more important, scores are context-dependent and robust with respect to the underlying semantic task. In our future work, we intend to exploit these word importance scores in various downstream applications.
## Limitations
Our method of identifying important words requires a dataset for a semantic task (in our case NLI or PI), which limits its applicability. This requirement also prevents us from generalizing our observations too broadly: we tested our method only on one high-resource language where both dependency parsers and NLI / PI datasets are available. Our analysis also lacks the comparison to other indicators of word significance.
## Acknowledgements
The work has been partially supported by the grants 272323 of the Grant Agency of Charles University, 19-26934X (NEUREM3) of the Czech Science Foundation and SVV project number 260 698.
A part of this work has been done at Laboratoire Interdisciplinaire des Sciences du Numérique (LISN)
in Orsay, France.
## References
Yossi Adi, Einat Kermany, Yonatan Belinkov, Ofer Lavi, and Yoav Goldberg. 2016. Fine-grained analysis of sentence embeddings using auxiliary prediction tasks.
In Proceedings of the International Conference on Learning Representations (ICLR).
Yonatan Belinkov, Nadir Durrani, Fahim Dalvi, Hassan Sajjad, and James Glass. 2017. What do neural machine translation models learn about morphology?
In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 861–872, Vancouver, Canada.
Association for Computational Linguistics.
Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. 2015. A large annotated corpus for learning natural language inference.
In *Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing*, pages 632–642, Lisbon, Portugal. Association for Computational Linguistics.
Kevin Clark, Urvashi Khandelwal, Omer Levy, and Christopher D. Manning. 2019. What does BERT
look at? an analysis of BERT's attention. In Proceedings of the 2019 ACL Workshop BlackboxNLP:
Analyzing and Interpreting Neural Networks for NLP,
pages 276–286, Florence, Italy. Association for Computational Linguistics.
James Clarke and Mirella Lapata. 2006. Constraintbased sentence compression: An integer programming approach. In Proceedings of the COLING/ACL
2006 Main Conference Poster Sessions, pages 144–
151, Sydney, Australia. Association for Computational Linguistics.
Alexis Conneau, German Kruszewski, Guillaume Lample, Loïc Barrault, and Marco Baroni. 2018. What you can cram into a single $&!\#* vector: Probing sentence embeddings for linguistic properties. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1:
Long Papers), pages 2126–2136, Melbourne, Australia. Association for Computational Linguistics.
Nicola De Cao, Michael Sejr Schlichtkrull, Wilker Aziz, and Ivan Titov. 2020. How do decisions emerge across layers in neural models? interpretation with differentiable masking. In *Proceedings of the 2020* Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 3243–3255, Online. Association for Computational Linguistics.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics.
Jay DeYoung, Sarthak Jain, Nazneen Fatema Rajani, Eric Lehman, Caiming Xiong, Richard Socher, and Byron C. Wallace. 2020. ERASER: A benchmark to evaluate rationalized NLP models. In *Proceedings* of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4443–4458, Online.
Association for Computational Linguistics.
Katja Filippova and Michael Strube. 2008. Dependency tree based sentence compression. In *Proceedings* of the Fifth International Natural Language Generation Conference, pages 25–32, Salt Fork, Ohio, USA.
Association for Computational Linguistics.
Kim Gerdes, Bruno Guillaume, Sylvain Kahane, and Guy Perrier. 2018. SUD or surface-syntactic Universal Dependencies: An annotation scheme nearisomorphic to UD. In *Proceedings of the Second* Workshop on Universal Dependencies (UDW 2018),
pages 66–74, Brussels, Belgium. Association for Computational Linguistics.
Chaoyu Guan, Xiting Wang, Quanshi Zhang, Runjin Chen, Di He, and Xing Xie. 2019. Towards a deep and unified understanding of deep neural models in NLP. In *Proceedings of the 36th International Conference on Machine Learning*, volume 97 of *Proceedings of Machine Learning Research*, pages 2454–
2463. PMLR.
John Hewitt and Christopher D. Manning. 2019. A
structural probe for finding syntax in word representations. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4129–4138, Minneapolis, Minnesota. Association for Computational Linguistics.
Kai Hong and Ani Nenkova. 2014. Improving the estimation of word importance for news multi-document summarization. In Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics, pages 712–721, Gothenburg, Sweden. Association for Computational Linguistics.
Diederik P Kingma and Jimmy Lei Ba. 2015. Adam: A
method for stochastic gradient descent. In *ICLR: International Conference on Learning Representations*,
pages 1–15.
Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut.
2020. Albert: A lite bert for self-supervised learning of language representations. In *International Conference on Learning Representations*.
Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020.
BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*,
pages 7871–7880, Online. Association for Computational Linguistics.
Markéta Lopatková, Martin Plátek, and Vladislav Kubon. 2005. Modeling syntax of free word-order ˇ languages: Dependency analysis by reduction. In International Conference on Text, Speech and Dialogue, pages 140–147. Springer.
Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, and Michael Auli. 2019. fairseq: A fast, extensible toolkit for sequence modeling. In *Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics (Demonstrations)*,
pages 48–53, Minneapolis, Minnesota. Association for Computational Linguistics.
Matthew E. Peters, Mark Neumann, Luke Zettlemoyer, and Wen-tau Yih. 2018. Dissecting contextual word embeddings: Architecture and representation. In *Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing*, pages 1499–
1509, Brussels, Belgium. Association for Computational Linguistics.
Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100,000+ questions for machine comprehension of text. In *Proceedings of* the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2383–2392, Austin, Texas. Association for Computational Linguistics.
Nils Reimers and Iryna Gurevych. 2019. SentenceBERT: Sentence embeddings using Siamese BERTnetworks. In *Proceedings of the 2019 Conference on* Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3982–3992, Hong Kong, China. Association for Computational Linguistics.
Adriaan MJ Schakel and Benjamin J Wilson.
2015. Measuring word significance using distributed representations of words. *arXiv preprint* arXiv:1508.02297.
Karl Schulz, Leon Sixt, Federico Tombari, and Tim Landgraf. 2020. Restricting the flow: Information bottlenecks for attribution. In *International Conference on Learning Representations*.
Imran Sheikh, Irina Illina, Dominique Fohr, and Georges Linarès. 2016. Learning word importance with the neural bag-of-words model. In Proceedings of the 1st Workshop on Representation Learning for NLP, pages 222–229, Berlin, Germany. Association for Computational Linguistics.
Milan Straka. 2018. UDPipe 2.0 prototype at CoNLL
2018 UD shared task. In Proceedings of the CoNLL
2018 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies, pages 197–207, Brussels, Belgium. Association for Computational Linguistics.
Mukund Sundararajan, Ankur Taly, and Qiqi Yan. 2017.
Axiomatic attribution for deep networks. In *International conference on machine learning*, pages 3319–
3328. PMLR.
Shikhar Vashishth, Shyam Upadhyay, Gaurav Singh Tomar, and Manaal Faruqui. 2019. Attention interpretability across NLP tasks. arXiv preprint arXiv:1909.11218.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. *Advances in neural information processing* systems, 30.
Adina Williams, Nikita Nangia, and Samuel Bowman.
2018. A broad-coverage challenge corpus for sentence understanding through inference. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1112–1122, New Orleans, Louisiana. Association for Computational Linguistics.
Wei Xu and Ralph Grishman. 2009. A parse-and-trim approach with information significance for Chinese sentence compression. In *Proceedings of the 2009* Workshop on Language Generation and Summarisation (UCNLG+Sum 2009), pages 48–55, Suntec, Singapore. Association for Computational Linguistics.
Daniel Zeman et al. 2022. Universal dependencies 2.10.
LINDAT/CLARIAH-CZ digital library at the Institute of Formal and Applied Linguistics (ÚFAL), Faculty of Mathematics and Physics, Charles University.
Yuan Zhang, Jason Baldridge, and Luheng He. 2019.
PAWS: Paraphrase adversaries from word scrambling.
In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 1298–1308, Minneapolis, Minnesota. Association for Computational Linguistics.
## A Training A.1 Underlying Models
Implementation Language modeling often treats the input of semantic classification tasks as a onesequence input, even for tasks involving multiple sentences on the input side (Devlin et al., 2019; Lewis et al., 2020; Lan et al., 2020). However, processing two sentences as one irremediably compounds their hidden representations. As we wish to separate representations of single sentences, we resort to a custom implementation based on the Transformers architecture (Vaswani et al., 2017), which comprises two encoders (6 layers, 8 att. heads, 1024 feed forward net. size, 512 emb. size) sharing their weights, one for each input sentence. Following Sentence-BERT (Reimers and Gurevych, 2019), we computed the mean of the encoder output sentence representations u and v, and concatenated them to an additional |u − v| term. This was passed to a linear layer for performing the final classification. We implemented models in fairseq
(Ott et al., 2019).4 Datasets The NLI model was trained on SNLI
(Bowman et al., 2015)
5, MULTI_NLI (Williams et al., 2018)
6and QNLI (Rajpurkar et al., 2016)
7 datasets. Since QNLI uses a binary scheme ('entailment' or 'non-entailment'), we interpret 'nonentailment' as a neutral relationship. Table 6 describes the NLI training and validation data. The PI
model was trained on QUORA Question Pairs8and PAWS (Zhang et al., 2019)
9 datasets. We swapped a random half of sentences in the data to ensure the equivalence of both sides of the data. Table 7 displays the PI training and validating data.
Training We trained both models using an adaptive learning rate optimizer (α = 3 × 10−4, β1 =
0.9, β2 = 0.98) (Kingma and Ba, 2015) and a inverse square root scheduler with 500 warm-up updates. We trained with 64k maximum batch tokens over 6 epochs with 0.1 dropout regulation.
We trained on an NVIDIA A40 GPU using halfprecision floating-point format FP16, which took less than 2 hours for both models. The PI model and NLI model achieve 85.1% and 78.4% accuracy 4https://github.com/facebookresearch/fairseq 5https://huggingface.co/datasets/snli 6https://huggingface.co/datasets/multi_nli 7https://huggingface.co/datasets/glue\#qnli 8https://huggingface.co/datasets/quora 9https://huggingface.co/datasets/paws
NLI Model performance
0% 10% 20% 30% 40% 50% 60% 70% 80% 90% 100%
0% 78.4↑0.0 78.1↑0.5 77.9↑3.1 77.1↑5.5 75.8↑8.1 72.0↑8.9 68.6↑8.5 63.7↑7.3 55.9↑5.1 46.8↑3.1 33.5↑0.0
10% 78.4↑1.4 78.3↑1.8 78.1↑4.7 77.2↑6.5 75.7↑8.9 72.1↑9.4 68.6↑9.0 63.6↑7.6 55.8↑5.5 46.6↑3.0 33.6↑0.1
20% 78.0↑4.1 77.8↑4.3 77.7↑6.4 77.1↑8.4 75.4↑9.7 72.0↑10.2 68.2↑9.6 63.6↑8.3 55.7↑5.7 46.7↑3.8 33.5↑0.3
30% 77.3↑6.5 77.2↑6.6 77.0↑8.8 76.7↑10.3 74.9↑11.2 71.3↑11.8 68.1↑11.1 63.2↑8.9 55.7↑6.7 46.6↑3.9 33.4↑0.6 40% 76.1↑8.3 76.0↑8.6 75.9↑9.9 75.3↑11.0 74.0↑11.9 71.1↑12.5 67.4↑10.9 63.1↑9.5 55.7↑7.7 47.1↑5.1 33.5↑0.2
50% 72.8↑8.6 72.7↑8.6 73.1↑10.2 72.4↑10.2 71.5↑11.3 69.3↑12.6 66.7↑12.4 62.4↑10.2 55.5↑8.1 46.4↑4.5 33.5↓0.2
60% 68.7↑6.7 68.5↑6.9 68.9↑7.9 68.6↑9.1 67.7↑9.5 66.1↑10.6 64.3↑10.8 60.8↑9.9 54.0↑6.9 45.9↑3.8 33.4↓0.2 70% 63.2↑5.3 63.0↑5.2 63.5↑6.3 62.9↑6.0 62.2↑7.0 61.3↑7.8 60.2↑8.8 58.1↑9.5 52.5↑6.2 45.1↑3.4 33.4↑0.1
80% 57.4↑3.6 57.3↑3.6 57.7↑3.7 57.2↑3.3 57.1↑4.1 56.5↑5.4 55.1↑4.9 53.8↑6.0 50.3↑4.9 44.9↑3.7 33.4↓0.0
90% 52.5↑2.1 52.4↑2.1 52.9↑2.6 52.8↑2.1 52.4↑2.3 51.9↑2.9 51.2↑2.7 49.9↑2.5 47.6↑3.2 43.5↑3.2 33.7↑0.4
100% 42.8↑0.0 42.8↑0.1 43.5↑0.1 43.8↑0.2 44.5↑0.5 44.7↑0.5 45.1↑0.4 44.2↓0.8 43.1↓0.1 40.2↑0.3 33.8↑0.0
NLI Model **PI Model**
![7_image_0.png](7_image_0.png)
![7_image_1.png](7_image_1.png)
![7_image_2.png](7_image_2.png)
Dep. Rel. Avg Std Avg Std Count **Description**
cop -0.74 0.30 -0.74 0.27 1623 Copula, e.g. John is the best dancer; Bill is honest case -0.55 0.35 -0.54 0.30 7651 Case Marking, e.g. the Chair 's office; the office of the Chair punct -0.53 0.35 0.24 0.35 8148 Punctuation, e.g. Go home !
aux -0.51 0.34 -0.67 0.27 4622 Auxiliary, e.g. John has died; he *should* leave cc -0.48 0.32 -0.74 0.23 707 Coordinating Conjunction, e.g. and yellow det -0.45 0.38 -0.55 0.38 14801 Determiner, e.g. the man mark -0.39 0.34 -0.48 0.31 1104 Marker, e.g. before; after; with; *without* nsubj -0.22 0.45 0.06 0.39 9323 Nominal Subject, e.g. *John* won nummod -0.10 0.37 -0.02 0.38 1269 Numeric Modifier, e.g. *forty* dollars, 3 sheep nmod -0.06 0.52 -0.13 0.42 3153 Nominal Modifier, e.g. the office of the *Chair* advmod -0.01 0.51 -0.01 0.41 1299 Adverbial Modifier, e.g. *genetically* modified, *less* often advcl 0.05 0.43 0.05 0.33 857 Adverbial Clause Modifier, e.g. if you know who did it, you should say it compound 0.07 0.46 -0.04 0.35 2437 Compound, e.g. *phone* book; ice cream conj 0.10 0.41 0.03 0.28 742 Conjunct, e.g. big and *yellow* acl 0.11 0.43 0.04 0.41 1367 Adnominal Clause), e.g. the issues as he *sees* them; a simple way to get amod 0.11 0.42 -0.01 0.32 2974 Adjectival Modifier, e.g. big boat obl 0.16 0.47 0.09 0.33 5002 Oblique Nominal, e.g. last *night*, I swam in the *pool* xcomp 0.21 0.41 0.12 0.38 565 Open Clausal Complement, e.g. I started to *work* obj 0.25 0.44 0.12 0.36 4377 Object, e.g. she got a *gift*
![7_image_3.png](7_image_3.png)
on corresponding validating sets, respectively. We consider this performance sufficient given limitations put on the architecture choice.
## A.2 Interpreter
Implementation We use the attribution method introduced by De Cao et al. (2020). Assuming L
layers for the NLI encoder, the interpreter model contains L+1 classifiers. Each classifier is a single-
![7_image_4.png](7_image_4.png)
hidden-layer MLP, which inputs hidden states and predicts binary probabilities whether to keep or discard input tokens. The implementation details closely follow the original work.
Training We trained on the first 50k samples of the corresponding underlying model's training data, using a learning rate α = 3 × 10−5and a divergence constrain D∗ < 0.1. The number of training samples and the rest of hyper-parameters follow the original work. We trained over 4 epochs with a batch size of 64.
## B Cross-Task Evaluation
The performance of the NLI model in the crosstask evaluation, compared to the baseline model, is displayed in Table 4.
## C Dependency Relations
We examined all dependency relations with a frequency greater than 100 by computing the score difference between child and parent nodes, and averaging them for each every dependency type.
Results are in Table 5.
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
✓ A1. Did you describe the limitations of your work?
After Conclusion
✗ A2. Did you discuss any potential risks of your work?
We believe that our work has no potential risks
✓ A3. Do the abstract and introduction summarize the paper's main claims?
1
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✓ **Did You Use Or Create Scientific Artifacts?** Appendix A
✓ B1. Did you cite the creators of artifacts you used?
Appendix A
✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
3.2
✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
3.2
✗ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
We do not publish any data and the data we use are publicly available and used in several studies
✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
Appendix A
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
Appendix A
## C ✓ **Did You Run Computational Experiments?** Appendix A
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
Appendix A
The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance.
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Appendix A
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
Section 4
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
Appendix A
D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
No response.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? No response.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
No response.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
No response.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
No response. |
agrawal-etal-2023-context | In-context Examples Selection for Machine Translation | https://aclanthology.org/2023.findings-acl.564 | Large-scale generative models show an impressive ability to perform a wide range of Natural Language Processing (NLP) tasks using in-context learning, where a few examples are used to describe a task to the model. For Machine Translation (MT), these examples are typically randomly sampled from the development dataset with a similar distribution as the evaluation set. However, it is unclear how the choice of these in context examples and their ordering impacts the output translation quality. In this work, we aim to understand the properties of good in-context examples for MT in both in-domain and out-of-domain settings. We show that the translation quality and the domain of the in-context examples matter and that 1-shot noisy unrelated examples can have a catastrophic impact on output quality. While concatenating multiple random examples reduces the effect of noise, a single good prompt optimized to maximize translation quality on the development dataset can elicit learned information from the pre-trained language model. Adding similar examples based on an n-gram overlap with the test source significantly and consistently improves the translation quality of the outputs, outperforming a strong kNN-MT baseline in 2 out of 4 out-of-domain datasets. | # In-Context Examples Selection For Machine Translation Sweta Agrawal1∗ , Chunting Zhou2**, Mike Lewis**2, Luke Zettlemoyer2, **Marjan Ghazvininejad**2
1 University of Maryland 2 Meta AI
[email protected] {chuntinz,mikelewis,lsz,ghazvini}@meta.com
## Abstract
Large-scale generative models show an impressive ability to perform a wide range of Natural Language Processing (NLP) tasks using in-context learning, where a few examples are used to describe a task to the model. For Machine Translation (MT), these examples are typically randomly sampled from the development dataset with a similar distribution as the evaluation set. However, it is unclear how the choice of these in-context examples and their ordering impacts the output translation quality. In this work, we aim to understand the properties of good in-context examples for MT
in both in-domain and out-of-domain settings.
We show that the translation quality and the domain of the in-context examples matter and that 1-shot noisy unrelated example can have a catastrophic impact on output quality. While concatenating multiple random examples reduces the effect of noise, a single *good* prompt optimized to maximize translation quality on the development dataset can elicit learned information from the pre-trained language model.
Adding similar examples based on an n-gram overlap with the test source significantly and consistently improves the translation quality of the outputs, outperforming a strong kNN-MT
baseline in 2 out of 4 out-of-domain datasets.
## 1 Introduction
In-context learning (Brown et al., 2020) has recently received a lot of attention from the NLP
research community due to its remarkable ability to utilize only a few input-output examples to perform many NLP tasks (Liu et al., 2021). For example, Lin et al. (2021) demonstrate that a 7.5B multilingual generative model, XGLM, outperforms a supervised sequence-to-sequence baseline in 45 translation directions on the FLORES-101 machine translation benchmark (Goyal et al., 2022) using just 32 randomly sampled translation examples
∗ Work done during internship at Meta AI Research.
as demonstrations. While these results are compelling, recent work has also shown that the performance and capability of a pre-trained language model (PLM) can be highly sensitive to many factors, such as the choice of in-context examples (Liu et al., 2022b), their ordering (Lu et al., 2022) and the template (Jiang et al., 2020).
Typically, in-context learning for MT uses examples that are randomly sampled from a small development set that resembles the domain of the test dataset. The effect of the aforementioned factors
(such as the choice of the examples) on the translation quality of the PLM hence remains unclear and unexplored. Yet another crucial gap in using in-context learning for MT in the current literature is the effect of the domain of in-context examples on translation quality since out-of-domain generalization is a known and important challenge in MT
(Koehn and Knowles, 2017).
In this work, we systematically analyze how factors such as the choice and the number of few-shot in-context examples and their ordering impact MT output quality. We show that while noisy unrelated 1-shot example can have a significantly adverse effect on translation quality, a single prompt optimized to maximize the translation quality on a development set can sufficiently elicit task-based information from the PLM. Our analysis thus demonstrates the importance of selecting good examples for MT and raises the question: *What are the properties of good in-context examples for* MT? In that direction, our findings suggest that a well-formed meaning-equivalent translation example results in higher quality translation than randomly selected in-context examples.
Motivated by the use of Translation Memory in Computer-Aided Translation (Yamada, 2011)
and its usage in computational approaches to Machine Translation (Somers, 1999; Koehn and Senellart, 2010; Khandelwal et al., 2020, *inter alia*), we retrieve similar examples to the test source from a datastore that includes pairs of the source text and their corresponding translations via BM25, an unsupervised efficient retriever to provide additional context to the model. We propose a novel incontext example selection and re-ranking strategy to maximize the coverage of the source n-grams in the retrieved examples. Experiments on WMT'19 English↔German and English↔Russian datasets show that our proposed strategy can consistently improve the translation quality over the outputs generated using BM25 retrieved examples. Combining optimized 1-shot task-level with examplespecific in-context examples using a simple concatenation strategy further improves translation quality, outperforming state-of-the-art inferenceadapted nearest-neighbor MT models (kNN-MT)
on two out-of-domain datasets (Medical and IT)
while being memory and compute efficient as our approach does not require constructing and querying a dense token-level datastore.
## 2 Background: In-Context Learning
Generating translations from large-scale multilingual language models like mGPT (Shliazhko et al.,
2022), XGLM (Lin et al., 2021) or AlexaTM
20B (Soltan et al., 2022) requires conditioning the decoder-only language model with in-context parallel examples. These examples serve two purposes: a) providing the model with the format and knowledge of the task (**task-level**) and b)
guiding the output generation via providing useful information about the unseen source sentence
(**example-specific**). This is different from the standard sequence-to-sequence models, where the task is always known, and the model learns generalizable patterns from the input-output examples to perform the task (in this case, translation) for the unseen source text.
Source: *Welche Risiken sind mit* **Poulvac**
## Flufend H5N3 Rg Verbunden?
Template: {Source text} = {Target text}.
Example-Specific: *Welche Risiken sind mit* Sebivo *verbunden?* = What are the risks associated with Sebivo?
Task-Level: Bei PROMESS1 werden drei Hauptziele verfolgt. = PROMESS1 has three main objectives.
Table 1: In-context Examples for Machine Translation.
Formally, given k in-context examples {xi, yi}
k 1 the prefix input or the prompt, x p j
, is generated by concatenating the demonstration examples
{(xi, yi)}
k 1 to the test input, x s j according to a *template*, P (see Table 1). The output, yˆ, is then generated via the PLM with parameters θ via greedy decoding as follows:
$${\hat{y}}_{j,t}={\underset{y_{j,t}^{\prime}}{\operatorname{arg\,max}}}\,P_{\mathrm{PLM}}(y_{j,t}^{\prime}|x_{j}^{p},{\hat{y}}_{j,<t};\theta)\quad{\mathrm{(1)}}$$
## 3 Prompt Selection
Ideally, good in-context examples can trigger the pre-trained language model to generate the **desired**
output and also elicit the information learned during pre-training (Jiang et al., 2020). Min et al.
(2022) show that, for classification tasks, the incontext examples provide information about the task (the distribution of the input text, the label space, and the format of the task) and that the model does not rely on these examples to generate the final output. However, their analysis is limited to a) classification tasks and 2) randomly sampled in-context examples. Prior work has also shown that the order of these in-context examples can also lead to high variance in downstream performance (Zhang et al., 2022). However, less is understood about how these factors impact text generation tasks like MT. Do we need multiple incontext examples? What makes good in-context examples for MT? How sensitive is the model to the order of the prompts?
In this work, we aim to better understand the impact of prompt selection on the translation quality of the outputs. Given a training dataset consisting of n parallel examples D = {xi, yi}
n i=1, and a test source xj , we select a subset of m *informative* samples to form a prompt which either provides task-level and/or example-specific information as discussed below.
## 3.1 Task-Level In-Context Examples
A good task-level in-context example should be able to elicit information learned during pretraining from the PLM. One way to measure the efficacy of an example as a prompt is via computing the translation quality of the outputs generated when prompting the PLM given an example. Hence, we select the task-level prompt as follow: For a given example sampled from the training dataset,
(xi, yi) ∈ DS, we create a prompt, x p i by concatenating the example {(xi, yi)} to each source in the 8858
![2_image_0.png](2_image_0.png)
development set. The system outputs are then generated using equation 1. We then rank examples from DSas task-level prompts based on the BLEU
of the generated outputs against the references on this held-out development set, Ddev = {*X, Y* }:
$$(x_{s},y_{s})=\operatorname*{arg\,max}_{(x,y)\in D^{s}}\mathrm{BLEU}(Y,{\hat{Y}})\qquad(2)$$
## 3.2 Example-Specific In-Context Examples
Prior work on retrieving *good* in-context examplespecific prompts for tasks other than MT (like question answering or knowledge retrieval) either trains a dense-retriever (Rubin et al., 2021) or utilizes samples that are closer to the **test source** in the embedding space of a PLM like BERT (Devlin et al.,
2019), RoBERTa (Liu et al., 2019), or XLNET
models (Liu et al., 2022b). While contextual models can generate a global sentence representation, they overlook rare lexicons which can be important for generating translations in unseen domains like medical or IT (Wrzalik and Krechel, 2021).
However, for MT, overlapping n-grams between the source and the retrieved sentences ensures informativeness as the target associated with the retrieved sentence is likely to include partial translations of the source. We can thus use BM25 as an efficient unsupervised retrieval method to retrieve similar examples. However, as the examples are scored independently and BM25 favors rare word matches (Robertson et al., 2009), the top retrieved candidates might not cover all the terms in the source text (Figure 1). Given that the context window of the PLM is usually limited (∼ 3096 tokens, 16 − 20 examples), maximizing the coverage of all the terms found in the test input might be favorable.
![2_image_1.png](2_image_1.png)
Hence, we propose to re-rank the top 100 candidates retrieved from BM25 using our algorithm outlined in 1. We extract all the word n-grams, and their counts from the test source, x s j and source of the BM25 retrieved examples, {Pj (xi)}
k 1
(lines 2-4). Let S and Q denote the set of the source n-grams and the n-grams from a BM25 retrieved example, respectively. We compute a recall-based
(R) n-gram overlap score (line 7):
$$R_{n}={\frac{\sum_{\mathrm{{ngram}}\in S\cap Q}\mathrm{{Count}}_{\mathrm{{matched}}}(\mathrm{{ngram}})}{\sum_{\mathrm{{ngram}}\in S}\mathrm{{Count}}_{S}(\mathrm{{ngram}})}}\quad(3)$$ $$\mathrm{Score}=\exp({\frac{1}{n}}\sum_{n}\log(R_{n}))\quad(4)$$
The example with the maximum score is then added to the set of selected prompts, and the found n-grams from the test source are then downweighted by a factor, λ, for the next iteration of selection (line 14). For example, setting λ = 0 will select the example that covers the n-grams from the test source in the subsequent iteration that has not already been encountered. This process is then repeated over the retrieved pool until a set threshold of the score is reached.
Figure 1 shows the top-100 candidates retrieved via BM25 for the input: "Welche Risiken sind mit Poulvac FluFend H5N3 RG verbunden?". The top few candidates provide the same information to the PLM, i.e., translation of the phrase "Poulvac FluFend H5N3 RG". The examples including the other terms ("Welche Risiken sind mit verbunden ?") from the input text, are ranked lower. On the
Algorithm 1: An N-gram Recall-based Strategy to Re-rank In-context Examples Input: Prompts {Pj (xi, yi)}
k 1 for the test source x s j, λ, Threshold Output :Ordered Selected Prompts {T = Pj (xi, yi)}
s 1, s ≤ k 1 T ← Empty Ordered List 2 S ← EXTRACT_WORD_NGRAMS_WITH_COUNTS (x
![3_image_0.png](3_image_0.png)
3 for i ∈ {1..k} do 4 Q[i] ← EXTRACT_WORD_NGRAMS_WITH_COUNTS (P
![3_image_1.png](3_image_1.png)
5 **while** *True* do 6 for i ∈ {1..k} do 7 Score[i] ← NGRAM_OVERLAP_SCORE (*S, Q*[i])
8 if max(Score) < *Threshold* **then**
9 *break* 10 T.append(Parg max(Score))
![3_image_2.png](3_image_2.png)
12 Q[arg max(Score)] ← ∅
14 CountS(ngram)× = λ 15 Return T
other hand, our proposed re-ranking strategy can
![3_image_3.png](3_image_3.png)
cover all the terms from the input text, in this case, with just the top-2 examples.
## 4 Evaluation Settings 4.1 Datasets And Evaluation Metric
We perform our in-domain evaluation on the WMT19 German (de) ⇔ English (en) and WMT-19 Russian (ru) ⇔ English (en) datasets (Barrault et al.,
2019). For the out-of-domain evaluation, we use the multi-domain dataset from Aharoni and Goldberg (2020) for the following domains: Medical, Law, IT, and Koran. The dataset statistics are reported in the Appendix (Table 8). Following Ng et al. (2019), we normalize punctuation using Moses (Koehn et al., 2007) and remove sentences longer than 250 tokens and sentence pairs with a source/target length ratio exceeding 1.5 from the in-domain datasets. The detokenized length truncated model-generated outputs are evaluated using sacreBLEU (Papineni et al., 2002; Post, 2018).1 The PLM outputs are truncated to twice the source length, as preliminary analysis suggested degeneration in a few (∼10-20) examples.
## 4.2 Experimental Conditions
Language Model We use the publicly available checkpoint of the XGLM7.5B, a decoder-only multilingual language model (Lin et al., 2021) for all 1https://github.com/mjpost/sacrebleu We also report Comet (Rei et al., 2020) scores for evaluating translation quality in Appendix Tables 14 and 15.
our experiments, which has 32 layers and a hidden dimension of 4096.
Baselines and Comparisons We consider the following comparisons:
- **Random**: p random few-shot examples sampled from the training dataset (number of trials=3).
- **Task-level**: top-p examples that achieve the highest BLEU on the development set (§ 3.1).
- **Retrieved In-context (BM25)**: qmax examples retrieved via BM25, since, unlike task-level examples, there is no guarantee that exactly q similar examples will be found in the training dataset for each input.
- **Retrieved Re-ranked In-context (R-BM25)**:
qmax re-ranked examples using our proposed approach as detailed in § 3.2.
We also compare our results with the state-ofthe-art nearest neighbor-based approach for out-ofdomain evaluation, kNN-MT (Khandelwal et al.,
2020). We use λ = 0.1, threshold=1.0 and order the examples according to their similarity to the source, with the most similar examples on the left in all our experiments (Appendix Tables 9,10).
## 5 Results
Table 2 and 3 summarize the main results for the in-domain and the out-of-domain evaluations.
| Method | p + qmax | En-De | De-En | Ru-En | En-Ru | Avg. |
|---------------------|------------|---------|---------|---------|---------|--------|
| Task-level | 1 + 0 | 23.35 | 32.16 | 30.48 | 25.04 | 27.75 |
| BM25 | 0 + 1 | 19.17 | 25.82 | 24.54 | 21.51 | 22.76 |
| R-BM25 | 0 + 1 | 20.60 | 28.19 | 27.26 | 21.92 | 24.49 |
| Random (Baseline) | 16 + 0 | 24.48 | 31.26 | 30.38 | 25.67 | 27.95 |
| Task-level | 16 + 0 | 23.72 | 31.22 | 30.89 | 27.27 | 28.28 |
| BM25 | 0 + 16 | 26.58 | 32.16 | 31.44 | 28.54 | 29.68 |
| R-BM25 | 0 + 16 | 27.07 | 32.59 | 31.85 | 28.90 | 30.10 |
| R-BM25 | 0 + 17 | 27.00 | 32.68 | 31.88 | 28.80 | 30.09 |
| Task-level + R-BM25 | 1 + 16 | 27.09 | 33.24 | 31.90 | 29.50 | 30.43 |
Table 2: Results on WMT'19 test sets: Concatenating task-level prompt to R-BM25 consistently achieves the best BLEU scores across the board. p and qmax are the number of task-level and example-specific prompts respectively.
## 5.1 In-Domain Evaluation A Single Task-Level Prompt Is Competitive With
16 random few-shot examples. Our experiment suggests that it is possible to elicit the task-level knowledge from the large-scale language model using a single prompt as opposed to using 16 random few-shot examples when translating into English
(Table 2). Using a single task-level prompt (optimized on the development set) improves BLEU
over using 16 random few-shot examples for 2 out of 4 translation directions (De-En, Ru-En). We hypothesize that when translating out of English, the model still benefits from getting exposed to multiple and diverse random few-shot examples as the target language model is relatively weaker.
## Multiple Example-Specific Prompts Are Required
to improve translation quality over a single tasklevel prompt. Using a single task-level (p = 1)
prompt attains higher BLEU over using a single example-specific prompt (q = 1; BM25, RBM25) across the board. By contrast, using up to 16 BM25 prompts (qmax = 16) significantly improves output quality over using task-level prompts, with an average gain of 1.41 in BLEU.
## Re-Ranking Bm25 **Retreived Examples Improves**
BLEU. Our proposed re-ranking strategy consistently improves BLEU across the board over BM25 for both values of qmax = {1, 16} showing that both the order and the choice of the in-context examples matters.
Both task-level and R-BM25 examples provide complementary advantages, as combining them using a simple concatenation strategy improves output quality over task-level or R-BM25 examples.
We leave the exploration of optimizing the number and the joint order of task-level and examplespecific prompts to future work.
## 5.2 Out-Of-Domain Evaluation
As XGLM is trained on monolingual Common Crawl snapshots, translation in any domain and language could be considered an out-of-domain task.
However, we hypothesize that translation in specific domains like medical, law, or IT could still be challenging for the PLM as the model is less likely to have observed sufficient monolingual datasets for these specialized domains, in contrast to the news text found in WMT. Examples from these domains will require translating rare terminology and carry domain-specific idiosyncrasies, which is known to pose a challenge even for a well-trained supervised neural MT model (Koehn and Knowles, 2017). Hence, we also evaluate PLM under these specialized out-of-domain scenarios.
## Domain Of Few-Shot In-Context Examples Matter.
Task-level in-context examples drawn from the domain of evaluation, i.e., domain-specific, obtain on an average higher BLEU scores across the board than using examples from a distant WMT corpus as expected (Table 3) in both 1-shot (p = 1: +1.4)
and 16-shot (p = 16: +2.7) settings.
## Example-Specific Prompts Significantly Improve
translation quality over task-level prompts.
Unlike the in-domain evaluation, retrieved and reranked example-specific prompts (R-BM25) im-
| Method | Corpus | p + qmax | MEDICAL | LAW | IT | KORAN | Avg. |
|---------------------|-----------------|------------|-----------|-------|-------|---------|--------|
| Task-level | Domain-specific | 1 + 0 | 31.23 | 32.10 | 28.70 | 14.68 | 26.68 |
| WMT | 30.08 | 31.10 | 26.72 | 13.19 | 25.27 | | |
| R-BM25 | Domain-specific | 0 + 1 | 52.62 | 55.46 | 40.54 | 13.76 | 40.60 |
| Task-level | Domain-specific | 16 + 0 | 32.65 | 33.68 | 28.81 | 15.30 | 27.61 |
| WMT | 30.14 | 30.76 | 26.19 | 12.72 | 24.95 | | |
| R-BM25 | Domain-specific | 0 + 16 | 56.43 | 59.57 | 46.57 | 17.49 | 45.02 |
| R-BM25 | Domain-specific | 0 + 17 | 56.65 | 59.55 | 46.64 | 17.48 | 45.08 |
| Task-level + R-BM25 | 1 + 16 | 56.76 | 59.56 | 47.50 | 17.55 | 45.34 | |
| kNN-MT | - | - | 54.35 | 61.78 | 45.82 | 19.45 | 45.35 |
Table 3: Results on the Multi-Domain Test Set: Prompting XGLM with R-BM25 in-context examples outperforms kNN-MT on 2 out of 4 domains.
prove the translation quality significantly across the board with up to 23 BLEU gain in the Law domain using just a single example as a prompt over a task-level prompt. This can be attributed to the high lexical overlap in the examples retrieved from the training data for these domains (Table 6).
## Task-Level And R-Bm25 **Prompts Are Complementary.** Both Task-Level And R-Bm25 Provide
supporting information for a given test source sentence as concatenating these set of prompts improves output quality over using these methods independently, outperforming a strong kNN-MT baseline on 2 out of 4 domains (Medical and IT). Where kNN-MT utilizes token-level nearestneighbor inference with representations extracted for bitext using and in combination with a strong supervised MT model to reach the reported translation quality, our approach only uses a sentencelevel unsupervised retrieval (BM25) to provide additional context to the unseen source with a multilingual PLM that has not been trained with any known parallel supervision to reach better or comparable translation quality. Hence, our results provide support for further analysis of the translation abilities of retrieval-augmented PLM on new domains and language pairs.
Our manual analysis suggests that the higher gain obtained in the IT domain (+0.86) with both task-level and example-specific prompts can be explained by the observation that for 100 test source sentences, there are no training examples with any lexical overlap with the test source. The task-level prompt can still elicit learned information from the PLM over using no examples for these inputs.
## 6 Analysis 6.1 Task-Level Example Selection
Choice of Few-shot Examples We show the distribution of output quality as measured by BLEU
when using 100 different examples as prompts in Figure 2. Across all four language pairs, there is a large variation in BLEU scores (up to 20 BLEU),
where noisy or unrelated prompts can lead to significantly worse output quality. Given that most existing parallel corpora are web-crawled and the quality of bitext can vary significantly across different language pairs (Kreutzer et al., 2022), randomly sampled examples can under-estimate the translation quality attainable by prompting the PLM.
| 1-shot Prompts | | |
|----------------------------------------------------------------------------------------------|-------|-------|
| 100 | 1000 | |
| Max | 35.82 | 36.29 |
| Mean | 34.06 | 29.95 |
| Stdev | 0.96 | 9.55 |
| Random 10 trials of best over 100 1-shot Prompts Mean over Max - 36.08 Stdev over Max - 0.18 | | |
Table 4: Task-level example selection from 1000 1-shot Prompts on the WMT'19 development dataset.
Impact of Pool Size on Task-level Prompt Selection We select the best task-level prompt based on the translation quality on the development set from a random sample of 100 examples (pool) as detailed in Section 3.1. However, one concern regarding selecting the best task-level prompt in this
| Features | En-De | De-En | En-Ru | Ru-En |
|--------------------------|---------|---------|---------|---------|
| % (Aligned words) Random | 0.818 | 0.837 | 0.594 | 0.663 |
| Task-level | 0.834 | 0.926 | 0.773 | 0.886 |
| Prism-Src Random | -1.027 | -1.081 | -2.214 | -1.767 |
| Task-level | -0.843 | -0.847 | -1.557 | -1.206 |
Properties of good Task-level prompts Our manual analysis on the best task-level prompts suggests that any well-formed and meaning-equivalent translation (Vyas et al., 2018; Briakou and Carpuat, 2020) could make a good task-level prompt (see Figure 2: BLEU distribution on the WMT'18 test set for 100 randomly sampled 1-shot prompts from the training
![6_image_0.png](6_image_0.png)
dataset. The same set of 100 random 1-shot prompts are used for x→y and y →x translation directions.
fashion could be that we might still be underestimating the PLM (s) performance, as a larger pool size could result in better output quality. We study the impact of using a larger pool size in Table 4 where increasing the number of examples from 100 to 1000 only leads to a gain of 0.5 points in the maximum BLEU. From the same table, we can also observe that for any subset of random 100 fewshot examples, we can extract a task-level prompt
(BLEU: 36) with a small standard deviation in overall output quality (0.18).
examples in Appendix Table 11). To quantify the meaning equivalence of the 1-best task-level prompt against random 1-shot examples, we report the percentage of aligned words between the source and reference translation ("% Aligned words") using fastAlign (Dyer et al., 2013) and the log probability of generating the reference translation conditioned on the source using a pre-trained multilingual NMT model, Prism-src (Thompson and Post, 2020; Agrawal et al., 2021) in Table 5.
2 Across all language pairs and both metrics, task-level examples achieve higher semantic similarity scores than random 1-shot examples suggesting that task-level examples are relatively more equivalent in meaning than random examples.
Impact of Ordering To investigate the sensitivity of the few-shot prompts ordering on MT
quality, we use all possible order permutations of four randomly sampled examples and the top four task-level examples as prompts and report BLEU in Table 7. Task-level prompts are less sensitive to prompt order, as suggested by the lower standard deviation achieved in all settings, and result in higher translation quality than randomly selected examples. Across the three different runs of randomly sampled examples, there is a significant difference in BLEU, further corroborating that the
$\mathbf{x}=\mathbf{x}$.
2https://github.com/clab/fast_align, https:
//github.com/thompsonb/prism
| Dataset | Avg. BLEU (Ix, x) | Corr(BLEU (yˆ, y),BLEU (Ix, x)) | Avg. BLEU (Iy, y) | Corr(BLEU (yˆ, y), BLEU (Iy, y)) |
|-----------|---------------------|-----------------------------------|---------------------|------------------------------------|
| Medical | 35.785 | 0.593 | 32.101 | 0.777 |
| Law | 34.982 | 0.677 | 34.349 | 0.786 |
| IT | 25.196 | 0.497 | 19.382 | 0.669 |
| Koran | 36.033 | -0.016 | 10.364 | 0.676 |
| En-De | De-En | En-Ru | Ru-En |
|-----------------------------------------------------------|-------------------------------------------------------------------------------------------------|---------|---------|
| 34.43 ±0.25 25.19 ±0.26 12.48 ±5.72 15.56 ±0.50 | | | |
| Random | 35.63 ±0.48 25.85 ±0.15 24.99 ±0.21 19.04 ±0.39 34.73 ±0.30 23.93 ±0.28 10.92 ±4.64 17.91 ±0.07 | | |
| Optimized 35.95 ±0.24 26.98 ±0.15 25.85 ±0.11 19.96 ±0.24 | | | |
Table 7: BLEU over all 24 permutations of 3 seeds of 4 randomly selected and top 4 task-level prompts.
## 6.2 Informativeness Of Bm25 Examples
To understand the benefit of retrieved examples in the out-of-domain evaluation, we measure the lexical overlap between the test input (x, y) and the prompts (Ix, Iy) using BLEU (Avg. BLEU (Ix, x), Avg. BLEU (Iy, y)), where Ix and Iy are the sources and target translations of the retrieved incontext examples. We also report the correlation against the output translation quality BLEU(ˆ*y, y*).
Table 6 shows that the source lexical overlap is a good indicator of the informativeness of a prompt for 3 out of 4 domains, with Koran as an exception.
For Koran, while the retrieved sentences have a high overlap with the source (36.03), the target associated with the prompts (Iy) does not get high BLEU with the reference (10.36) compared to other domains. We hypothesize that this might be due to a bias in the reference translations towards a particular output style. We provide examples of this phenomenon in the Appendix Section F.
## 6.3 Size Of The Datastore
Figure 3 shows BLEU when varying the size of the datastore used to retrieve similar in-context examples using BM25 on the Medical dataset. As the size of the datastore increases, the likelihood of retrieving a more similar example increases. However, similar output quality in BLEU can be achieved by using multiple in-context examples when a smaller in-domain datastore is available as multiple examples can provide better coverage of
![7_image_0.png](7_image_0.png)
the source terms - BLEU @q=16 with a datastore size of 100k is equivalent to BLEU @q=1 with twice as many examples (200k).
## 7 Related Work
The selection of in-context examples and their impact on downstream NLP task performance has been studied in prior work for tasks other than MT
(Liu et al., 2022b; Lu et al., 2022; Jiang et al., 2020; Min et al., 2022; Zemlyanskiy et al., 2022; Rubin et al., 2021; Liu et al., 2022a). Garcia and Firat
(2022) use natural language prompts to control the target language in multilingual MT and investigate the effect of scale, number of languages, and their similarity for this phenomena. Wang et al. (2022)
utilize BM25 retrieved training examples in a supervised fashion to learn from similar examples during training. Contrary to prior work, we utilize similar examples to form a textual prompt which is used to guide the generation of a translation during inference.
Prior work on domain adaptation for MT uses domain-specific bilingual or monolingual datasets to improve the translation quality of a neural sequence-to-sequence MT model either during training (Luong and Manning, 2015; Freitag and Al-Onaizan, 2016; Wang et al., 2017) or inference
(Zheng et al., 2021; Khandelwal et al., 2020; Martins et al., 2022). Similar to past work, our work utilizes out-of-domain bitext during inference but instead adapts a PLM on unseen domains. However, our approach does not rely on creating a domainspecific token-level datastore, hence is more compute and memory efficient.
Several concurrent works investigate in-context learning for MT: Zhang et al. (2023) study prompting strategies for MT and examine several factors that could impact translation quality. Garcia et al.
(2023) show the effectiveness of using few-shot examples to control translation formality and also corroborates our finding that the quality of the fewshot in-context examples matter. Ghazvininejad et al. (2023) provide control hints to large language models via bilingual dictionaries to improve the translation of rare words. Our work provides both supporting and complementary pieces of evidence to these studies by a) contributing a systematic analysis showing that the impact of the ordering of the demonstration examples on translation quality is dependent upon the nature and the quality of the examples and b) proposing a novel recall-based reranking approach that overcomes the limitations of BM25-based retrieval for in-context examples selection and optimizes for the selection of multiple prompts for MT. To the best of our knowledge, ours is the first work to jointly optimize the selection of multiple prompts for MT either via combining task-level and example-specific prompts or via directly optimizing the joint utility of multiple example-specific prompts by maximizing the coverage of the selected n-grams.
## 8 Conclusion
We investigate the choice of in-context examples selection for MT in both in-domain and out-ofdomain settings. We propose a novel recall-based re-ranking approach to utilize similar training examples as prompts and show their efficacy across multiple datasets and domains. Our findings show that task-level prompts can provide a complementary advantage to example-specific prompts, outperforming a strong kNN-MT baseline in 2 out of 4 out-of-domain datasets while being memory and compute efficient. Our manual analysis of the generated outputs reveals that the PLM can mimic the style of the in-context examples provided and can be used for template-based translation synthesis.
These results allow future research to evaluate the potential of generating diverse and style-specific outputs for MT.
## 9 Limitations
We note a few limitations of our work: a) while we systematically investigate the choice of in-context examples for both in- and out-of-domain settings for higher-resource language pairs (EnglishGerman, English-Russian), it is unclear how this in-context ability of the PLM varies for the lowerresourced language pairs; b) We only experimented with one pre-trained language model, XGLM. Our preliminary experiments suggested XGLM-7.5B
to result in better translation quality than Bloom7B (Scao et al., 2022) under the same settings.
However, further investigation is required to understand how these results vary across different model scales; c) We analyze different orderings for the few-shot task-level prompts but only examine limited sets of ordering (most similar to the left or right) for the example-specific prompts. As the PLM is shown to be sensitive to the ordering of these in-context examples, it remains an open question to study how to best combine the information from multiple example-specific prompts, with prompt ensembling being a viable option, which we leave to future work.
## References
Sweta Agrawal, George Foster, Markus Freitag, and Colin Cherry. 2021. Assessing reference-free peer evaluation for machine translation. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics:
Human Language Technologies, pages 1158–1171.
Roee Aharoni and Yoav Goldberg. 2020. Unsupervised domain clusters in pretrained language models. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7747–
7763, Online. Association for Computational Linguistics.
Loïc Barrault, Ondˇrej Bojar, Marta R. Costa-jussà, Christian Federmann, Mark Fishel, Yvette Graham, Barry Haddow, Matthias Huck, Philipp Koehn, Shervin Malmasi, Christof Monz, Mathias Müller, Santanu Pal, Matt Post, and Marcos Zampieri. 2019.
Findings of the 2019 conference on machine translation (WMT19). In *Proceedings of the Fourth Conference on Machine Translation (Volume 2: Shared* Task Papers, Day 1), pages 1–61, Florence, Italy. Association for Computational Linguistics.
Eleftheria Briakou and Marine Carpuat. 2020. Detecting Fine-Grained Cross-Lingual Semantic Divergences without Supervision by Learning to Rank. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP),
pages 1563–1580, Online. Association for Computational Linguistics.
Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. *Advances in neural information processing* systems, 33:1877–1901.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics.
Chris Dyer, Victor Chahuneau, and Noah A. Smith.
2013. A simple, fast, and effective reparameterization of IBM model 2. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 644–648, Atlanta, Georgia. Association for Computational Linguistics.
Markus Freitag and Yaser Al-Onaizan. 2016. Fast domain adaptation for neural machine translation.
arXiv preprint arXiv:1612.06897.
Xavier Garcia, Yamini Bansal, Colin Cherry, George Foster, Maxim Krikun, Fangxiaoyu Feng, Melvin Johnson, and Orhan Firat. 2023. The unreasonable effectiveness of few-shot learning for machine translation. *arXiv preprint arXiv:2302.01398*.
Xavier Garcia and Orhan Firat. 2022. Using natural language prompts for machine translation. *arXiv* preprint arXiv:2202.11822.
Marjan Ghazvininejad, Hila Gonen, and Luke Zettlemoyer. 2023. Dictionary-based phrase-level prompting of large language models for machine translation.
arXiv preprint arXiv:2302.07856.
Naman Goyal, Cynthia Gao, Vishrav Chaudhary, PengJen Chen, Guillaume Wenzek, Da Ju, Sanjana Krishnan, Marc'Aurelio Ranzato, Francisco Guzman, and Angela Fan. 2022. The flores-101 evaluation benchmark for low-resource and multilingual machine translation. Transactions of the Association for Computational Linguistics, 10:522–538.
Hui Jiang, Ziyao Lu, Fandong Meng, Chulun Zhou, Jie Zhou, Degen Huang, and Jinsong Su. 2022. Towards robust k-nearest-neighbor machine translation. *arXiv* preprint arXiv:2210.08808.
Zhengbao Jiang, Frank F. Xu, Jun Araki, and Graham Neubig. 2020. How can we know what language models know? Transactions of the Association for Computational Linguistics, 8:423–438.
Urvashi Khandelwal, Angela Fan, Dan Jurafsky, Luke Zettlemoyer, and Mike Lewis. 2020. Nearest neighbor machine translation. In *International Conference* on Learning Representations.
Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, Ondˇrej Bojar, Alexandra Constantin, and Evan Herbst. 2007. Moses: Open source toolkit for statistical machine translation. In Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics Companion Volume Proceedings of the Demo and Poster Sessions, pages 177–180, Prague, Czech Republic. Association for Computational Linguistics.
Philipp Koehn and Rebecca Knowles. 2017. Six challenges for neural machine translation. In *Proceedings* of the First Workshop on Neural Machine Translation, pages 28–39.
Philipp Koehn and Jean Senellart. 2010. Convergence of translation memory and statistical machine translation. In *Proceedings of AMTA Workshop on MT*
Research and the Translation Industry, pages 21–31.
Julia Kreutzer, Isaac Caswell, Lisa Wang, Ahsan Wahab, Daan van Esch, Nasanbayar Ulzii-Orshikh, Allahsera Tapo, Nishant Subramani, Artem Sokolov, Claytone Sikasote, et al. 2022. Quality at a glance: An audit of web-crawled multilingual datasets. *Transactions of* the Association for Computational Linguistics, 10:50–
72.
Xi Victoria Lin, Todor Mihaylov, Mikel Artetxe, Tianlu Wang, Shuohui Chen, Daniel Simig, Myle Ott, Naman Goyal, Shruti Bhosale, Jingfei Du, et al. 2021.
Few-shot learning with multilingual language models.
arXiv preprint arXiv:2112.10668.
Haokun Liu, Derek Tam, Mohammed Muqeeth, Jay Mohta, Tenghao Huang, Mohit Bansal, and Colin Raffel.
2022a. Few-shot parameter-efficient fine-tuning is better and cheaper than in-context learning. *arXiv* preprint arXiv:2205.05638.
Jiachang Liu, Dinghan Shen, Yizhe Zhang, Bill Dolan, Lawrence Carin, and Weizhu Chen. 2022b. What makes good in-context examples for GPT-3? In Proceedings of Deep Learning Inside Out (DeeLIO
2022): The 3rd Workshop on Knowledge Extraction and Integration for Deep Learning Architectures, pages 100–114, Dublin, Ireland and Online. Association for Computational Linguistics.
Pengfei Liu, Weizhe Yuan, Jinlan Fu, Zhengbao Jiang, Hiroaki Hayashi, and Graham Neubig. 2021. Pretrain, prompt, and predict: A systematic survey of prompting methods in natural language processing.
CoRR, abs/2107.13586.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019.
Roberta: A robustly optimized bert pretraining approach. *arXiv preprint arXiv:1907.11692*.
Yao Lu, Max Bartolo, Alastair Moore, Sebastian Riedel, and Pontus Stenetorp. 2022. Fantastically ordered prompts and where to find them: Overcoming fewshot prompt order sensitivity. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 8086–8098.
Minh-Thang Luong and Christopher Manning. 2015.
Stanford neural machine translation systems for spoken language domains. In *Proceedings of the 12th* International Workshop on Spoken Language Translation: Evaluation Campaign, pages 76–79, Da Nang, Vietnam.
Pedro Martins, Zita Marinho, and Andre Martins. 2022.
Efficient machine translation domain adaptation. In Proceedings of the 1st Workshop on Semiparametric Methods in NLP: Decoupling Logic from Knowledge, pages 23–29, Dublin, Ireland and Online. Association for Computational Linguistics.
Sewon Min, Xinxi Lyu, Ari Holtzman, Mikel Artetxe, Mike Lewis, Hannaneh Hajishirzi, and Luke Zettlemoyer. 2022. Rethinking the role of demonstrations: What makes in-context learning work? arXiv preprint arXiv:2202.12837.
Nathan Ng, Kyra Yee, Alexei Baevski, Myle Ott, Michael Auli, and Sergey Edunov. 2019. Facebook FAIR's WMT19 news translation task submission.
In *Proceedings of the Fourth Conference on Machine* Translation (Volume 2: Shared Task Papers, Day 1), pages 314–319, Florence, Italy. Association for Computational Linguistics.
Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In *Proceedings of the* 40th Annual Meeting of the Association for Computational Linguistics, pages 311–318, Philadelphia, Pennsylvania, USA. Association for Computational Linguistics.
Matt Post. 2018. A call for clarity in reporting BLEU
scores. In Proceedings of the Third Conference on Machine Translation: Research Papers, pages 186–
191, Brussels, Belgium. Association for Computational Linguistics.
Ricardo Rei, Ana C Farinha, José G.C. de Souza, Pedro G. Ramos, André F.T. Martins, Luisa Coheur, and Alon Lavie. 2022. Searching for COMETINHO: The little metric that could. In *Proceedings of the 23rd* Annual Conference of the European Association for Machine Translation, pages 61–70, Ghent, Belgium.
European Association for Machine Translation.
Ricardo Rei, Craig Stewart, Ana C Farinha, and Alon Lavie. 2020. COMET: A neural framework for MT
evaluation. In *Proceedings of the 2020 Conference*
on Empirical Methods in Natural Language Processing (EMNLP), pages 2685–2702, Online. Association for Computational Linguistics.
Stephen Robertson, Hugo Zaragoza, et al. 2009. The probabilistic relevance framework: Bm25 and beyond. Foundations and Trends® *in Information Retrieval*, 3(4):333–389.
Ohad Rubin, Jonathan Herzig, and Jonathan Berant.
2021. Learning to retrieve prompts for in-context learning. *arXiv preprint arXiv:2112.08633*.
Teven Le Scao, Angela Fan, Christopher Akiki, Ellie Pavlick, Suzana Ilic, Daniel Hesslow, Roman ´
Castagné, Alexandra Sasha Luccioni, François Yvon, Matthias Gallé, et al. 2022. Bloom: A 176bparameter open-access multilingual language model.
arXiv preprint arXiv:2211.05100.
Oleh Shliazhko, Alena Fenogenova, Maria Tikhonova, Vladislav Mikhailov, Anastasia Kozlova, and Tatiana Shavrina. 2022. mgpt: Few-shot learners go multilingual. *arXiv preprint arXiv:2204.07580*.
Saleh Soltan, Shankar Ananthakrishnan, Jack FitzGerald, Rahul Gupta, Wael Hamza, Haidar Khan, Charith Peris, Stephen Rawls, Andy Rosenbaum, Anna Rumshisky, et al. 2022. Alexatm 20b: Few-shot learning using a large-scale multilingual seq2seq model. *arXiv preprint arXiv:2208.01448*.
Harold Somers. 1999. Example-based machine translation. *Machine translation*, 14(2):113–157.
Brian Thompson and Matt Post. 2020. Automatic machine translation evaluation in many languages via zero-shot paraphrasing. In *Proceedings of the 2020* Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 90–121.
Yogarshi Vyas, Xing Niu, and Marine Carpuat. 2018.
Identifying semantic divergences in parallel text without annotations. In *Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers)*, pages 1503–1515, New Orleans, Louisiana. Association for Computational Linguistics.
Rui Wang, Masao Utiyama, Lemao Liu, Kehai Chen, and Eiichiro Sumita. 2017. Instance weighting for neural machine translation domain adaptation. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1482–1488, Copenhagen, Denmark. Association for Computational Linguistics.
Shuohang Wang, Yichong Xu, Yuwei Fang, Yang Liu, Siqi Sun, Ruochen Xu, Chenguang Zhu, and Michael Zeng. 2022. Training data is more valuable than you think: A simple and effective method by retrieving from training data. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 3170–
3179.
Marco Wrzalik and Dirk Krechel. 2021. CoRT: Complementary rankings from transformers. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics:
Human Language Technologies, pages 4194–4204, Online. Association for Computational Linguistics.
Masaru Yamada. 2011. The effect of translation memory databases on productivity. Translation research projects, 3:63–73.
Yury Zemlyanskiy, Michiel de Jong, Joshua Ainslie, Panupong Pasupat, Peter Shaw, Linlu Qiu, Sumit Sanghai, and Fei Sha. 2022. Generate-and-retrieve:
Use your predictions to improve retrieval for semantic parsing. In *Proceedings of the 29th International Conference on Computational Linguistics*,
pages 4946–4951.
Biao Zhang, Barry Haddow, and Alexandra Birch. 2023.
Prompting large language model for machine translation: A case study. *arXiv preprint arXiv:2301.07069*.
Rongzhi Zhang, Yue Yu, Pranav Shetty, Le Song, and Chao Zhang. 2022. Prompt-based rule discovery and boosting for interactive weakly-supervised learning.
In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 745–758, Dublin, Ireland.
Association for Computational Linguistics.
Xin Zheng, Zhirui Zhang, Shujian Huang, Boxing Chen, Jun Xie, Weihua Luo, and Jiajun Chen. 2021. Nonparametric unsupervised domain adaptation for neural machine translation. In *Findings of the Association for Computational Linguistics: EMNLP 2021*,
pages 4234–4241.
## A Statistics Of Datasets
| Dataset | Train | Dev | Test |
|----------------------|---------|-------|--------|
| WMT-19 (de) | 42M | 2998 | 2000 |
| WMT-19 (ru) | 10M | 3000 | 2000 |
| Multi-Domain Medical | 248K | 2000 | 2000 |
| Law | 467K | 2000 | 2000 |
| IT | 223K | 2000 | 2000 |
| Koran | 17K | 2000 | 2000 |
Table 8 includes statistics of training, development and test sets used for the experiments discussed in the paper.
Table 8: Dataset Statistics.
## B Compute Infrastructure & Run Time
Each experiment is run on a single Nvidia Tesla V100 Volta GPU machine with 32G Ram. A single inference experiment on 2000 test examples using XGLM with 16 in-context examples takes around 3-4 hrs to complete.
## C Results Using Second Metric: Comet
We report translation quality using Comet (Rei et al., 2020) in Tables 14 and 15. We use the eamt22-cometinho-da model (Rei et al., 2022)
to generate the scores as it was shown to achieve higher correlations with human judgments than lexical overlap metrics while being computationally efficient. Our re-ranking strategy (with qmax = 16)
consistently performs the best across the board except for Koran, outperforming strong kNN-MT
baselines on the multi-domain test set in 3 out of 4 settings. Adding a task-level prompt to 16 RBM25 prompts via concatenation further improves quality in 5 out of 8 settings.
## D Hyperparameter Search D.1 Order Of Bm25 Retrieved Examples
We report BLEU when using two different orderings of example-specific prompts on the development set for the medical domain. Ordering the examples with the most similar examples on the left attains higher BLEU than the right-to-left order.
We note that the trend could vary depending on the noise in the training dataset, the degree of similarity, and the number of retrieved examples. We leave the exploration of the ordering of example-specific prompts to future work.
Table 9: BLEU using two different orderings of the top-16 example-specific BM25 prompts on the Medical development Set.
## D.2 Choice Of Λ**, Threshold**
Table 10 shows the BLEU and the average number of in-context examples selected when varying λ and the threshold described in Section 3.2. We select λ = 0.1 and threshold value of 1.0 as it achieves the best BLEU on the Medical development set as shown below:
| λ | BLEU |
|---------------|--------|
| Left-to-right | 56.84 |
| Right-to-left | 54.97 |
| λ | Threshold | BLEU | Avg. # of Examples |
|-----|-------------|--------|----------------------|
| 0.1 | 0.1 | 54.55 | 14.16 |
| 1.0 | 54.56 | 12.73 | |
| 5.0 | 53.35 | 8.83 | |
| 0.3 | 0.1 | 54.47 | 15.06 |
| 1.0 | 54.51 | 14.28 | |
| 5.0 | 53.98 | 10.32 | |
| 0.5 | 0.1 | 54.44 | 15.44 |
| 1.0 | 54.39 | 15.10 | |
| 5.0 | 54.44 | 11.85 | |
Table 10: BLEU using different values of λ and threshold on the Medical Development Set (qmax = 16).
## E Example Task-Level Prompts
Table 11 shows the best task-level in-context example selected by our method described in § 3.1 and the respective BLEU scores on the development set for the German-English and Russian-English tasks.
## F Output Analysis
We report two interesting findings when prompting PLM with task-level and example-specific prompts:
Stylistic Outputs One advantage of using a single task-level in-context example to prompt the PLM is that it allows us to systematically study how the choice of prompt influences the style of the generated translation. Table 12 illustrates one German: Beispielsweise der Änderungsantrag zu Artikel 5 in der Stellungnahme des Ausschusses für Landwirtschaft und ländliche Entwicklung weist klar und deutlich darauf hin, dass die Verschlechterung der Qualität des Bodens lokale oder regionale Ursachen und Wirkungen hat und daher unbedingt nationale statt europäischer Maßnahmen ergriffen werden müssen.
English: For example, the amendment to Article 5 in the opinion of the Committee on Agriculture and Rural Development clearly indicates that the degradation of the soil has local or regional causes and effects and it is therefore essential to adopt national as opposed to European measures.
## Development Bleu: 35.82
Russian: Если ваш браузер возвращает ранее сохраненный "cookie", то управляющий им поставщик имеет возможность соединить актуальное посещение пользователя с предыдущими посещениями, но только в отношении своего содержания.
English: If the browser sends back an earlier saved cookie, then the service managing these can connect to the users earlier visit, but only in ´
respect of their own content.
## Development Bleu: 25.63
Table 11: Best task-level prompt For De-En and Ru-En Language Pairs according to the BLEU score on the development set.
such example: we can observe that as the prompt includes a *contraction* ("we are" vs. "we're"), the outputs generated by the PLM also include contractions and can be incorrectly penalized by BLEU
while being meaning equivalent.
Template-based MT Template-based translation in medical, legal, it, or e-commerce domain can be preferable as they reduce the risk of generating errors in automatically generated translations. We present some examples in Table 13 on how PLM
can seamlessly use retrieved prompts to synthesize a translation from the template provided.
Prompt: Wegen des heißen Sommers fangen wir erst spät an. = Because of the hot summer, we're late getting started.
Source: Ja, ich bin sehr zufrieden mit dem Auftritt.
Reference: Yes, I am very happy with the performance.
PLM **Output:** Yes, I'm very satisfied with the performance.
Source: Es ist eine andere Unternehmenskultur. Reference: It is a different corporate culture.
PLM **Output:** It's a different corporate culture.
Table 12: Outputs mimic the style of the prompt.
Prompt: WIE IST SINGULAIR ANZUWENDEN? = HOW TO TAKE SINGULAIR
Source: WIE IST EVOLTRA ANZUWENDEN?
PLM Output: HOW TO TAKE EVOLTRA
Prompt: Zeigt die aktuelle Datei mit Opera an.
= View the current file with Opera.
Source: Zeigt die aktuelle Datei mit Lynx an
(Textbasierter Browser).
PLM **Output:** View the current file with Lynx
(Text-based browser).
Table 13: Outputs follow the template of the prompt.
| Method | p + qmax | En-De | De-En | Ru-En | En-Ru |
|---------------------|------------|---------|---------|---------|---------|
| Task-level | 1 + 0 | 0.354 | 0.403 | 0.428 | 0.626 |
| BM25 | 0 + 1 | 0.107 | 0.149 | 0.139 | 0.346 |
| R-BM25 | 0 + 1 | 0.204 | 0.249 | 0.244 | 0.413 |
| Random-Avg | 16 + 0 | 0.387 | 0.391 | 0.424 | 0.636 |
| Task-level | 16 + 0 | 0.389 | 0.381 | 0.440 | 0.662 |
| BM25 | 0 + 16 | 0.423 | 0.410 | 0.434 | 0.673 |
| R-BM25 | 0 + 16 | 0.438 | 0.420 | 0.444 | 0.677 |
| R-BM25 | 0 + 17 | 0.440 | 0.421 | 0.448 | 0.676 |
| Task-level + R-BM25 | 1 + 16 | 0.434 | 0.430 | 0.447 | 0.694 |
Table 14: Comet Scores on WMT'19 test sets.
Table 15: Comet Scores on the Multi-Domain Test Set.
| Method | Corpus | p + qmax | MEDICAL | LAW | IT | KORAN |
|---------------------------------------------------|-----------------|------------|-----------|--------|--------|---------|
| Results from Jiang et al. (2022) Vanilla kNN-MT - | - | 0.548 | 0.662 | 0.531 | -0.014 | |
| Their model | - | - | 0.578 | 0.703 | 0.585 | 0.047 |
| Task-level | Domain-specific | 1 + 0 | 0.314 | 0.320 | 0.240 | -0.068 |
| WMT | 0.277 | 0.345 | 0.146 | -0.113 | | |
| R-BM25 | Domain-specific | 0 + 1 | 0.464 | 0.553 | 0.389 | -0.216 |
| Task-level | Domain-specific | 16 + 0 | 0.369 | 0.365 | 0.222 | -0.047 |
| WMT | 0.297 | 0.399 | 0.098 | -0.131 | | |
| R-BM25 | Domain-specific | 0 + 16 | 0.697 | 0.697 | 0.666 | -0.105 |
| R-BM25 | Domain-specific | 0 + 17 | 0.699 | 0.697 | 0.667 | -0.104 |
| Task-level + R-BM25 | 1 + 16 | 0.701 | 0.699 | 0.721 | -0.095 | |
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
✓ A1. Did you describe the limitations of your work?
Section 9
✓ A2. Did you discuss any potential risks of your work?
Section 9 (Limitation 1). In our work, we study and improve the translation ability of large-scale language models for higher resource language pairs only. It still remains an open question on how these abilities transfer to the lower-resourced language pairs. Furthermore, getting reliable and consistent outputs from generative language models is a known problem: https://openreview.net/forum?id=98p5x51L5af.
Furthermore,
✓ A3. Do the abstract and introduction summarize the paper's main claims?
Yes, our abstract summarizes the main results and takeaways.
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✓ **Did You Use Or Create Scientific Artifacts?** Section 4
✓ B1. Did you cite the creators of artifacts you used?
Section 4
B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
Not applicable. Left blank.
✗ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
While large-scale generative models are not directly intended to be used for machine translation and many other downstream NLP tasks, they have been shown to be able to utilize very few examples to perform these tasks. Our work studies this phenomenon and provides analysis and modifications to improve this capability.
B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
Not applicable. Left blank.
✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
Section 4
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
Appendix Table A, B and Section 4.
The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance.
## C ✓ **Did You Run Computational Experiments?** Section 4 And Appendix Table B.
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
Section 4.2 and Appendix Table B.
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Section 4 and Appendix Table D.
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
Sections 4, 5 and 6 and Appendix Table C
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
Section 4.1 (Footnote 1).
## D ✗ **Did You Use Human Annotators (E.G., Crowdworkers) Or Research With Human Participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
Not applicable. Left blank.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? Not applicable. Left blank.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
Not applicable. Left blank.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
Not applicable. Left blank.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
Not applicable. Left blank. |
chen-etal-2023-propsegment | {P}rop{S}egm{E}nt: A Large-Scale Corpus for Proposition-Level Segmentation and Entailment Recognition | https://aclanthology.org/2023.findings-acl.565 | The widely studied task of Natural Language Inference (NLI) requires a system to recognize whether one piece of text is textually entailed by another, i.e. whether the entirety of its meaning can be inferred from the other. In current NLI datasets and models, textual entailment relations are typically defined on the sentence- or paragraph-level. However, even a simple sentence often contains multiple propositions, i.e. distinct units of meaning conveyed by the sentence. As these propositions can carry different truth values in the context of a given premise, we argue for the need to recognize the textual entailment relation of each proposition in a sentence individually. We propose PropSegmEnt, a corpus of over 45K propositions annotated by expert human raters. Our dataset structure resembles the tasks of (1) segmenting sentences within a document to the set of propositions, and (2) classifying the entailment relation of each proposition with respect to a different yet topically-aligned document, i.e. documents describing the same event or entity. We establish strong baselines for the segmentation and entailment tasks. Through case studies on summary hallucination detection and document-level NLI, we demonstrate that our conceptual framework is potentially useful for understanding and explaining the compositionality of NLI labels. | # Propsegment**: A Large-Scale Corpus For** Proposition-Level Segmentation And Entailment Recognition Sihao Chen*1,2 Senaka Buthpitiya1 Alex Fabrikant1 Dan Roth2 **Tal Schuster**1
1Google Research 2University of Pennsylvania
{senaka,fabrikant,talschuster}@google.com, {sihaoc,danroth}@cis.upenn.edu
## Abstract
The widely studied task of Natural Language Inference (NLI) requires a system to recognize whether one piece of text is textually entailed by another, i.e. whether the *entirety* of its meaning can be inferred from the other. In current NLI datasets and models, textual entailment relations are typically defined on the sentence- or paragraph-level. However, even a simple sentence often contains multiple *propositions*, i.e. distinct units of *meaning* conveyed by the sentence. As these propositions can carry different truth values in the context of a given premise, we argue for the need to recognize the textual entailment relation of each proposition in a sentence individually.
We propose PROPSEGMENT, a corpus of over 45K propositions annotated by expert human raters. Our dataset structure aligns with the tasks of (1) segmenting sentences within a document to the set of propositions, and (2) classifying the entailment relation of each proposition with respect to a different yet topicallyaligned document, i.e. documents describing the same event or entity. We establish strong baselines for the segmentation and entailment tasks. Through case studies on summary hallucination detection and document-level NLI,
we demonstrate that our conceptual framework is potentially useful for understanding and explaining the compositionality of NLI labels.
## 1 **Introduction**
Natural Language Inference (NLI), or Recognizing Textual Entailment (RTE), is the task of determining whether the meaning of one text expression can be inferred from another (Dagan and Glickman, 2004). Given two pieces of text (*P, H*), we say the premise P *entails* the hypothesis H if the *entirety* of H's meaning can be most likely inferred true after a human reads P. If some units of meaning in H are contradicted by, or cannot be determined Premise Document Andrew Warhola, known as Andy Warhol, is an American artist born August 6, 1928 in Pittsburgh, Pennsylvania and died February 22, 1987 in New York. He is one of the main representatives of pop art. Warhol is known the world over for his work as a painter, music producer, author, avantgarde films... (7 more sentences omitted)
Hypothesis Sentence
(*from another document of the same topic*) ... The Andy Warhol Museum in his hometown, Pittsburgh, Pennsylvania, contains an extensive permanent collection of art. ... Propositions Entailment Label The Andy Warhol Museum in his hometown, Pittsburgh, Pennsylvania, contains an extensive permanent collection of art.
| The Andy Warhol Museum in his hometown, Pittsburgh, Pennsylvania, contains an extensive permanent collection of art. | Neutral |
|---|------------|
| The Andy Warhol Museum in his hometown, Pittsburgh, Pennsylvania, contains an extensive permanent collection of art. | Entailment |
| The Andy Warhol Museum in his hometown, Pittsburgh, Pennsylvania, contains an extensive permanent collection of art. | Neutral |
Table 1: An example instance from the PROPSEG-MENT dataset with propositions (marked as token subsets highlighted in blue) and their entailment labels.
from P, we describe the relation between the two as contradiction or *neutral* (de Marneffe et al., 2008)
respectively. This fundamentally challenging natural language understanding task provides a general interface for semantic inference and comparison across different sources of textual information.
In reality, most naturally occurring text expressions are composed of a variable number of *propositions*, i.e. distinct units of meaning conveyed by the piece of text. Consider the sentence shown in Table 1: "*The Andy Warhol Museum in his hometown, Pittsburgh, Pennsylvania, contains an extensive permanent collection of art.*" Despite the sentence being relatively compact, it still contains (at least) three propositions, as listed in Table 1. While the entire hypothesis would be classified as *neutral* or *not-entailed* to the premise, one of its proposi-
* Work done as an intern at Google tions "*Andy Warhol's hometown is in Pittsburgh,*
Pennsylvania" is in fact entailed by the premise, while the premise provides no support for the other two propositions. This phenomenon, namely *partial entailment* (Levy et al., 2013), is a blind spot for existing sentence- or paragraph-level NLI formulations. When a hypothesis is *compositional*, NLI
labels coarsely defined on the sentence/paragraphlevel cannot express the difference between partial entailment from the non-entailment cases.
This work argues for the need to study and model textual entailment relations on the level of *propositions*. As NLI tasks and applications typically involve different genre of text with variable length and number of propositions (Yin et al.,
2021), decomposing textual entailment relation to the propositional level provides a more fine-grained yet accurate description of textual entailment relation between two arbitrary text expressions.
Modeling *propositional textual entailment* provides a more unified inference format across NLI tasks, and would potentially improve the generalization capabilities of NLI models, e.g. with respect to the variability in input lengths (Schuster et al., 2022).
We propose PROPSEGMENT, a multi-domain corpus with over 45K human-annotated propositions. 1 We define the tasks of proposition-level segmentation and entailment. Given a hypothesis sentence and a premise document, a system is expected to segment the hypothesis into the set of propositions, and recognize whether each proposition can be inferred from the premise.
Interestingly, we observe that existing notions of proposition adopted by Open Information Extraction (OpenIE) or Semantic Role Labeling (SRL)
(Baker et al., 1998; Kingsbury and Palmer, 2002; Meyers et al., 2004) often fail to account for the complete set of propositions in a sentence, partly due to the fact that predicates and arguments in different propositions do not necessarily follow the same granularity (§2). We therefore adopt a more flexible and unified way of representing a proposition as a *subset of tokens* from the input sentence, without explicitly annotating the semantic role or predicate-argument structure within the proposition, as illustrated in Table 1. We discuss the motivation and design desiderata in §2.
We construct PROPSEGMENT by sampling clusters of topically-aligned documents, i.e. documents focusing on the same entity or event, from WIKIPEDIA (Schuster et al., 2022) and the news domains (Gu et al., 2020). We train and instruct expert annotators to identify all propositions exhaustively in a document, and label the textual entailment relation of each proposition with respect to another document in the cluster, viewed as the premise.
We discuss the modeling challenges, and establish strong baselines for the segmentation and entailment tasks. We demonstrate the utility of our dataset and models through downstream use case studies on summary hallucination detection
(Maynez et al., 2020), and DocNLI (Yin et al.,
2021), through which we show that recognizing and decomposing entailment relations at the proposition-level could provide fine-grained characterization and explanation for NLI-like tasks, especially with long and compositional hypotheses.
In summary, the main contributions in our paper include: (1) Motivating the need to recognize textual entailment relation on proposition level; (2)
Introducing the first large-scale dataset for studying proposition-level segmentation and entailment recognition; and (3) Leveraging PROPSEGMENT
to train Seq2Seq models as strong baselines for the tasks, and demonstrating their utility in documentlevel NLI and hallucination detection tasks.
## 2 **Motivations & Design Challenges**
Our study concerns the challenges of applying NLI/RTE task formulations and systems in *realworld* downstream applications and settings. As textual entailment describes the relation between the meanings of two text expressions, one natural type of downstream use cases for NLI systems is to identify alignments and discrepancies between the semantic content presented in different documents/sources (Kryscinski et al., 2020; Schuster et al., 2021; Chen et al., 2022).
Our study is motivated by the task of comparing the content of topically-related documents, e.g.
news documents covering the same event (Gu et al.,
2020), or Wikipedia pages from different languages for similar entities (Schuster et al., 2022). As existing NLI datasets typically define the textual entailment relations at the sentence or paragraph level
(Bowman et al., 2015; Williams et al., 2018), NLI
systems trained on such resources can only recognize whether or not the entirety of a hypothesis sentence/paragraph is entailed by a premise. However, we estimate that, in these two domains, around
![2_image_0.png](2_image_0.png)
![2_image_1.png](2_image_1.png)
90% of the sentences that convey any informational propositions contain more than one proposition
(Figure 1). In the presence of multiple propositions, partial entailment (Levy et al., 2013) describes the phenomenon where only a subset of propositions in the hypothesis is entailed by the premise.
Partial entailment is 3× **more common than**
full-sentence entailment. In our corpus, we observe that, given two topically related documents from news or Wikipedia, 46% of sentences in one document have at least some information supported by the other document (Figure 2). But 74% of such sentences are *partially entailed*, with only some propositions supported by the other document. In this sense, a sentence-level NLI model can only detect a quarter of sentences that have meaningful entailment relations. In applications that seek a full understanding of cross-document semantic links, there is thus 4× headroom, a significant blind spot for sentence-level NLI models.
As we observe that most natural sentences are compositional, i.e. contain more than one proposition, we argue for the need to decompose and recognize textual entailment relation at the more granular level of propositions. In other words, instead of assessing the entire hypothesis as one unit in the context of a premise, we propose to evaluate the truth value of each proposition individually, and aggregate for the truth value of the hypothesis.
Current predicate-argument based methods often fail to extract all propositions in a sentence.
The linguistic notion of a proposition refers to a single, contextualized unit of meaning conveyed in a sentence. In the NLP community, propositions are usually represented by the predicate-argument structure of a sentence. For example, resources like FrameNet (Baker et al., 1998), PropBank (Palmer et al., 2005), NomBank (Meyers et al., 2004),
among others, represent a proposition by a predicate (verbal, nominal, etc.), with arguments filling its thematic proto-roles. Such resources facilitate the development of SRL systems (Palmer et al.,
2010) for proposition extraction, with a closed, predefined set of proto-roles. To increase the coverage of propositions extracted, OpenIE formulations (Etzioni et al., 2008; Del Corro and Gemulla, 2013; Cui et al., 2018) were proposed to forgo the limits on fixed semantic roles and account for both explicit and implicit predicates. However, we observe that OpenIE systems often fail to account for the complete set of propositions in a sentence. In many cases, e.g. the *Andy Warhol's hometown* example in Table 1, arguments of a proposition might not follow the same granularity as the ones in the sentence, e.g. Andy Warhol vs *Andy Warhol Museum*.
Also, as OpenIE triples are still defined on direct predicate-argument relations, they often fail to produce a *decontextualized* (Choi et al., 2021) view of a proposition. For example, an OpenIE system would recognize the possessive relation "he has a hometown", but fail to resolve the references of he
→ *Andy Warhol*, and hometown → *Pittsburgh*.
Furthermore, Gashteovski et al. (2020) and Fatahi Bayat et al. (2022) observe that *neural* OpenIE systems tend to extract long arguments that could potentially be decomposed into more compact propositions. For textual entailment, we argue for the need to extract the complete set of propositions in their most *compact* form, due to the fact that their truth value could vary individually.
To illustrate the difference between OpenIE and our approach, we offer a list of example propositions from our proposed PROPSEGMENT dataset, and compared them to extractions from rule-based and neural OpenIE systems, in Appendix D.
## 3 Propsegment **Dataset**
We propose PROPSEGMENT, a large-scale dataset featuring clusters of topically similar news and
![3_image_0.png](3_image_0.png)
Wikipedia documents, with human annotated propositions and entailment labels.
## 3.1 **Task Definitions**
We formulate the task of recognizing propositional textual entailment into two sub-tasks (Fig. 3).
Given a hypothesis sentence and a premise document, a system is expected to (1) identify all the propositions within the hypothesis sentence, and
(2) classify the textual entailment relation of each proposition with respect to the premise document.
T1**: Propositional Segmentation** Given a sentence S with tokens [t0, t1*, ..., t*l] from a document D, a system is expected to identify the set of propositions P ⊆ 2 S, where each proposition p ∈ P is represented by a unique subset of tokens in sentence S. In other words, each proposition can be represented in sequence labeling format, per the example from Table 1. Each proposition is expected
(1) to correspond to a distinct fact that a reader learns directly from reading the given sentence,
(2) include all tokens within the sentence that are relevant to learning this fact, and (3) to not be equivalent to a conjunction of other propositions. We opt for this format as it does not require explicit annotation of the predicate-argument structure. This allows for more expressive power for propositions with implied or implicit predicates (Stern and Dagan, 2014). Also, representing each proposition as a separate sequence could effectively account for cases with shared predicate or arguments spans, and make evaluation more readily accessible.
Since the propositions, as we demonstrated earlier, do not necessarily have a unique and identifiable predicate word in the sentence, the typical inference strategy, e.g. in SRL or OpenIE, which first extracts the set of predicates, and then identifies the arguments with respect to each predicate would not work in this case. For this reason, given an input sentence, we expect a model on the task to directly output all propositions. In such *one-to-set* prediction setting, the output propositions of the model are evaluated as an unordered set.
T2**: Propositional Entailment** Given a hypothesis proposition p from document Dhyp and a whole premise document D*prem*, a system is expected to classify whether the premise entails the proposition, i.e. if the information conveyed by the proposition would be inferred true from the premise.
## 3.2 **Dataset Construction**
We sample 250 document clusters from both the Wiki Clusters (Schuster et al., 2022) and NewSHead (Gu et al., 2020) datasets. Each cluster contains the first 10 sentences of three documents, either news articles on the same event, or Wikipedia pages in different languages (machine-translated into English) of the same entity. For each sentence, we train and instruct three human raters to annotate the set of propositions, each of which represented by a unique subset of tokens from the sentence.
Conceptually, we instruct raters to include all the words that (1) pertain to the content of a proposition, and (2) are explicitly present in the sentence.
For example, if there does not exist a predicate word for a proposition in the sentence, then only include the corresponding arguments. Referents present within the sentence are included in addition to pronominal and nominal references. We provide a more detailed description of our rater guidelines and how propositions are defined with respect to various linguistic phenomena in Appendix B.
Given the three sets of propositions from the three raters for a sentence, we reconcile and select one of the three raters' responses with the highest number of propositions that the other raters also annotate. Since the exact selection of tokens used to mark a proposition may vary across different raters, we allow for fuzziness when measuring the match between two propositions. Following FitzGerald et al. (2018) and Roit et al. (2020), we use Jaccard similarity, i.e. intersection over union of the two
| Item | WIKIPEDIA | NEWS | FULL DATASET | | | | | | |
|--------------------|-------------|--------|----------------|-------|-------|-------|-------|-------|-------|
| Train | Dev | Test | Train | Dev | Test | Train | Dev | Test | |
| News Clusters | 210 | 15 | 24 | 210 | 15 | 25 | 420 | 30 | 49 |
| Documents | 630 | 45 | 72 | 630 | 45 | 75 | 1260 | 90 | 147 |
| Sentences | 4990 | 376 | 532 | 4923 | 348 | 596 | 9913 | 724 | 1128 |
| Propositions | 21191 | 1597 | 2380 | 17015 | 1344 | 2023 | 38206 | 2941 | 4403 |
| Prop.→Doc. Label # | 14083 | 1057 | 4729 | 11369 | 948 | 4008 | 25452 | 2005 | 8737 |
| ENTAIL Label % | 34.70 | 33.24 | 34.85 | 20.27 | 19.98 | 20.13 | 28.26 | 26.99 | 28.19 |
sets of selected tokens, to measure the similarity between two propositions. We say two propositions match if their Jaccard similarity is greater or equal to a threshold θ = 0.8, and align two raters' responses using unweighted bipartite matching between propositions satisfying the Jaccard threshold.
Next, for all propositions in a document, we sample one other document from the document cluster as premise, and ask three raters to label the textual entailment relation between each proposition and the premise, i.e. one of {*Entailment, Neutral,*
Contradiction}. We take the majority vote from the three as the gold entailment label. Interestingly, we observe that only 0.2% of all annotated labels from the rater are "*contradictions*". We speculate that the low presence of contraditions can in part be attributed to the difficulty in establishing reference determinacy (Bowman et al., 2015) between the premise and hypothesis. We discuss more details in Appendix C. For this reason, we choose to only consider two-way label ({*Entailment, NonEntailment*}) for the entailment task evaluation.
We create the train/dev/test splits based on clusters, so that documents in each cluster exclusively belong to only one of the splits. Overall, the dataset features 1497 documents with ∼45K propositions with entailment labels; More statistics in Table 2.
## 3.3 **Inter-Rater Agreement**
For the propositional segmentation task (T1), as the inter-rater agreement involves set-to-set comparison between the propositions annotated by a pair of raters, we report two different metrics.
First, between each pair of raters, we use the same Jaccard similarity with θ = 0.8 and find the matched set of propositions between the raters with bipartite matching for each example. We measure the coverage of the matched set by either rater with F1 score. We observe 0.57 F1 among all raters.
As comparison, we use the same metric for model evaluation and human performance estimation, as we will discuss in § 5.1. In addition, we measure the token-level agreement on the matched set of propositions among raters with Fleiss' kappa
(Fleiss, 1971), i.e. whether raters agree on whether each token should be included in a proposition or not. We observed κ = 0.63, which indicates moderate to substantial agreement among raters.
For the entailment task, (T2), we observe Fleiss' kappa = 0.84 across three-way {*Entailment, Neutral, Contradiction*} labels.
## 4 **Baseline Methods** 4.1 **Propositional Segmentation Baselines**
The key challenge with the proposition extraction task lies within the one-to-set structured prediction setting. Our one-to-set prediction format is similar to QA-driven semantic parsing such as QA-SRL
(He et al., 2015; Klein et al., 2022), as both involve generating a variable number of units of semantic content under no particular order between them.
As in propositions, there might not necessarily be a unique and identifiable predicate word associated with each proposition, extracting predicates first
(e.g. as a sequence tagging task), and later individually produce one proposition for each predicate would not be a sufficient solution in this case.
For this particular one-to-set problem setup, We introduce two classes of baseline models.
Seq2Seq: T5 (Raffel et al., 2020) When formatting a output set as a sequence, Seq2Seq models have been found to be a strong method for tasks with set outputs, as they employ chain-rules to efficiently model the joint probability of outputs
(Vinyals et al., 2016). The obvious caveat for representing set outputs as sequences is that we need an ordering for the outputs. Having a consistent ordering helps seq2seq model learn to maintain the output set structure (Vinyals et al., 2016), and the best ordering scheme is often both model- and taskspecific (Klein et al., 2022). In our experiments, we observe that sorting the propositions by the appearance order of the tokens in the sentence, i.e.
| Task/Setting | Model | Jaccard θ = 0.8 | Exact Match | | | |
|--------------------------------|----------------------------|-----------------------------|---------------|---------|-------|-------|
| Precision | Recall | F1 | Precision | Recall | F1 | |
| BERT-Base | 33.77 | 33.53 | 33.65 | 14.33 | 14.60 | 14.47 |
| BERT-Large | 34.97 | 33.42 | 34.17 | 14.61 | 14.16 | 14.38 |
| T5-Base | 54.96 | 51.93 | 53.41 | 32.87 | 31.54 | 32.19 |
| T5-Base w/ Entail. | 53.54 | 51.50 | 52.50 | 31.61 | 30.67 | 31.13 |
| T5-Large | 55.95 | 55.05 | 55.50 | 32.40 | 32.16 | 32.28 |
| T5-Large w/ Entail. | 56.27 | 55.50 | 55.89 | 31.94 | 32.11 | 32.02 |
| Human Performance | 69.63 | 64.69 | 67.07 | 44.86 | 42.93 | 43.87 |
| T1: Propositional Segmentation | Performance (2-way Class.) | Per-Label F1 (3-way Class.) | | | | |
| Accuracy | Balanced Accuracy | Entail. | Neutral | Contra. | | |
| Always Entails. | 27.89 | 50.00 | 43.62 | 0.00 | 0.00 | |
| Always Neutral | 72.10 | 50.00 | 0.00 | 83.54 | 0.00 | |
| T5-Base | 85.17 | 81.44 | 73.32 | 89.68 | 11.21 | |
| T5-Large | 91.38 | 89.75 | 84.78 | 93.98 | 20.34 | |
| Human Performance | 90.20 | 88.31 | - | - | - | |
| T2: Propositional Entailment | | | | | | |
## 4.2 **Propositional Entailment Baselines**
positions of the foremost tokens of each proposition in the sentence, yields the best performance.
We start from the pretrained T5 1.1 checkpoints from the T5x library (Roberts et al., 2022). Given a sentence input, we finetune the T5 model to output the propositions in a single sequence. For each input sentence, we sort the output propositions using the aforementioned ordering scheme, and join them by a special token [TARGET]. The spans of tokens included in each proposition is surrounded by special tokens [M] and [/M]. For instance, "[M]Alice[/M] and Bob [M]*went to the* Zoo[/M]. [TARGET] Alice and [M]*Bob went to* the Zoo.*[/M]* ". In addition, we evaluate the setting where the model is also given the premise document D*prem*, and learns to output the entailment label along with each proposition (T5 *w/ Entail.* in Table 3).
Encoder+Tagger: BERT (Devlin et al., 2019)
For comparison, we provide a simpler baseline that does not model joint probability of the output propositions. On top of the last layer an encoder model, i.e. BERT, we add k linear layers that each correspond to one output proposition. Given an input sentence, the i th linear layer produces a binary (0/1) label per token, indicating whether the token is in the i th proposition or not. k is set to be a sufficiently large number, e.g. k = 20 in our experiments. We use the label of the [CLS] token of the i th linear layer to indicate whether the i th proposition should exist in the output. For such, we follow the same ordering of the output propositions as in the seq2seq (T5) baseline setup.
We formulate the task as a sequence labeling problem, and finetune T5 model as our baseline. The inputs consist of the hypothesis proposition p with its document context Dhyp, plus the premise document D*prem*. The output is one of the three-way labels {*Entailment, Neutral, Contradiction*}. Due to low presence of contradictions, we merge the neutral and *contradiction* outputs from the model as *non-entailments* during evaluation. To ensure that the model has access to the essential context information, our task input also include the document Dhyp of the hypothesis proposition p, so that model has a decontextualized view of p when inferring its textual entailment relation with D*prem*.
## 5 **Experiments And Results** 5.1 **Evaluation Metrics**
Propositional Segmentation We measure the precision and recall between the set of predicted and gold propositions for a given sentence. As the set of gold propositions do not follow any particular ordering, we first produce a bipartite matching between them using the Hungarian algorithm (Kuhn, 1955). We again use the Jaccard similarity over θ = 0.8 as a fuzzy match between two propositions (§ 3.2). We also use exact match, an even more restrictive measure where two propositions match if and only if they have the exact same tokens. We report the macro-averaged precision and recall over sentences in the test set.
Propositional Entailment We report the baseline performance under two-way classification re-
![6_image_0.png](6_image_0.png)
Table 4: Cross-domain (i.e. train on NEWS → test on WIKI, and train on WIKI → test on NEWS) generalization results of T5-large on the segmentation (T1) task.
sults in accuracy. We also report the balanced accuracy, i.e. average of true positive and true negative rate, due to label imbalance (Table 2). To understand the per-label performance, we also report the F1 score w.r.t. each of the three-way label.
## 5.2 **Baseline Results**
Table 3 shows the evaluation results for the segmentation (T1) and entailment task (T2) respectively.
For the segmentation task (T1), the seq2seq T5 model setup yields superior performance compared to the simpler encoder+tagger BERT setup. As the encoder+tagger setup predicts each proposition individually, and does not attend on other propositions during inference, we observe that the model predicts repeated/redundant propositions in > 20%
of the input sentences. In the seq2seq T5 setup, the repetition rate is < 1%. For both setups, we remove the redundant outputs as a post processing step. We also evaluate the multi-task setup (i.e.
T5 *w/ Entail.* in Table 3) where the model jointly learns the entailment label with each proposition, and observe no significant improvements. For the entailment task (T2), we see that T5-Large yields the best overall performance. We observe that the performance with respect to the *entailment* label is lower compared to the *neutral* label.
For both tasks, we estimate the averaged human expert performance by comparing annotations from three of the authors to ground truth on 50 randomly sampled examples from the dataset. We observe that for the segmentation task T1, we observe that the human performance increases after reconciling and selecting the ground truth response
(0.57 → 0.67 F1). We see that there remains a sizable gap between the best model, T5-Large, and human performance. On the entailment task T2, T5-Large exceeds human performance, which is not uncommon among language inference tasks of similar classification formats (Wang et al., 2019).
Document: The incident happened near Dr Gray's Hospital shortly after 10:00. The man was taken to the hospital with what police said were serious but not life-threatening injuries. The A96 was closed in the area for several hours, but it has since reopened. Summary w/ human labeled hallucinated spans:
A man has been taken to hospital following a one-vehicle crash on the A96 in Aberdeenshire. Predicted propositions (blue) and entailment labels
\#1: A man has been taken to hospital following a onevehicle crash on the A96 in Aberdeenshire. ✔
\#2: A man has been taken to hospital following a onevehicle crash on the A96 in Aberdeenshire. ✗
\#3: A man has been taken to hospital following a onevehicle crash on the A96 in Aberdeenshire. ✗ \#4: A man has been taken to hospital following a onevehicle crash on the A96 in Aberdeenshire. ✗
Predicted hallucinated spans (union of ✗- union of ✔)
A man has been taken to hospital following a one-vehicle crash on the A96 in Aberdeenshire.
Table 5: An example model generated summary on the XSum dataset, with human-annotated hallucination spans from Maynez et al. (2020). We show that we can infer the hallucinated spans from the set of four propositions and their entailment labels (entail=✔, *notentail*=✗), predicted by our T5-Large models. More examples can be found in Appendix E
## 5.3 **Cross-Domain Generalization**
On the propositional segmentation (T1) task, we evaluate the how the best baseline model generalizes across the Wikipedia (Wiki) and News domains. Table 4 shows the results of T5-Large models finetuned on data from each domain, and evaluated on the test split of both domains.
When applying a model trained on Wiki, we see a larger drop in performance when tested on News, as the News domain features more syntactic and stylistic variations compared to the Wiki domain.
## 6 **Analysis And Discussion**
We exemplify the utilities of our propositional segmentation and entailment framework, which we refer to as PropNLI, through the lens of two downstream use cases, e.g. summary hallucination detection (§ 6.1), and document-level NLI w/ variablelength hypotheses (§ 6.2).
## 6.1 **Application: Hallucination Detection**
We first look at the task of summary hallucination detection, i.e. given a summary of a source document, identify whether the summary's content is faithful to the document. Naturally the task can be represented as a NLI problem, and NLI systems
![7_image_0.png](7_image_0.png)
have been shown effective on the task (Kryscinski et al., 2020; Chen et al., 2021). As summaries can be long and compositional, recognizing partial entailment, and identifying which part(s) of a summary is hallucinated becomes important (Goyal and Durrett, 2020; Laban et al., 2022).
To show that PropNLI can be used for hallucination detection, we experiment on the model generated summaries on the XSum dataset (Narayan et al., 2018), where Maynez et al. (2020) provide human annotations of the sets of hallucinated spans
(if they exist) in the summaries. Table 5 illustrates our idea. If a proposition in a summary is *entailed* by the document, then all spans covered by the proposition are faithful. Otherwise, *some* spans would likely contain *hallucinated* information.
Following such intuitions, we first evaluate the performance of our method in zero-shot settings as a hallucination classifier , i.e. binary classification for whether a summary is hallucinated or not. For baseline comparison, we use a T5-large model finetuned on MNLI (Williams et al., 2018) to classify a full summary as entailed (→ *faithful*) or not (→
hallucinated). As ˜89% of the summaries annotated by Maynez et al. (2020) are hallucinated, we again adopt balanced accuracy (§ 5.1) as the metric.
On 2500 examples, our method achieved 61.68%
balanced accuracy, while MNLI achieved 58.79%.
Next, we study whether the entailment labels of propositions can be composed to detect hallucinated spans in a summary. As in Table 5, we take the union of the spans in *non-entailed* propositions, and exclude the spans that has appeared in *entailed* propositions. The intuition is that the hallucinated information likely only exists in the non-entailed propositions , but not the entailed ones.
We evaluate hallucinated span detection as a token classification task. For each summary, we evaluate the precision and recall of the *faithful* and hallucinated set of predicted tokens respectively against the human-labeled ground truth set. We report the macro-averaged precision, recall and F1
![7_image_1.png](7_image_1.png)
score over all 2,500 summaries. We compare our method to a T5-Large model finetuned on MNLI,
where we label all tokens as *faithful* if the summary is predicted to be *entailed*, and all tokens as *hallucinated* otherwise. We report the performance with respect to each of the two labels in Table 6. As the MNLI model don't distinguish partial entailment from non-entailment cases, it predicts more tokens to be hallucinated, and thus having low precision and high recall on the hallucinated tokens, and vice versa. On the other hand, we observe our model can be used to detect the nuance between faithful and hallucinated tokens with good and more balanced performance for both cases. Table 5 shows one example summary and PropNLI's predictions, and we include more examples in Appendix E.
## 6.2 **Proposition-Level** → Sentence/Paragraph-Level Entailment
We would like to see whether proposition-level entailment labels can potentially be *composed* to explain sentence/paragraph-level NLI predictions.
Given a hypothesis sentence/paragraph and a premise, our PropNLI framework takes three steps.
First we segment the hypothesis into propositions. For each proposition, we infer its entailment relation with the premise. In cases where multiple propositions exist in the hypothesis, the proposition-level entailment labels can be aggregated to obtain the entailment label for the *entire* hypothesis, similar to ideas presented in Stacey et al. (2022). As a starting point, we assume logical conjunction as the aggregation function, and hypothesize that this will offer a more fine-grained and explainable way of conducting NLI inference.
To demonstrate the utility of the idea, we conduct a case study on DocNLI (Yin et al., 2021),
which features premise and hypothesis of different length, and so varying number and compositions of propositions. We take the baseline T5-
Large segmentation and entailment models respectively, and use logical conjunction to aggregate the proposition-level entailment prediction. We compare PropNLI in a zero-shot setting against the T5-Large MNLI model. The MNLI model takes the entire hypothesis and premise and input without any segmentation or decomposition.
The results are shown in Figure 4. We take the development set of DocNLI and split examples into buckets according to number of tokens in the hypothesis. We examine the zero-shot performance of the PropNLI setup versus the finetuned MNLI
model. We observe that with shorter hypotheses
(< 100 tokens), the two setups demonstrated similar performance, as the hypothesis length is similar to the distribution of MNLI training set (avg. 21.73 tokens ±30.70). As the length of the hypothesis increases, the performance of MNLI model starts to drop, while PropNLI's performance remains relatively stable. Such observations suggest the potential of using the PropNLI framework to describe the textual entailment relations between a pair of premise and hypothesis in a more precise and finegrained manner. In the realistic case where input hypotheses are compositional, the PROPSEGMENT
present an opportunity for developing more generalizable NLI models and solutions.
## 7 **Conclusion**
In this paper, we presented PROPSEGMENT, the first large-scale dataset for studying propositionlevel segmentation and entailment. We demonstrate that segmenting a text expression into propositions, i.e. atomic units of meanings, and assessing their truth values would provide a finer-grained characterization of the textual entailment relation between two pieces of text. Beyond NLI/RTE tasks, we hypothesize that proposition-level segmentation might be helpful in similar ways for other text classification tasks as well. We hope that PROPSEG-MENT will serve as a starting point, and pave a path for research forward along the line.
## Limitations
Since the PROPSEGMENT dataset feature entailment labels for all propositions in a document, the label distribution are naturally imbalanced, which would potentially pose challenge for modeling. We observe low presence of contradiction examples in our dataset construction process, which could be a limiting factor for the utility of the dataset. Unlike previous NLI datasets (Bowman et al., 2015; Williams et al., 2018), we speculate that reference determinacy, i.e. whether the hypothesis and premise refer to the same scenario at the same time, cannot be certainly guaranteed and safely assumed in our case, which in part leads to low presence of contradictions during annotation. We offer a detailed discussion on the implications of reference determinacy and contradictions in Appendix C. We leave the exploration on *natural* contradictions for future work.
As the annotation complexity and cost scales quadratically w.r.t. the number of propositions in a document, we truncate the documents in PROPSEG-MENT to the first ten sentences of the original document.
## Ethical Considerations
In the proposition-level entailment task (T2), the inference of the entailment relation between a premise document and a hypothesis proposition uses the *assumption* that the premise document is true. The assumption is common to NLI datasets
(Dagan et al., 2005; Bowman et al., 2015; Williams et al., 2018), and is necessary for the task's structure. With the documents in PROPSEGMENT, we make the assumption only for the experimental purpose of T2, and make no claim about the actual veracity of the premise documents.
## Acknowledgements
We thank Michael Collins, Corinna Cortes, Paul Haahr, Ilya Kornakov, Ivan Kuznetsov, Annie Louis, Don Metzler, Jeremiah Milbauer, Pavel Nalivayko, Fernando Pereira, Sandeep Tata, Yi Tay, Andrew Tomkins, and Victor Zaytsev for insightful discussions, suggestions, and support. We are grateful to the annotators for their work in creating PROPSEGMENT.
## References
Martin Abadi, Paul Barham, Jianmin Chen, Zhifeng Chen, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Geoffrey Irving, Michael Isard, et al. 2016. TensorFlow: a system for LargeScale machine learning. In *12th USENIX symposium on operating systems design and implementation (OSDI 16)*, pages 265–283.
Collin F Baker, Charles J Fillmore, and John B Lowe.
1998. The Berkeley FrameNet project. In *COLING* 1998 Volume 1: The 17th International Conference on Computational Linguistics.
Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. 2015. A large annotated corpus for learning natural language inference.
In *Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing*, pages 632–642, Lisbon, Portugal. Association for Computational Linguistics.
Jifan Chen, Aniruddh Sriram, Eunsol Choi, and Greg Durrett. 2022. Generating literal and implied subquestions to fact-check complex claims. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 3495–3516, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
Sihao Chen, Fan Zhang, Kazoo Sone, and Dan Roth.
2021. Improving faithfulness in abstractive summarization with contrast candidate generation and selection. In *Proceedings of the 2021 Conference of* the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 5935–5941, Online. Association for Computational Linguistics.
Eunsol Choi, Jennimaria Palomaki, Matthew Lamm, Tom Kwiatkowski, Dipanjan Das, and Michael Collins. 2021. Decontextualization: Making sentences stand-alone. Transactions of the Association for Computational Linguistics, 9:447–461.
Lei Cui, Furu Wei, and Ming Zhou. 2018. Neural open information extraction. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers),
pages 407–413, Melbourne, Australia. Association for Computational Linguistics.
I. Dagan and O. Glickman. 2004. Probabilistic textual entailment: Generic applied modeling of language variability. In *Learning Methods for Text Understanding and Mining*.
Ido Dagan, Oren Glickman, and Bernardo Magnini.
2005. The PASCAL recognising textual entailment challenge. In *Machine learning challenges workshop*, pages 177–190. Springer.
Marie-Catherine de Marneffe, Anna N. Rafferty, and Christopher D. Manning. 2008. Finding contradictions in text. In *Proceedings of ACL-08: HLT*, pages 1039–1047, Columbus, Ohio. Association for Computational Linguistics.
Luciano Del Corro and Rainer Gemulla. 2013.
ClausIE: clause-based open information extraction. In *Proceedings of the 22nd international conference* on World Wide Web, pages 355–366.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers),
pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics.
Oren Etzioni, Michele Banko, Stephen Soderland, and Daniel S Weld. 2008. Open information extraction from the web. *Communications of the ACM*,
51(12):68–74.
Farima Fatahi Bayat, Nikita Bhutani, and H. Jagadish.
2022. CompactIE: Compact facts in open information extraction. In *Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies*, pages 900–910, Seattle, United States. Association for Computational Linguistics.
Nicholas FitzGerald, Julian Michael, Luheng He, and Luke Zettlemoyer. 2018. Large-scale QA-SRL parsing. In *Proceedings of the 56th Annual Meeting of* the Association for Computational Linguistics (Volume 1: Long Papers), pages 2051–2060, Melbourne, Australia. Association for Computational Linguistics.
Joseph L Fleiss. 1971. Measuring nominal scale agreement among many raters. *Psychological bulletin*,
76(5):378.
Kiril Gashteovski, Rainer Gemulla, Bhushan Kotnis, Sven Hertling, and Christian Meilicke. 2020. On aligning OpenIE extractions with knowledge bases:
A case study. In *Proceedings of the First Workshop on Evaluation and Comparison of NLP Systems*, pages 143–154, Online. Association for Computational Linguistics.
Tanya Goyal and Greg Durrett. 2020. Evaluating factuality in generation with dependency-level entailment. In *Findings of the Association for Computational Linguistics: EMNLP 2020*, pages 3592–3603, Online. Association for Computational Linguistics.
Xiaotao Gu, Yuning Mao, Jiawei Han, Jialu Liu, You Wu, Cong Yu, Daniel Finnie, Hongkun Yu, Jiaqi Zhai, and Nicholas Zukoski. 2020. Generating representative headlines for news stories. In *Proceedings of The Web Conference 2020*, pages 1773–
1784.
Luheng He, Mike Lewis, and Luke Zettlemoyer. 2015.
Question-answer driven semantic role labeling: Using natural language to annotate natural language.
In *Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing*, pages 643–653, Lisbon, Portugal. Association for Computational Linguistics.
Paul R Kingsbury and Martha Palmer. 2002. From TreeBank to PropBank. In *LREC*, pages 1989–1993.
Ayal Klein, Eran Hirsch, Ron Eliav, Valentina Pyatkin, Avi Caciularu, and Ido Dagan. 2022. QASem parsing: Text-to-text modeling of QA-based semantics.
arXiv preprint arXiv:2205.11413.
Wojciech Kryscinski, Bryan McCann, Caiming Xiong, and Richard Socher. 2020. Evaluating the factual consistency of abstractive text summarization. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP),
pages 9332–9346, Online. Association for Computational Linguistics.
Harold W Kuhn. 1955. The Hungarian method for the assignment problem. *Naval research logistics quarterly*, 2(1-2):83–97.
Philippe Laban, Tobias Schnabel, Paul N Bennett, and Marti A Hearst. 2022. SummaC: Re-visiting NLIbased models for inconsistency detection in summarization. *Transactions of the Association for Computational Linguistics*, 10:163–177.
Omer Levy, Torsten Zesch, Ido Dagan, and Iryna Gurevych. 2013. Recognizing partial textual entailment. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 451–455, Sofia, Bulgaria. Association for Computational Linguistics.
Joshua Maynez, Shashi Narayan, Bernd Bohnet, and Ryan McDonald. 2020. On faithfulness and factuality in abstractive summarization. In *Proceedings* of the 58th Annual Meeting of the Association for Computational Linguistics, pages 1906–1919, Online. Association for Computational Linguistics.
Adam Meyers, Ruth Reeves, Catherine Macleod, Rachel Szekely, Veronika Zielinska, Brian Young, and Ralph Grishman. 2004. The NomBank project:
An interim report. In *Proceedings of the workshop* frontiers in corpus annotation at hlt-naacl 2004, pages 24–31.
Shashi Narayan, Shay B. Cohen, and Mirella Lapata.
2018. Don't give me the details, just the summary!
topic-aware convolutional neural networks for extreme summarization. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1797–1807, Brussels, Belgium. Association for Computational Linguistics.
Martha Palmer, Daniel Gildea, and Paul Kingsbury.
2005. The proposition bank: An annotated corpus of semantic roles. *Computational linguistics*,
31(1):71–106.
Martha Palmer, Daniel Gildea, and Nianwen Xue. 2010.
Semantic role labeling. *Synthesis Lectures on Human Language Technologies*, 3(1):1–103.
Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. *Journal of Machine Learning Research*, 21:1–67.
Adam Roberts, Hyung Won Chung, Anselm Levskaya, Gaurav Mishra, James Bradbury, Daniel Andor, Sharan Narang, Brian Lester, Colin Gaffney, Afroz Mohiuddin, et al. 2022. Scaling up models and data with t5x and seqio. arXiv preprint arXiv:2203.17189.
Paul Roit, Ayal Klein, Daniela Stepanov, Jonathan Mamou, Julian Michael, Gabriel Stanovsky, Luke Zettlemoyer, and Ido Dagan. 2020. Controlled crowdsourcing for high-quality QA-SRL annotation. In *Proceedings of the 58th Annual Meeting of the* Association for Computational Linguistics, pages 7008–7013, Online. Association for Computational Linguistics.
Tal Schuster, Sihao Chen, Senaka Buthpitiya, Alex Fabrikant, and Donald Metzler. 2022. Stretching sentence-pair NLI models to reason over long documents and clusters. In *Findings of the Association* for Computational Linguistics: EMNLP 2022, pages 394–412, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
Tal Schuster, Adam Fisch, and Regina Barzilay. 2021.
Get your vitamin C! robust fact verification with contrastive evidence. In *Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies*, pages 624–643, Online. Association for Computational Linguistics.
Joe Stacey, Pasquale Minervini, Haim Dubossarsky, and Marek Rei. 2022. Logical reasoning with span predictions: Span-level logical atoms for interpretable and robust nli models. In The Conference on Empirical Methods in Natural Language Processing (EMNLP).
Gabriel Stanovsky, Julian Michael, Luke Zettlemoyer, and Ido Dagan. 2018. Supervised open information extraction. In *Proceedings of the 2018 Conference* of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 885–
895, New Orleans, Louisiana. Association for Computational Linguistics.
Asher Stern and Ido Dagan. 2014. Recognizing implied predicate-argument relationships in textual inference. In *Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)*, pages 739–744, Baltimore, Maryland. Association for Computational Linguistics.
Oriol Vinyals, Samy Bengio, and Manjunath Kudlur.
2016. Order matters: Sequence to sequence for sets. In Proceedings of the International Conference on Learning Representations.
Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R Bowman. 2019. Glue: A multi-task benchmark and analysis platform
for natural language understanding. In 7th International Conference on Learning Representations, ICLR 2019.
Adina Williams, Nikita Nangia, and Samuel Bowman.
2018. A broad-coverage challenge corpus for sentence understanding through inference. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1112–1122, New Orleans, Louisiana. Association for Computational Linguistics.
Wenpeng Yin, Dragomir Radev, and Caiming Xiong.
2021. DocNLI: A large-scale dataset for documentlevel natural language inference. In Findings of the Association for Computational Linguistics: ACLIJCNLP 2021, pages 4913–4922, Online. Association for Computational Linguistics.
## A **Model Implementation**
T5 We use T5 1.1 checkpoints from the T5x library (Roberts et al., 2022), with Flaxformer2implementation. For all sizes of T5 model and all tasks, we finetune the model for three epoch, with 1e − 3 learning rate, 0.1 dropout rate, batch size of 128. We train the models on 16 TPU v3 slices.
BERT We use the BERT English uncased models from Tensorflow (Abadi et al., 2016), in large
(24 layers, 16 attention heads, 1024 max sequence length) and base (12 layers, 12 attention heads, 768 max sequence length) sizes. For both sizes, we finetune the model for five epoch, with 1e − 5 learning rate, 0.1 dropout rate, batch size of 16. We train the models on 8 TPU v3 slices.
## B **Annotation Guidelines** B.1 **Segmentation Annotation Guidelines**
There is no unequivocally unique definition for precisely how to segment an English sentence in the context of a document into propositions defined as token subsets, due to a variety of complex language phenomena. Our raters were instructed to follow the following overall guidelines for the segmentation task:
1. Each proposition is expected to correspond to a distinct fact that a reader learns directly from reading the given sentence.
(a) The raters are instructed to focus on the text's most literal *denotation*, rather than drawing further inferences from the text based on world knowledge, external knowledge, or common sense.
(b) The raters are instructed to consider *factivity*, marking only propositions that, in their judgement, the author intends the reader to take as factual from reading the sentence.
(c) With regard to quotes, raters are asked to estimate the author's intent, including the proposition quoted when the reader is expected to take it as factual, and/or the proposition of the quote itself having been uttered if the reader is expected to learn that a speaker uttered that quote.
(d) The raters are instructed to omit text that are clearly non-factual, such as rhetorical 2https://github.com/google/flaxformer
flourishes or first-person account of an article author's emotional response to the topic. This rule is specific to the news and Wikipedia domains, since in other domains of prose, first-person emotions may well be part of the intended informational payload.
2. Each proposition should include all tokens within the sentence that are relevant to learning this fact.
(a) Specifically, the raters are asked to include any tokens in the same sentence that are antecedents of pronouns or other endophora in the proposition, or relevant bridging references.
(b) Raters are asked to ignore punctuation, spacing, and word inflections when selecting tokens, though a number of other minutiae, such as whether to include articles, are left unspecified in the rater instructions.
3. Choose the simplest possible propositions, so that no proposition is equivalent to a conjunction of the other propositions, and so that the union of all of the sentence's proposition gives us all the information a reader learns from the sentence.
The raters are also asked to omit propositions from any text that doesn't constitute well-formed sentences, typically arising from parsing errors or from colloquialisms.
Note that the resulting subsets of tokens do not, generally, constitute well-formed English sentences when concatenated directly, but can, in our ad hoc trials, easily be reconstituted into stand-alone sentences by a human reader.
## B.2 **Entailment Annotation Guidelines**
For the propositional entailment task, our instructions are somewhat similar to the RTE task (Dagan and Glickman, 2004), but specialized to the proposition level.
The raters are asked to read the premise document and decide whether a specific hypothesis proposition is entailed by it, contradicted, or neither. In the first two cases, the raters are asked to mark a proposition in the premise document that most closely supports the hypothesis proposition, using the same definition of proposition as above.
The interface nudges the raters to select one of the propositions marked by the segmentation rater, but allows the entailment rater to create a new proposition as well. Note that the choice of a specific supporting proposition is sometimes not well defined.
To judge entailment, the raters are asked "from reading just the premise document, do we learn that the hypothesis proposition is true, learn that it's false, or neither?" More specifically, the raters are asked:
1. To consider the full document of the hypothesis as the context of the hypothesis proposition, and the full premise document.
2. To allow straightforward entailment based on
"common sense or widely-held world knowledge", but otherwise avoid entailment labels whenever "significant analysis" (any complex reasoning, specialized knowledge, or subjective judgement) is required to align the two texts.
3. To assume that the two documents were written in the same coarse spatiotemporal context
- same geographical area, and the same week.
Raters have the option of marking that they don't understand the premise and/or the hypothesis and skipping the question.
## C **Reference Determinacy And** Contradictions
The PROPSEGMENT dataset is constructed in document-to-document comparison settings. Even though the document clusters are sampled so that documents in a cluster target the same event or event, the documents typically have different focus.
Besides the factual information, which are mostly consistent across documents, the focus or specific perspective of each document varies largely, which is in part why we observe very few contradictions.
Apart from such, We speculate that the low presence of contradictions can also be in part attributed to the difficulty in establishing reference determinacy, i.e. whether the entities and events described in a hypothesis can be assumed to refer to the same ones or happening at the same point in the premise.
To illustrate the importance of this, consider the following example from SNLI (Bowman et al., 2015).
Premise: A black race car starts up in front of a crowd of people.
Hypothesis: A man is driving down a lonely road.
In SNLI, reference determinacy is assumed to be true. In other words, the human raters assume that the scenario described in the premise and hypothesis happens in the same context at the same time point. Therefore, the example pair is labeled as contradiction, as "lonely road" contradicts "a crowd of people" if we assume both happen on the same road. Without such assumption, the example would likely be labeled as *neutral*, since there is no extra context that would indicate the two events happen in the same context.
In reality, reference determinacy is often difficult to establish with certainty. Unlike existing NLI/RTE datasets (Dagan et al., 2005; Bowman et al., 2015; Williams et al., 2018), in the creation process of PROPSEGMENT, we do not assume reference determinacy between the hypothesis proposition and premise document, but rather relay the judgement to human raters by reading context information presented in the documents. We observe that it is often hard to tell if a specific proposition within a document can establish reference determinacy with the other document, unless the proposition describes a property that is stationary with respect to time. For this reason, most contradictions, among the few that exist in our dataset, are factual statements. Here is an example from the development split.
Premise: ... The team was founded in 1946 as a founding member of the AllAmerica Football Conference (AAFC)
and joined the NFL in 1949 when the leagues merged..
Hypothesis: The 49ers have been members of the NFL since the AAFC and National Football League (NFL) merged in 1950...
We view the lack of contradictions as a potential limitation for the dataset for practical purposes. We argue for the need to circumscribe the exact definition of contradiction (from the practical perspective) when reference determinacy cannot be simply assumed. We leave this part for future work.
## D **Example Propositions From Openie** Vs. Propsegment
To illustrate the difference between how we define propositions in PROPSEGMENT, versus OpenIE
formulations, we include a few examples sentences with propositions in PROPSEGMENT in Table 7 and 8, and compare propositions extracted with ClausIE, a rule-based OpenIE model (Del Corro and Gemulla, 2013), and a neural Bi-LSTM model from Stanovsky et al. (2018).
## E **Xsum Hallucination Detection -** Examples
Table 9 and 10 show two example documents, with propositions and the inferred hallucinated spans in model-generated and gold summaries by our PropNLI model. We compare the predictions to the annotations of hallucinated span provided by Maynez et al. (2020).
Sentence: The 82nd NFL Draft took place from April 27-29, 2017 in Philadelphia.
PROPSEGMENT
\#1: The 82nd NFL Draft took place from April 27-29, 2017 in Philadelphia. \#2: The 82nd NFL Draft took place from April 27-29, 2017 in Philadelphia. ClausIE \#1: (The 82nd NFL Draft, took place, from April 27-29, 2017 in Philadelphia)
\#2: (The 82nd NFL Draft, took place, from April 27-29, 2017)
Neural Bi-LSTM OIE *(Splitting each modifier, i.e. ARGM)*
\#1: (The 82nd NFL Draft, took, place, from April 27-29, 2017)
\#2: (The 82nd NFL Draft, took, place, in Philadelphia)
Sentence: She has also appeared in films such as Little Women (1994), The Hours (2002), Self Defense (1997), Les Miserables (1998) and Orson Welles y yo (2009).
PROPSEGMENT
\#1: She has also appeared in films such as Little Women (1994), The Hours (2002), Self Defense (1997), Les Miserables
(1998) and Orson Welles y yo (2009).
\#2: She has also appeared in films such as Little Women (1994), The Hours (2002), Self Defense (1997), Les Miserables
(1998) and Orson Welles y yo (2009).
\#3: She has also appeared in films such as Little Women (1994), The Hours (2002), Self Defense (1997), Les Miserables
(1998) and Orson Welles y yo (2009).
\#4: She has also appeared in films such as Little Women (1994), The Hours (2002), Self Defense (1997), Les Miserables
(1998) and Orson Welles y yo (2009).
\#5: She has also appeared in films such as Little Women (1994), The Hours (2002), Self Defense (1997), Les Miserables
(1998) and Orson Welles y yo (2009).
\#6: She has also appeared in films such as Little Women (1994), The Hours (2002), Self Defense (1997), Les Miserables
(1998) and Orson Welles y yo (2009).
\#7: She has also appeared in films such as Little Women (1994), The Hours (2002), Self Defense (1997), Les Miserables
(1998) and Orson Welles y yo (2009).
\#8: She has also appeared in films such as Little Women (1994), The Hours (2002), Self Defense (1997), Les Miserables
(1998) and Orson Welles y yo (2009).
\#9: She has also appeared in films such as Little Women (1994), The Hours (2002), Self Defense (1997), Les Miserables
(1998) and Orson Welles y yo (2009).
\#10: She has also appeared in films such as Little Women (1994), The Hours (2002), Self Defense (1997), Les Miserables
(1998) and Orson Welles y yo (2009).
ClausIE \#1: (She, has appeared, in films such as Little Women also)
\#2: (She, has appeared, in films such as The Hours also)
\#3: (She, has appeared, in films such as Self Defense also) \#4: (She, has appeared, in films such as Les Miserables also)
\#5: (She, has appeared, in films such as Orson Welles y yo also)
\#6: (She, has appeared, in films such as Little Women) \#7: (She, has appeared, in films such as The Hours)
\#8: (She, has appeared, in films such as Self Defense)
\#9: (She, has appeared, in films such as Les Miserables) \#10: (She, has appeared, in films such as Orson Welles y yo) \#11: (Little Women, is, 1994) \#12: (The Hours, is, 1994)
\#13: (Self Defense, is, 1994)
\#14: (Les Miserables, is, 1994) \#15: (Orson Welles y yo, is, 1994)
\#16: (The Hours, is, 2002)
\#17: (Self Defense, is, 1997) \#18: (Les Miserables, is, 1998)
\#19: (Orson Welles y yo, is, 2009) Neural Bi-LSTM OIE \#1: (She, appeared, in films such as Little Women (1994), The Hours (2002), Self Defense (1997), Les Miserables (1998)
and Orson Welles y yo (2009))
Table 7: Comparison of propositions in PROPSEGMENT with extractions with ClausIE (Del Corro and Gemulla, 2013), and the neural Bi-LSTM OIE model from Stanovsky et al. (2018).
Sentence: The Andy Warhol Museum in his hometown, Pittsburgh, Pennsylvania, contains an extensive permanent collection of art.
PROPSEGMENT
\#1: The Andy Warhol Museum in his hometown, Pittsburgh, Pennsylvania, contains an extensive permanent collection of art.
\#2: The Andy Warhol Museum in his hometown, Pittsburgh, Pennsylvania, contains an extensive permanent collection of art.
\#3: The Andy Warhol Museum in his hometown, Pittsburgh, Pennsylvania, contains an extensive permanent collection of art. ClausIE \#1: (his, has, hometown) \#2: (his hometown, is, Pittsburgh Pennsylvania)
\#3: (The Andy Warhol Museum in his hometown, contains, an extensive permanent collection of art)
Neural Bi-LSTM OIE \#1: (The Andy Warhol Museum in his hometown Pittsburgh Pennsylvania, contains, an extensive permanent collection of art)
Sentence: The Cleveland Cavaliers got the first choice in the lottery, which was used on 20-year-old forward Anthony Bennett, a freshman from the University of Nevada.
PROPSEGMENT
\#1: The Cleveland Cavaliers got the first choice in the lottery, which was used on 20-year-old forward Anthony Bennett, a freshman from the University of Nevada.
\#2: The Cleveland Cavaliers got the first choice in the lottery, which was used on 20-year-old forward Anthony Bennett, a freshman from the University of Nevada.
\#3: The Cleveland Cavaliers got the first choice in the lottery, which was used on 20-year-old forward Anthony Bennett, a freshman from the University of Nevada.
\#4: The Cleveland Cavaliers got the first choice in the lottery, which was used on 20-year-old forward Anthony Bennett, a freshman from the University of Nevada. \#5: The Cleveland Cavaliers got the first choice in the lottery, which was used on 20-year-old forward Anthony Bennett, a freshman from the University of Nevada.
\#6: The Cleveland Cavaliers got the first choice in the lottery, which was used on 20-year-old forward Anthony Bennett, a freshman from the University of Nevada.
ClausIE \#1: (The Cleveland Cavaliers, got, the first choice in the lottery)
\#2: (the lottery, was used, on 20-year-old forward Anthony Bennett)
\#3: (Anthony Bennett, is, a freshman from the University of Nevada)
Neural Bi-LSTM OIE \#1: (The Cleveland Cavaliers, got, the first choice in the lottery, which was used on 20-year-old forward Anthony Bennett, a freshman from the University of Nevada.) \#2: (the lottery, was used, on 20-year-old forward Anthony Bennett, a freshman from the University of Nevada.)
Table 8: (Cont.) Comparison of propositions in PROPSEGMENT with extractions with ClausIE (Del Corro and Gemulla, 2013), and the neural Bi-LSTM OIE model from Stanovsky et al. (2018).
Document: The incident happened near Dr Gray's Hospital shortly after 10:00. The man was taken to the hospital with what police said were serious but not life-threatening injuries. The A96 was closed in the area for several hours, but it has since reopened.
Summary from **BertS2S**
A man has been taken to hospital following a one-vehicle crash on the A96 in Aberdeenshire. Predicted propositions (blue) and entailment labels
\#1: A man has been taken to hospital following a one-vehicle crash on the A96 in Aberdeenshire. ✔ \#2: A man has been taken to hospital following a one-vehicle crash on the A96 in Aberdeenshire. ✗ \#3: A man has been taken to hospital following a one-vehicle crash on the A96 in Aberdeenshire. ✗
\#4: A man has been taken to hospital following a one-vehicle crash on the A96 in Aberdeenshire. ✗
Predicted hallucinated spans (union of ✗- union of ✔)
A man has been taken to hospital following a one-vehicle crash on the A96 in Aberdeenshire.
Summary from **TConvS2S**
a man has been taken to hospital after being hit by a car in Moray.
Predicted propositions (blue) and entailment labels
\#1: a man has been taken to hospital after being hit by a car in Moray. ✔
\#2: a man has been taken to hospital after being hit by a car in Moray. ✗
Predicted hallucinated spans (union of ✗- union of ✔)
a man has been taken to hospital after being hit by a car in Moray.
Gold Summary from the XSum dataset A cyclist has suffered serious head injuries after a collision with a car in Elgin.
Predicted propositions (blue) and entailment labels
\#1: A cyclist has suffered serious head injuries after a collision with a car in Elgin. ✗ \#2: A cyclist has suffered serious head injuries after a collision with a car in Elgin. ✗
\#3: A cyclist has suffered serious head injuries after a collision with a car in Elgin. ✗
Predicted hallucinated spans (union of ✗- union of ✔)
A cyclist has suffered serious head injuries after a collision with a car in Elgin.
Summary from **PTGen**
A man has been taken to hospital after being hit by a car in the A96 area of Glasgow. Predicted propositions (blue) and entailment labels
\#1: A man has been taken to hospital after being hit by a car in the A96 area of Glasgow. ✔ \#2: A man has been taken to hospital after being hit by a car in the A96 area of Glasgow. ✗ \#3: A man has been taken to hospital after being hit by a car in the A96 area of Glasgow. ✗
Predicted hallucinated spans (union of ✗- union of ✔)
A man has been taken to hospital after being hit by a car in the A96 area of Glasgow Summary from **TranS2S**
A man has been taken to hospital after a two-vehicle crash on the A96 in County Antrim.
Predicted propositions (blue) and entailment labels
\#1: A man has been taken to hospital after a two-vehicle crash on the A96 in County Antrim. ✔
\#2: A man has been taken to hospital after a two-vehicle crash on the A96 in County Antrim. ✗ \#3: A man has been taken to hospital after a two-vehicle crash on the A96 in County Antrim. ✗
\#4: A man has been taken to hospital after a two-vehicle crash on the A96 in County Antrim. ✗
Predicted hallucinated spans (union of ✗- union of ✔)
A man has been taken to hospital after a two-vehicle crash on the A96 in County Antrim.
Table 9: More example of model generated summaries on the XSum dataset, with human-annotated hallucination spans from Maynez et al. (2020). For each document, Maynez et al. (2020) provide summaries and hallucination annotations from 5 different summarization systems. We randomly sample documents and show our model's predictions for all 5 summaries here.
Document: Dervite, 28, made 14 appearances last season to help Wanderers finish second in League One and secure promotion. The French centre-back joined Bolton from Charlton in 2014 and has made 83 appearances in all competitions.
"Dorian was a bit of a forgotten man last year but came in and made an excellent contribution towards the end of the campaign," manager Phil Parkinson told the club website. Dervite follows David Wheater, Gary Madine and Jem Karacan in signing new contracts with Bolton, following their promotion to the Championship.
Summary from **BertS2S**
Bolton defender Dorian Dervite has signed a new two-year contract with the championship club.
Predicted propositions (blue) and entailment labels
\#1: Bolton defender Dorian Dervite has signed a new two-year contract with the championship club. ✔ \#2: Bolton defender Dorian Dervite has signed a new two-year contract with the championship club. ✗
Predicted hallucinated spans (union of ✗- union of ✔)
Bolton defender Dorian Dervite has signed a new two-year contract with the championship club.
Summary from **TConvS2S**
Bolton Wanderers have signed defender Dorian Dervite from bolton wanderers for an undisclosed fee. Predicted propositions (blue) and entailment labels
\#1: Bolton Wanderers have signed defender Dorian Dervite from bolton wanderers for an undisclosed fee. ✗
\#2: Bolton Wanderers have signed defender Dorian Dervite from bolton wanderers for an undisclosed fee. ✗
\#3: Bolton Wanderers have signed defender Dorian Dervite from bolton wanderers for an undisclosed fee. ✗ \#4: Bolton Wanderers have signed defender Dorian Dervite from bolton wanderers for an undisclosed fee. ✔
Predicted hallucinated spans (union of ✗- union of ✔)
Bolton Wanderers have signed defender Dorian Dervite from bolton wanderers for an undisclosed fee.
Gold Summary from the XSum dataset Defender Dorian Dervite has signed a new one-year contract with Bolton.
Predicted propositions (blue) and entailment labels
\#1: Defender Dorian Dervite has signed a new one-year contract with Bolton ✔ \#2: Defender Dorian Dervite has signed a new one-year contract with Bolton. ✗
Predicted hallucinated spans (union of ✗- union of ✔)
Defender Dorian Dervite has signed a new one-year contract with Bolton.
Summary from **PTGen**
Bolton Wanderers defender Dorian Dervite has signed a new three-and-a-half-year contract with the league one club until the end of the 2018-19 season.
Predicted propositions (blue) and entailment labels
\#1: Bolton Wanderers defender Dorian Dervite has signed a new three-and-a-half-year contract with the league one club until the end of the 2018-19 season. ✔
\#2: Bolton Wanderers defender Dorian Dervite has signed a new three-and-a-half-year contract with the league one club until the end of the 2018-19 season. ✗
\#3: Bolton Wanderers defender Dorian Dervite has signed a new three-and-a-half-year contract with the league one club until the end of the 2018-19 season. ✗
Predicted hallucinated spans (union of ✗- union of ✔)
Bolton Wanderers defender Dorian Dervite has signed a new three-and-a-half-year contract with the league one club until the end of the 2018-19 season.
Summary from **TranS2S**
Bolton Wanderers midfielder Gary Wheat has signed a new one-year contract with the championship side.
Predicted propositions (blue) and entailment labels
\#1: Bolton Wanderers midfielder Gary Wheat has signed a new one-year contract with the championship side. ✗ \#2: Bolton Wanderers midfielder Gary Wheat has signed a new one-year contract with the championship side. ✗
Predicted hallucinated spans (union of ✗- union of ✔)
Bolton Wanderers midfielder Gary Wheat has signed a new one-year contract with the championship side.
Table 10: (Cont.) More example of model generated summaries on the XSum dataset, with human-annotated hallucination spans from Maynez et al. (2020).
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
✓ A1. Did you describe the limitations of your work?
Yes, in the "Limitations" section
✓ A2. Did you discuss any potential risks of your work?
Yes, in the "Ethical Considerations" section
✓ A3. Do the abstract and introduction summarize the paper's main claims?
Abstract and introduction (section 1) summarize the main contributions of the paper
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✓ **Did You Use Or Create Scientific Artifacts?** Section 3
✓ B1. Did you cite the creators of artifacts you used?
section 3
✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
We mention that it will be released upon publication in Ethical Considerations
✗ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
Left blank.
✗ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
Left blank.
✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? section 3
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. section 3
## C ✓ **Did You Run Computational Experiments?** Section 5
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
Appendix A
The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance.
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Appendix A
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
Appendix A
✗ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
Left blank.
D ✓ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Section 3
✓ D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
Appendix B
✗ D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? Cannot disclose due to legal reasons (proprietary information).
✗ D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
Cannot disclose due to legal reasons (proprietary information).
✗ D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
Cannot disclose due to legal reasons (proprietary information).
✗ D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
Cannot disclose due to legal reasons (proprietary information). |
dong-etal-2023-cif | {CIF}-{PT}: Bridging Speech and Text Representations for Spoken Language Understanding via Continuous Integrate-and-Fire Pre-Training | https://aclanthology.org/2023.findings-acl.566 | Speech or text representation generated by pre-trained models contains modal-specific information that could be combined for benefiting spoken language understanding (SLU) tasks. In this work, we propose a novel pre-training paradigm termed Continuous Integrate-and-Fire Pre-Training (CIF-PT). It relies on a simple but effective frame-to-token alignment: continuous integrate-and-fire (CIF) to bridge the representations between speech and text. It jointly performs speech-to-text training and language model distillation through CIF as the pre-training (PT). Evaluated on SLU benchmark SLURP dataset, CIF-PT outperforms the state-of-the-art model by 1.94{\%} of accuracy and 2.71{\%} of SLU-F1 on the tasks of intent classification and slot filling, respectively. We also observe the cross-modal representation extracted by CIF-PT obtains better performance than other neural interfaces for the tasks of SLU, including the dominant speech representation learned from self-supervised pre-training. | # Cif-Pt: Bridging Speech And Text Representations For Spoken Language Understanding Via Continuous Integrate-And-Fire Pre-Training
Linhao Dong∗
, Zhecheng An∗
, Peihao Wu, Jun Zhang, Lu Lu, Zejun Ma ByteDance AI Lab
{donglinhao, anzhecheng, wupeihao, zhangjun.jarry, lulu.0314, mazejun}@bytedance.com
## Abstract
Speech or text representation generated by pre-trained models contains modal-specific information that could be combined for benefiting spoken language understanding (SLU)
tasks. In this work, we propose a novel pretraining paradigm termed Continuous Integrateand-Fire Pre-Training (CIF-PT). It relies on a simple but effective frame-to-token alignment:
continuous integrate-and-fire (CIF) to bridge the representations between speech and text. It jointly performs speech-to-text training and language model distillation through CIF as the pretraining (PT). Evaluated on SLU benchmark SLURP dataset, CIF-PT outperforms the stateof-the-art model by 1.94% of accuracy and 2.71% of SLU-F1 on the tasks of intent classification and slot filling, respectively. We also observe the cross-modal representation extracted by CIF-PT obtains better performance than other neural interfaces for the tasks of SLU,
including the dominant speech representation learned from self-supervised pre-training.
## 1 Introduction
Spoken language understanding (SLU) plays a key role in speech interaction systems such as spoken dialogue systems, voice assistants, automated calling robots, etc. It focuses on extracting key information and making predictions from audio signals of human speech (Wang et al., 2005; Tur and Mori, 2011). Traditional methods decompose SLU into two cascading tasks: automated speech recognition
(ASR) and natural language understanding (NLU), where audio signals are first transcribed into texts, and then processed by a text-based language understanding model. In the cascading scheme, the errors of ASR module will be accumulated in the NLU module and degrade the final performance.
Moreover, predicted text of ASR module may not be the ideal interface for the language understanding task. For example, acoustic information such
∗Equal contribution.
as intonation and pitch that may be helpful for understanding tasks are lost after ASR. To tackle the problems above, resent researches employ endto-end approaches for SLU (Serdyuk et al., 2018; Haghani et al., 2018; Chung et al., 2021; Arora et al., 2022), where the language understanding is directly performed from audio signals without explicitly utilizing predicted text of ASR.
For text-based language understanding tasks, pre-trained language models such as BERT (Devlin et al., 2019), RoBERTa (Liu et al., 2019b) and GPT (Radford et al., 2019) have achieved remarkable success. These models utilize self-supervised pre-training on large-scale unlabeled corpora to learn contextual representations in token or sentence level with rich syntactic and semantic knowledge (Liu et al., 2019a), which significantly benefit downstream tasks such as NLU during fine-tuning.
This self-supervised pre-training fashion has been extended into the representative learning on speech. Researches such as wav2vec (Baevski et al., 2020), HuBERT (Hsu et al., 2021) and data2vec (Baevski et al., 2022a) focus on learning better frame-level contextual representations using unlabeled speech data, to improve the performance of ASR as well as other speech processing tasks. For end-to-end SLU, these self-supervised speech models have been proven to be powerful backbones on learning semantic representations (Wang et al., 2021; Arora et al., 2022).
The self-supervised pre-training methods for speech mainly focus on leveraging speech data to model acoustic information (Chung et al., 2021) on the frame level, while pre-trained language models work on higher token or sentence levels to encode linguistic knowledge (Liu et al., 2019a). These two kinds of representation could be combined for better benefiting downstream tasks such as SLU.
The combination of speech and text representations can be performed by jointly pre-training on data of the two modalites (Chuang et al., 2020),
or distillating one pre-trained representations into another (Kim et al., 2021). In either way the framelevel speech representation needs to be aligned with the token-level textual representation. Frame-totoken alignment methods such as forced alignment has been applied to speech-text joint pre-training
(Chuang et al., 2020). However, these alignment methods mainly rely on external models or rules, and can only generate hard alignment mapping that can not be updated in end-to-end training. On the other hand, aligning frames and tokens through cross-attention (Arora et al., 2022; Zhu et al., 2022)
suffers from high complexity and lack of token timestamps that synchronized to frames.
The frame-to-token alignment also plays a critical role in ASR systems. Various works, such as Connectionist Temporal Classification (CTC)
(Graves et al., 2006), Listen, Attend and Spell
(LAS) (Chan et al., 2016), RNN Transducer (RNNT) (Graves, 2012) and Continuous Integrate-andFire (CIF) (Dong and Xu, 2020), focus on bringing effective alignment methods for better speech recognition performance. Among these works, the CIF alignment, which explicitly aggregates framelevel speech representations into token-level, is adopted in our work to combine with text representation. Specifically, we propose a novel pretraining paradigm: Continuous Integrate-and-Fire Pre-Training (CIF-PT) for end-to-end SLU. Two pre-training tasks are included in CIF-PT: the first task is speech-to-text modeling (Wang et al., 2020)
with CIF alignment. In this work, ASR task that transcribes speech to text is applied. The second task is language model distillation (LMD).
Since the integrated speech representation by CIF
is at token-level, token-level distillation from a pretrained language model can be performed to inject text-based linguistic knowledge into the representation. Through the joint pre-training of the two tasks, CIF-PT is able to generate representations with information from both speech and text modalites.
We examine our CIF-PT methods in downstream SLU tasks including intent classification and slot filling. On SLU benchmark SLURP (Bastianelli et al., 2020) dataset, the end-to-end SLU model with CIF-PT outperforms the state-of-the-art model by 1.94% of accuracy and 2.71% of SLU-F1 on the tasks of intent classification and slot filling, respectively. The cross-modal representation extracted by CIF-PT also shows its competitiveness in comparison of other neural interfaces (Rao et al., 2020; Raju et al., 2022) utilized in SLU. The obtained results and a series of experiments including ablation study and the pre-training on out-of-domain data demonstrate the effectiveness and generalization of CIF-PT.
## 2 Related Works
End-to-End SLU Various works extend models originally designed for ASR into the field of SLU.
Peng et al. (2022) propose Branchformer as an alternative to Conformer (Gulati et al., 2020), and show performance gains in SLU as well as ASR.
Huang et al. (2022) jointly train ASR and SLU
as multitasks to exploit shared knowledge from different tasks. Seo et al. (2022) use the probability distribution output of ASR model as continuous token interface (CTI) for downstream NLU. Selfsupervised representative learning on speech data provides powerful backbones such as wav2vec 2.0
(Baevski et al., 2020), HuBERT (Hsu et al., 2021)
for SLU. Arora et al. (2022) propose ESPnet-SLU
and analyze the performance of HuBERT encoder pre-trained with ASR as feature extractor for SLU.
Wang et al. (2021) perform partial fine-tuning and entire fine-tuning on pre-trained wav2vec 2.0 and HuBERT on SLU tasks.
Cross-Modal Pre-training for SLU In order to exploit information from speech and text for SLU,
jointly pre-training on both of speech and text data has been proposed. SpeechBERT (Chuang et al., 2020) extends the masked language model
(MLM) pre-training from BERT into the mixture of audio and text data. In SPLAT (Chung et al.,
2021), a speech module and a language module are jointly pre-trained with token-level and sentencelevel alignment. Another branch of researches focus on knowledge distillation from pre-trained language model into pre-trained speech encoder.
Kim et al. (2021) utilize BERT as a teacher to perform sentence-level knowledge distillation at the pre-training stage and target-specific distillation during fine-tuning. Zhu et al. (2022) introduce cross-attention between text and speech and perform distillation on the attention heads for knowledge transfering.
Frame-to-Token Alignment in SLU In SpeechBERT (Chuang et al., 2020), forced alignment based on external ASR engine is used to train the initial phonetic-semantic joint embedding. Chung et al. (2021) adopt a heuristic alignment approach in SPLAT, where alignment scores is computed by the cosine similarity between the output embeddings of the pre-trained speech and text models.
The cross-attention alignment is introduced in (Zhu et al., 2022) to capture the interactions between text tokens and speech frames. For SpeechT5, since the pre-training does not strictly rely on audio-text pair data, (Ao et al., 2022) adopt shared codebook for speech and text representation and a diversity loss to encourage the alignment in latent space.
## 3 Method
In this section, we present the architecture of our proposed continuous integrate-and-fire pre-training
(CIF-PT) method for SLU. As shown in Figure 1, our end-to-end SLU models go through two stages: CIF-PT and SLU training.
During CIF-PT, we employ two pre-training tasks: ASR training with CIF alignment and tokenlevel language model distillation (LMD). These two tasks help the model learn contextual representation of the speech features aligned to the tokens with high level linguistic knowledge. After CIF-PT,
the pre-trained parameters including the speech encoder and CIF part are used for downstream SLU
tasks such as intent classification and slot filling.
## 3.1 Asr Training With Cif Alignment
As shown in Figure 1(a), the structure of CIF-based ASR model includes three parts: speech encoder, CIF part, and the corresponding decoder. For an input speech utterance, it is first processed into a sequence of frames x = [x1, x2, · · · , xT
′ ] with length T
′via speech feature extractor (e.g. melfilter bank, convolutional front-end (Baevski et al.,
2020)), where xtis the feature vector of the t th frame. The speech encoder converts the frame-level input vector into frame-level hidden states:
$h=[h_{1},h_{2},\cdots,h_{T}]=\mbox{enc}([x_{1},x_{2},\cdots,x_{T}])$
CIF part follows the speech encoder to convert the frame-level hidden states h into tokenlevel speech representations c. We follow the CIF setup from Dong and Xu (2020), which is briefed as follows. At first, the encoded hidden states h = [h1, h2, · · · , hT ] are fed into a weight estimator module to calculate a series of weights α = [α1, α2, · · · , αT ]. The weights α and the frame-level hidden states h are input to CIF to obtain c = [c1, c2, · · · , ci, · · · , cN ], where N is the number of total tokens. Each token-level representation ciis a linear combination of frame-level representations {ht}. At each frame step t, the weight αt added to an accumulated weight α a i ← α a i + αt, and the frame-level hidden state htis integrated into token-level representation ci ← ci + αtht, until the accumulated weight α a i exceeds a threshold β. When α a iexceeds β, the weight of the boundary hidden state is divided into two parts αt = αt1 + αt2, to ensure the accumulated weight for each token is exactly β, and the second part αt2 is accumulated to the next token representation. In such way, the frame-level hidden states are integrated into token-level representation, which not only reduces the redundancy of speech information but also reduces computation complexity when used for the subsequent ASR decoder and downstream understanding tasks.
We use the autoregressive ASR decoder in (Dong and Xu, 2020). It accepts previous token yi−1 and the integrated ci from CIF part as inputs, and autoregressively predicts the token output distribution for each ci. The CIF-based encoder-decoder model is trained with a cross entropy (CE) loss in a teacher-forcing manner:
$${\mathcal{L}}_{\mathrm{CE}}=\sum_{i=1}^{N}\log p(y_{i}|y_{<i},c_{i}).$$
Optionally, LCTC can be applied on the frame-level hidden states h to be jointly trained. The quantity loss LQUA is to supervise the CIF part to predict the quantity of tokens closer the number of target tokens:
tokens. $$\mathcal{L}_{\text{QUA}}=\left|\sum_{i=1}^{T}\alpha_{i}-N\right|.$$ The final CIF loss is the weighted sum of three:
$${\mathcal{L}}_{\mathrm{CIF}}={\mathcal{L}}_{\mathrm{CE}}+\lambda_{1}{\mathcal{L}}_{\mathrm{CTC}}+\lambda_{2}{\mathcal{L}}_{\mathrm{QUA}}.\qquad(1)$$ enumer Model Distribution
3.2 Language Model Distillation Since the speech representation ciintegrates speech information into the token-level, we use a pretrained BERT model as a knowledge distillation teacher to inject textual knowledge into speech representation. Let x = {xi}
T
i=1 be the speech frame sequence and y = {yi}
N
i=1 be the corresponding transcript token sequence. As shown in Figure 1, x is encoded into speech feature {ci}
N i=1 by the speech encoder and the CIF part. y is encoded by BERT into contextual representation vectors {h t i}
N
i=1. Since {ci} are aligned to tokens, directly token-level knowledge distillation can be
![3_image_0.png](3_image_0.png)
performed to make the speech representation close to the contextual representation brought by BERT,
thus forming a cross-modal representation.
We consider three types of language model distillation (LMD) loss in our paper, MSE loss, smoothed L1 loss and contrastive loss. Using BERT hidden output h t i as target, the MSE loss of ciis L
MSE
LMD(h t i
, ci) = ∥h t i − ci∥
2. The smoothed L1 loss is proposed in (Baevski et al., 2022a), where a γ is used to control the transition from a squared loss to an L1 loss, i.e.
$$\mathcal{L}_{\mathrm{LMD}}^{\mathrm{SLL}}(h_{i}^{t},c_{i})=\begin{cases}\frac{1}{2}(h_{i}^{t}-c_{i})^{2}/\gamma&|h_{i}^{t}-c_{i}|\leq\gamma,\\ (|h_{i}^{t}-c_{i}|-\frac{1}{2}\gamma)&\text{otherwise.}\end{cases}$$
The contrastive loss encourage cito be closer to h t i than other c′sampled from an in-batch negative set Nc.
$${\mathcal{L}}_{\mathrm{LMD}}^{\mathrm{cont}}(h_{i}^{t},c_{i})={\frac{\exp[\mathrm{sim}(h_{i}^{t},c_{i})/\tau]}{\sum_{c^{\prime}\in{\mathcal{N}}_{c}}\exp[\mathrm{sim}(h_{i}^{t},c^{\prime})/\tau]}},$$
where sim(·, ·) is the cosine similarity function and τ is the temperature scalar.
The LMD task is trained simultaneously with CIF-based ASR training as multitasks, which forms the training loss L of CIF-PT as follows:
$${\mathcal{L}}={\mathcal{L}}_{\mathrm{CIF}}+\lambda{\mathcal{L}}_{\mathrm{MLD}}.$$
L = LCIF + λLMLD. (2)
## 3.3 Spoken Language Understanding
After CIF-PT, the pre-trained speech encoder and CIF part convert speech input into the sequence of cross-modal representation {ci}, which is used for downstream SLU training. We evaluate our pretrained model on SLU tasks of intent classification and slot filling. The corresponding intent decoder and slot decider are shown in Figure 1(b).
For intent classification, {ci}
N
i=0 is fed into additional Transformer layers to generate task specific decoder states. We use the average of decoder state on all position as the utterance representation for intent prediction through a linear projection.
The slot filling task is performed in a sequence generation style. The slot types and slot values are concatenated as targets {y s i} to train a sequence-tosequence model, i.e. "[SEP] slot_type1 slot_value1
[SEP] slot_type2 slot_value2". The slot decoder consists of Transformer decoder layers where the sequence of ciis used as the key and value of the cross-attention layer. We train the encoderdecoder to generate slot target sequence {y s i}
K i=0 with teacher-forcing.
## 4 Experimental Setup 4.1 Dataset And Preprocessing
We conduct experiments on the dataset of SLURP
(Bastianelli et al., 2020), which is currently the
$$(2)$$
largest SLU benchmark and is also linguistically more diverse than other datasets. It is collected for developing an in-home personal robot assistant. The train, development and test sets split in the SLURP paper are used for the training and evaluation of our methods. In addition to use the in-domain SLURP data for pre-training, we also introduce the Librispeech (Panayotov et al., 2015)
dataset that contains 960 hours of speech derived from audiobooks as the out-domain pre-training dataset, which is only used in Section 5.4. All speech data is re-sampled or kept at 16 kHz, and all text data is converted into a sequence of subword units by the subword-nmt (Sennrich et al., 2016)
toolkit 1. Specifically, we generate 10706 subword units by performing 36000 merge operations on the training set of Librispeech datasets, and use the learned BPE as the only tokenizer for text of all datasets.
## 4.2 Model Configuration
In this part, we detail the model structure and configuration utilized in our experiments. All the models are implemented using (Paszke et al., 2019):
Encoder we use two types of speech encoder which are denoted as conformer and data2vec in subsequent experiments. For the encoder of conformer, it consists of a two-layer convolutional front-end and 15-layer conformer blocks (Li et al., 2021). It applies a 8-time temporal down-sampling similar to (Dong et al., 2019). The hidden size in the conformer block uses 400. For the encoder of data2vec, it follows the official data2vec-large configuration (Baevski et al., 2022b) and uses the released model 2from (Wolf et al., 2020). For the text encoder that provides text representation in CIF-PT, we follow the BASE configuration of BERT (Devlin et al., 2019) and use our learned BPE tokenizer to perform pre-training on the English Wikipedia corpus.
CIF part we follow the implementation of weight estimator and CIF calculator in (Dong and Xu, 2020). The channel number in convolutional layer keeps the same as the hidden size in decoder.
The threshold β during CIF calculation is set to 1.0. The corresponding scaling strategy and tail handling methods are also used.
| IC | SF | |
|---------------------------------------|----------|--------|
| (Acc.) | (SLU-F1) | |
| MTL-SLT (Huang et al., 2022) | 83.10% | 74.49% |
| Speech-Brain (Ravanelli et al., 2021) | 85.34% | 74.26% |
| ESPNET-SLU (Arora et al., 2022) | 86.30% | 71.90% |
| CTI (Seo et al., 2022) | 86.92% | 74.66% |
| Branchformer (Peng et al., 2022) | 88.10% | 77.70% |
| Hubert SLU (Wang et al., 2021) | 89.38% | 78.92% |
| CIF-PT (Conformer encoder) | 89.60% | 78.67% |
| CIF-PT (Data2vec encoder) | 91.32% | 81.63% |
Decoder we use three types of decoder in our experiments, including the ASR decoder for speechto-text training in CIF-PT, the down-streaming intent decoder for IC and slot decoder for SF. For ASR decoder, it uses the original autoregressive decoder (Dong and Xu, 2020) with 2-layer selfattention networks (SANs, also known as transformer encoder layers (Vaswani et al., 2017)). The hidden size is 400 when the encoder uses conformer and 512 for data2vec. For intent decoder, it uses 2-layer SANs and a following average pooling layer . For slot decoder, it uses 4-layer SANs for the tag-based slot decoder and uses 4-layer transformer decoder layers (with additional crossattention layer) for the generation-based slot decoder. Without specific statement, the generationbased slot decoder is used by default. The hidden size keeps the same as ASR decoder for the two types of SLU decoder.
## 4.3 Training And Evaluation
We use an AdamW (Loshchilov and Hutter, 2018)
optimizer with β1 = 0.9, β2 = 0.98 and weight decay of 1e-5. During CIF pre-training, we warm up the learning rate for the first 4% of updates to a peak of 1e-3 and keep it constant in the later 64% of updates, then linearly decay it to 1e-4. The number of total training steps is 80k. We set the weight of CTC loss λ1 = 0.5, and the weight of quantity loss λ2 = 1.0. The hyper-parameter of LMD loss is explored in section 5.2. During SLU training, we follow the Noam scheduler (Vaswani et al., 2017)
with 1600 warm-up steps and peak learning rate of 5e-4. The number of total training steps is 32k.
After training, we first perform model average on the last 10 checkpoints for all models and then use the averaged model for evaluation. We follow
| (Model Id.) | Method | Speech Encoder | Intent Classification | Slot Filling |
|-----------------------------------------------------------------------------------------------------------------|-----------------|------------------|-------------------------|-----------------|
| (Acc.) | (SLU-F1) | | | |
| M0 | CIF-PT | Conformer | 89.60% | 78.67% |
| M1 | CIF-PT | Data2vec | 91.32% | 81.63% |
| On the importance of CIF-PT M2 M0 w/o any PT | Conformer | 86.43% (-3.17%) | 72.51% (-6.16%) | |
| M3 | + triple steps | Conformer | 87.28% (-2.32%) | 74.92% (-3.75%) |
| M4 | + CTC-PT | Conformer | 86.41% (-3.19%) | 75.87% (-2.80%) |
| On the importance of language model distillation (LMD) M5 M0 w/o LMD Conformer 88.31% (-1.29%) | 77.84% (-0.83%) | | | |
| M6 | M1 w/o LMD | Data2vec | 91.18% (-0.14%) | 81.02% (-0.61%) |
| On the importance of CIF alignment (all w/o language model distillation) M7 M3 w/o CIF Data2vec 90.36% (-0.96%) | 79.29% (-2.34%) | | | |
| M8 | +CTC-PT | Data2vec | 90.63% (-0.69%) | 80.31% (-1.32%) |
the metric of accuracy and SLU-F1 (Bastianelli et al., 2020) to evaluate the models on task of IC
and SF, respectively. During the inference of SF
task, we perform beam search with beam width 10 and a temperature scalar of 1.25 . All experimental results are averaged at least 2 runs.
## 5 Results And Analysis 5.1 Main Results
To verify the effectiveness of our proposed methods, we first conduct three sets of experiments to explore the importance of designs in CIF-PT. The main results are summarized in Table 2.
The first two rows of Table 2 show the performance of our end-to-end SLU models using CIFPT. Consistent with our expectation, the model M1 with the self-supervised data2vec encoder obtains better results than the model M0 with conformer encoder on both tasks. We also compare the performance of our methods with the published results.
As shown in Table 1, the model with CIF-PT (M1 in Table 2) achieves state-of-the-art result on both of IC and SF tasks. The performance advantages on the task of SF reaches 2.71% SLU-F1. We suspect that the cross-modal representation extracted by CIF-PT contains more language knowledge that benefits more to SF, which needs to predict the slot key and speech content simultaneously . It is worthy to mention that the model M0 with conformer encoer also achieves competitive performance, which is even superior or comparable to the published strong models (Wang et al., 2021; Seo et al., 2022) with self-supervised speech encoder.
For the model of M2 in Table 2, we ablate CIFPT utilized in the model M0 and conduct a joint training of ASR and SLU tasks from scratch. The results show that ablating CIF-PT leads to a large performance degradation on both SLU tasks. Since CIF-PT consumes extra pre-training steps, we suspect the total training step maybe a factor of the performance gap. Therefore, we increase the training step to triple (from 32k to 96k) to obtain the model M3. The performance gap is narrowed but the model M0 with CIF-PT still has a certain performance advantage over model M3 with longer SLU training.
For the model of M5 and M6 in Table 2, we ablate language model distillation (LMD) utilized in CIF-PT. During pre-training, we find applying LMD bring 3.9% (14.83 → 14.25) relative WER reduction on the model with conformer encoder. During SLU training, we also observe the introduced LMD methods boosts the performance improvements on the two tasks in Table 2. For the reason of the smaller performance improvements of data2vec encoder , it may be that the model with data2vec encoder itself has strong modeling power and already learns effective pattern and textual knowledge, so that the injected textual knowledge can only be helpful for fewer evaluation samples.
We also compare the cross-modal representation extracted by CIF-PT with the speech representation derived from self-supervised learning. For the model M7 in Table 2, we ablate the frame-to-token CIF alignment in SLU models and directly pass
![6_image_0.png](6_image_0.png)
![6_image_1.png](6_image_1.png)
the frame-level speech representation extracted by data2vec to the SLU decoder. Although achieving competitive results, model M7 could achieve further improvements after combining with CIF.
We suspect the reason is two-folds: 1) CIF performs frame-to-text mapping that integrates relevant speech/semantic information, thus able to remove information redundancy in adjacent frames, 2) CIF-PT bridges the speech representation and text representation through ASR training and LMD,
thus providing more textual knowledge that benefits SLU performance. To further verify our hypothesis, we introduce CTC-based ASR pre-training
(CTC-PT) before the training of SLU model. Results show that CTC-PT provides improvements on SLU tasks (M7 → M8, M2 → M4), but it still has gap from CIF-PT . Above observations demonstrate the effectiveness of CIF-PT.
## 5.2 Comparison On Language Model Distillation
In this part, we compare different language model distillation (LMD) methods applied in CIF-PT. From the Figure 2 we get three observations: (1)
All LMD methods provide positive effects on SLU
performance in most cases, except for one outlier uses MSE loss with a weight of 0.01 on SF. The degradation disappears as the loss weight increases;
(2) The contrastive LMD method shows better quality on both SLU tasks than the other two methods.
We suspect the reason is contrastive distillation with proper temperature scalar mainly focuses on distinguishing hard negatives, instead of forcing the representation to be consistent like MSE. This helps the extracted representation retain speech and language information at the same time, which may benefit SLU modeling; (3) Different temperature scalar in contrastive LMD method has effects on down-streaming SLU tasks, with a τ value of 0.01 producing the best results on both SLU tasks.
## 5.3 Comparison On Neural Interfaces
We have compared the token-level representation ci extracted by CIF-PT with frame-level speech representations in section 5.1. In this part, we continue to compare ci with other popular token-level neural interfaces (or representations) summarized in (Raju et al., 2022), including hidden interface mi, posterior interface pi, tied embedding interface ei and the combinations. For fair comparison, we give up using LMD loss in CIF-PT which benefits ci. The results are shown in Table 3.
On the task of IC, we find all token-level neural interfaces achieve comparable accuracy. This may be because these interfaces contain close information that is useful for IC, and the pooling operation in intent decoder further reduce the discrimination between representations. The combination of ci and ei achieves the best performance. We suspect this is because they are located at the beginning
(ci) and end (ei) of ASR decoder respectively, so they may have a large information difference and complementarity.
On the task of SF, we observe a relatively large differentiation among these neural interfaces. On the model using generation-based slot decoder, ci obtains the best performance, while other interfaces have a certain performance gap in comparison. This phenomenon can be understood as ci, which is sourced from pure speech inputs, contains more original and comprehensive speech information. It can provide sufficient information for the calculation of cross-attention in the slot decoder. In contrast, the other interfaces are all calculated
| Interfaces | IC (Acc.) | SF (SLU-F1) | |
|--------------|-------------|---------------|--------|
| generation | tag | | |
| ci | 88.31% | 77.84% | 71.68% |
| mi | 88.26% | 68.24% | 74.21% |
| ei | 88.25% | 65.42% | 74.42% |
| pi | 88.30% | 54.04% | 72.94% |
| ci , mi | 88.48% | 74.94% | 74.61% |
| ci , ei | 88.79% | 76.22% | 74.52% |
via the autoregressive ASR decoder, thus the information may be biased to a certain hypothesis with errors in inference. In addition, using ci as the interface can also avoid the mismatch between the teacher-forcing inputs and predicted inputs in inference.
Interestingly, on the model using tag-based slot decoder, ci performs inferior to other neural interfaces. Since the tag-based slot model predicts slot key for each token of the one-best ASR hypothesis, the neural interfaces mi, pi, eithat are updated synchronously with the ASR decoding could provide closer slot prediction for the final ASR hypothesis.
The original speech information provided by ci can also provide supplements to these interfaces, and the best performance is obtained by the combination of ci and mi.
Between two types of slot decoder, the model with generation-based slot decoder is superior to the tag-based slot decoder, we believe this is because generation-based decoder utilize the bidirectional contextual information from full sequence, which makes it have higher ceiling in the prediction of slot information. In contrast, tagbased decoder could only use the uni-directional information that is limited by the autoregressive ASR decoder. However, this characteristic makes the tag-based model suitable for the application scenario with low-latency.
| Model | IC (Acc.) | SF (SLU-F1) |
|----------------|-------------|---------------|
| Slurp-Frozen | 89.60% | 78.67% |
| Slurp-Unfrozen | 88.84% | 78.08% |
| LS-Frozen | 80.65% | 64.02% |
| LS-Unfrozen | 90.65% | 79.74% |
## 5.4 Comparison On Out-Of-Domain Data
In above experiments, CIF pre-training is performed on the in-domain SLURP dataset. During SLU training, the pre-trained parameters are kept frozen ('Slurp-Frozen' in Table 4) and only the part of SLU decoder is trained. In this part, we first explore unfreezing the pre-trained parameters during SLU training ('Slurp-Unfrozen' in Table 4). Specifically, we hold the pre-trained parameters frozen in the first half of training, and then make the model entirely trained by performing joint training of ASR and SLU tasks. Results show that unfreezing pre-trained parameters leads to slight performance degradation. We suspect it is because the textual knowledge injected by LMD suffers catastrophic forgetting during SLU training. But the result achieved by 'Slurp-Unfrozen' is still better than the model using the frozen pre-trained model without LMD (model M5 in Table 2).
We also conduct experiments on an out-ofdomain pre-training dataset (Librispeech) to explore its effects on the final SLU performance. Consistent with our expectations, freezing the parameters pre-trained on out-of-domain data ('LS-Frozen' in Table 4) leads to a large performance degradation on SLU tasks of SLURP. When unfreezing these pre-trained parameters ('LS-Unfrozen' in Table 4) during SLU fine-tuning, the model obtains a noticable performance boost, and even outperforms the model achieved on slurp dataset. This partly reflects the good generalization and the potential on transfer learning of our proposed CIF-PT method.
## 6 Conclusion
In this work, we propose a new pre-training paradigm: Continuous Integrate-and-Fire PreTraining (CIF-PT) for end-to-end SLU. CIF serves as a bridge connecting speech and text modality:
on the one hand, it integrates speech representation into token-level through its frame-to-token alignment ability learned from ASR pre-training task.
On the other hand, it support one-to-one transfer of the textual knowledge into the integrated tokenlevel speech representation via the pre-training of language model distillation. After CIF-PT, we obtain a cross-model representation that is used as neural interface into down-streaming SLU tasks.
Evaluated on the largest SLU benchmark of SLURP, CIF-PT creates new state-of-the-art result on both of IC and SF tasks. We further validate the effectiveness and generalization of CIF-PT by a series of experiments including ablation study and the pre-training on out-of-domain data. We also observe the cross-modal representation extracted by CIF-PT shows its competitiveness in comparison with other neural interfaces on SLU.
We believe that CIF-PT has the potential to better encode long-form speech content (e.g. spoken paragraph) through its language model distillation, and will explore to combine it with LLM methods like ChatGPT to further empower spoken language understanding (SLU) systems.
## 7 Limitation
In the process of conducting experiments, we find our method has some limitations. First, CIF-PT
needs to be performed on the dataset with speechtext pair. For some small-scale dataset that only contains speech and SLU labels, our method needs to use external ASR dataset to conduct the pretraining, leading to the increase of complexity of model building. In addition, in CIF-PT, we need to ensure that the tokenizer of the pre-trained language model is consistent with the tokenizer in the ASR task. However, there is usually a gap between the two in terms of vocabulary size. In consideration of performance, it is necessary to modify the tokenzier of one or both sides.
## References
Junyi Ao, Rui Wang, Long Zhou, Chengyi Wang, Shuo Ren, Yu Wu, Shujie Liu, Tom Ko, Qing Li, Yu Zhang, et al. 2022. Speecht5: Unified-modal encoder-decoder pre-training for spoken language processing. In *Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics*
(Volume 1: Long Papers), pages 5723–5738.
Siddhant Arora, Siddharth Dalmia, Pavel Denisov, Xuankai Chang, Yushi Ueda, Yifan Peng, Yuekai Zhang, Sujay Kumar, Karthik Ganesan, Brian Yan, et al.
2022. Espnet-slu: Advancing spoken language understanding through espnet. In *ICASSP 2022-2022* IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 7167–7171.
IEEE.
Alexei Baevski, Wei-Ning Hsu, Qiantong Xu, Arun Babu, Jiatao Gu, and Michael Auli. 2022a. data2vec:
A General Framework for Self-supervised Learning in Speech, Vision and Language. In *Proceedings* of the 39th International Conference on Machine Learning, Baltimore, Maryland, USA.
Alexei Baevski, Wei-Ning Hsu, Qiantong Xu, Arun Babu, Jiatao Gu, and Michael Auli. 2022b. Data2vec:
A general framework for self-supervised learning in speech, vision and language. arXiv preprint arXiv:2202.03555.
Alexei Baevski, Yuhao Zhou, Abdelrahman Mohamed, and Michael Auli. 2020. wav2vec 2.0: A framework for self-supervised learning of speech representations.
volume 33, pages 12449–12460.
Emanuele Bastianelli, Andrea Vanzo, Pawel Swietojanski, and Verena Rieser. 2020. Slurp: A spoken language understanding resource package. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 7252–7262.
William Chan, Navdeep Jaitly, Quoc Le, and Oriol Vinyals. 2016. Listen, attend and spell: A neural network for large vocabulary conversational speech recognition. In 2016 IEEE International Conference on Acoustics, Speech and Signal Processing
(ICASSP), pages 4960–4964.
Yung Sung Chuang, Chi Liang Liu, Hung Yi Lee, and Lin Shan Lee. 2020. SpeechBERT: An audioand-text jointly learned language model for end-toend spoken question answering. *Proceedings of* the Annual Conference of the International Speech Communication Association, INTERSPEECH, 2020-
Octob:4168–4172.
Yu-An Chung, Chenguang Zhu, and Michael Zeng.
2021. Splat: Speech-language joint pre-training for spoken language understanding. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics:
Human Language Technologies, pages 1897–1907.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics.
Linhao Dong, Feng Wang, and Bo Xu. 2019. Selfattention aligner: A latency-control end-to-end model for asr using self-attention network and chunkhopping. In ICASSP 2019-2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 5656–5660. IEEE.
Linhao Dong and Bo Xu. 2020. CIF: Continuous integrate-and-fire for end-to-end speech recognition.
In Proceedings of the 2020 IEEE International Conference on Acoustics, Speech and Signal Processing
(ICASSP), pages 6079–6083. IEEE.
Alex Graves. 2012. Sequence transduction with recurrent neural networks. In the International Conference of Machine Learning (ICML) 2012 Workshop on Representation Learning.
Alex Graves, Santiago Fernández, Faustino Gomez, and Jürgen Schmidhuber. 2006. Connectionist temporal classification: labelling unsegmented sequence data with recurrent neural networks. In Proceedings of the 23rd international conference on Machine learning, pages 369–376.
Anmol Gulati, James Qin, Chung Cheng Chiu, Niki Parmar, Yu Zhang, Jiahui Yu, Wei Han, Shibo Wang, Zhengdong Zhang, Yonghui Wu, and Ruoming Pang.
2020. Conformer: Convolution-augmented transformer for speech recognition. In Proceedings of the 2020 Annual Conference of the International Speech Communication Association, INTERSPEECH, pages 5036–5040.
Parisa Haghani, Arun Narayanan, Michiel Bacchiani, Galen Chuang, Neeraj Gaur, Pedro Moreno, Rohit Prabhavalkar, Zhongdi Qu, and Austin Waters. 2018. From audio to semantics: Approaches to end-to-end spoken language understanding. In *2018 IEEE Spoken Language Technology Workshop (SLT)*, pages 720–726. IEEE.
Wei-Ning Hsu, Benjamin Bolte, Yao-Hung Hubert Tsai, Kushal Lakhotia, Ruslan Salakhutdinov, and Abdelrahman Mohamed. 2021. HuBERT: Self-supervised speech representation learning by masked prediction of hidden units. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 29:3451–3460.
Zhiqi Huang, Milind Rao, Anirudh Raju, Zhe Zhang, Bach Bui, and Chul Lee. 2022. Mtl-slt: Multi-task learning for spoken language tasks. In *Proceedings* of the 4th Workshop on NLP for Conversational AI,
pages 120–130.
Seongbin Kim, Gyuwan Kim, Seongjin Shin, and Sangmin Lee. 2021. Two-stage textual knowledge distillation for end-to-end spoken language understanding.
volume 2021-June, pages 7463–7467.
Bo Li, Anmol Gulati, Jiahui Yu, Tara N Sainath, ChungCheng Chiu, Arun Narayanan, Shuo-Yiin Chang, Ruoming Pang, Yanzhang He, James Qin, et al. 2021.
A better and faster end-to-end model for streaming asr. In ICASSP 2021-2021 IEEE International Conference on Acoustics, Speech and Signal Processing
(ICASSP), pages 5634–5638. IEEE.
Nelson F. Liu, Matt Gardner, Yonatan Belinkov, Matthew E. Peters, and Noah A. Smith. 2019a. Linguistic knowledge and transferability of contextual representations. In *Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)*, pages 1073–1094, Minneapolis, Minnesota.
Association for Computational Linguistics.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019b.
RoBERTa: A Robustly Optimized BERT Pretraining Approach.
Ilya Loshchilov and Frank Hutter. 2018. Decoupled weight decay regularization. In *International Conference on Learning Representations*.
Vassil Panayotov, Guoguo Chen, Daniel Povey, and Sanjeev Khudanpur. 2015. Librispeech: an asr corpus based on public domain audio books. In 2015 IEEE international conference on acoustics, speech and signal processing (ICASSP), pages 5206–5210.
IEEE.
Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, et al. 2019. Pytorch: An imperative style, high-performance deep learning library. *Advances in* neural information processing systems, 32.
Yifan Peng, Siddharth Dalmia, Ian Lane, and Shinji Watanabe. 2022. Branchformer: Parallel mlpattention architectures to capture local and global context for speech recognition and understanding.
In *International Conference on Machine Learning*,
pages 17627–17643. PMLR.
Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. 2019. Language models are unsupervised multitask learners. *OpenAI*
blog, 1(8):9.
Anirudh Raju, Milind Rao, Gautam Tiwari, Pranav Dheram, Bryan Anderson, Zhe Zhang, Chul Lee, Bach Bui, and Ariya Rastrow. 2022. On joint training with interfaces for spoken language understanding.
In *Interspeech 2022*.
Milind Rao, Anirudh Raju, Pranav Dheram, Bach Bui, and Ariya Rastrow. 2020. Speech to semantics: Improve asr and nlu jointly via all-neural interfaces.
arXiv preprint arXiv:2008.06173.
Mirco Ravanelli, Titouan Parcollet, Peter Plantinga, Aku Rouhe, Samuele Cornell, Loren Lugosch, Cem Subakan, Nauman Dawalatabad, Abdelwahab Heba, Jianyuan Zhong, et al. 2021. Speechbrain: A
general-purpose speech toolkit. arXiv preprint arXiv:2106.04624.
Rico Sennrich, Barry Haddow, and Alexandra Birch.
2016. Neural machine translation of rare words with subword units. In *54th Annual Meeting of* the Association for Computational Linguistics, pages 1715–1725. Association for Computational Linguistics (ACL).
Seunghyun Seo, Donghyun Kwak, and Bowon Lee.
2022. Integration of pre-trained networks with continuous token interface for end-to-end spoken language understanding. In *ICASSP 2022-2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)*, pages 7152–7156. IEEE.
Dmitriy Serdyuk, Yongqiang Wang, Christian Fuegen, Anuj Kumar, Baiyang Liu, and Yoshua Bengio. 2018.
Towards end-to-end spoken language understanding.
In *2018 IEEE International Conference on Acoustics,*
Speech and Signal Processing (ICASSP), pages 5754–
5758.
Gokhan Tur and Renato De Mori. 2011. *Spoken Language Understanding: Systems for Extracting Semantic Information from Speech*. John Wiley & Sons.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. Advances in neural information processing systems, 30.
Changhan Wang, Yun Tang, Xutai Ma, Anne Wu, Dmytro Okhonko, and Juan Pino. 2020. Fairseq s2t: Fast speech-to-text modeling with fairseq. In Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing: System Demonstrations, pages 33–39.
Ye-Yi Wang, Li Deng, and Alex Acero. 2005. Spoken language understanding. IEEE Signal Processing Magazine, 22(5):16–31.
Yingzhi Wang, Abdelmoumene Boumadane, and Abdelwahab Heba. 2021. A fine-tuned wav2vec 2.0/hubert benchmark for speech emotion recognition, speaker verification and spoken language understanding. *arXiv preprint arXiv:2111.02735*.
Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, et al. 2020. Transformers: State-of-the-art natural language processing. In *Proceedings of the 2020 conference on empirical methods in natural language* processing: system demonstrations, pages 38–45.
Yi Zhu, Zexun Wang, Hang Liu, Peiying Wang, Mingchao Feng, Meng Chen, and Xiaodong He.
2022. Cross-modal transfer learning via multigrained alignment for end-to-end spoken language understanding. *Proc. Interspeech 2022*, pages 1131–
1135.
## A Appendix A.1 Computational Experiments
The total parameters for our SLU model with conformer encoder is 95.07 M. It costs 10.1 hours and 12.0 hours for CIF-PT and SLU fine-tunig on 8 A100 GPUs, respectively. The batch size of both stages is set to 30000 frames on each GPU. For our CIF SLU model with data2vec encoder, it has 357.50M parameters and needs 23.0 hours and 7.5 hours to finish CIF-PT and SLU fine-tuning, the corresponding batch size for the two stages is set to 1.2M and 1.6M samples, respectively.
## A.2 Details Of Model Structure
![11_image_0.png](11_image_0.png)
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
✓ A1. Did you describe the limitations of your work?
In section 7
✗ A2. Did you discuss any potential risks of your work?
We believe there is no risk in our work. We only use the open-resource codes and publicly release data from the community, and no misuse of any resource and no other stateholders involved in our work.
✓ A3. Do the abstract and introduction summarize the paper's main claims?
In Abstract and Introductions section 1
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✓ **Did You Use Or Create Scientific Artifacts?** In Results And Analysis Section 5
✓ B1. Did you cite the creators of artifacts you used?
In Experimental Setup Section 4, and Results and Analysis Section 5
✗ B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
We all use open-resource code an public resource in our work and cite them correctly. We will discuss and state the license used when publicly releasing our code in more detail.
✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
In Results and Analysis Section 5
✓ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
in Experimental Setup Section 4 we reported the used public dataset and their resource. No anonymous problem of data our work.
✗ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? all artifacts in our work are publicly available and frequently used. We have cited them clearly.
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. in Experimental Setup Section 4 The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance.
## C ✓ **Did You Run Computational Experiments?**
In Experimental Setup Section4, Results and Analysis Section5 and Appendix A
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
In Appendix A
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
In Experimental Setup Section4, Results and Analysis Section5
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
In Experimental Setup Section4
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
In Experimental Setup Section4
## D ✗ **Did You Use Human Annotators (E.G., Crowdworkers) Or Research With Human Participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
No response.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? No response.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
No response.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
No response.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
No response. |
alsulaimani-moreau-2023-improving | Improving Diachronic Word Sense Induction with a Nonparametric {B}ayesian method | https://aclanthology.org/2023.findings-acl.567 | Diachronic Word Sense Induction (DWSI) is the task of inducing the temporal representations of a word meaning from the context, as a set of senses and their prevalence over time. We introduce two new models for DWSI, based on topic modelling techniques: one is based on Hierarchical Dirichlet Processes (HDP), a nonparametric model; the other is based on the Dynamic Embedded Topic Model (DETM), a recent dynamic neural model. We evaluate these models against two state of the art DWSI models, using a time-stamped labelled dataset from the biomedical domain. We demonstrate that the two proposed models perform better than the state of the art. In particular, the HDP-based model drastically outperforms all the other models, including the dynamic neural model. | # Improving Diachronic Word Sense Induction With A Nonparametric Bayesian Method
Ashjan Alsulaimani School of Computer Science and Statistics Trinity College of Dublin [email protected]
## Abstract
Diachronic Word Sense Induction (DWSI) is the task of inducing the temporal representations of a word meaning from the context, as a set of senses and their prevalence over time. We introduce two new models for DWSI, based on topic modelling techniques: one is based on Hierarchical Dirichlet Processes
(HDP), a nonparametric model; the other is based on the Dynamic Embedded Topic Model
(DETM), a recent dynamic neural model. We evaluate these models against two state of the art DWSI models, using a time-stamped labelled dataset from the biomedical domain.
We demonstrate that the two proposed models perform better than the state of the art. In particular, the HDP-based model drastically outperforms all the other models, including the dynamic neural models.1
## 1 Introduction
Word meanings evolve over time. Recent research works have focused on how to model such dynamic behaviour. The unsupervised task of Diachronic Word Sense Induction (DWSI) aims to capture how the meaning of a word varies continuously over time, in particular when new senses appear or old senses disappear. DWSI takes the time dimension into account and assumes that the data spans over a long continuous period of time in order to model the progressive evolution of senses across time.
The dynamic behaviour of words contributes to semantic ambiguity, which is a challenge in many NLP tasks. DWSI can serve as an analytical tool to help building terminology resources and indexing documents more accurately and therefore can be beneficial for information retrieval tasks.
1The code corresponding to this work is available at https://github.com/AshjanAlsulaimani/
DWSI-advanced-models Erwan Moreau School of Computer Science and Statistics Trinity College of Dublin [email protected] DWSI follows the probabilistic graphical modelling approach to approximate the true meanings from the observed data. Thus, in this paper, we explore the relation of DWSI with topic modelling in general and to the dynamic topic modelling techniques in particular: they both aim to discover a latent variable (sense or topic respectively) from a sequential collection of documents. Despite a close relation between the tasks, topic modelling techniques are not fully explored or compared against in the current state of the art of DWSI.
The state of the art of DWSI consists of only two models: (Emms and Kumar Jayapal, 2016) and
(Frermann and Lapata, 2016). They are both designed specifically for DWSI; both are parametric; and both are dynamic, in the sense that they both introduce a time variable into the model in order to capture the evolution of the meaning over time. Emms and Kumar Jayapal (2016) propose a parametric generative model (NEO) where each sense is represented as a |V |-dimensional multinomial distribution over the vocabulary V , each document is represented as a mixture of senses, and the dependency of the sense proportions on time is represented as a K-dimensional multinomial distribution over the K senses. The parameters of the model have finite Dirichlet priors. A more complex model called SCAN (Frermann and Lapata, 2016) allows each sense distribution over the vocabulary to evolve sequentially from adjacent time slices, as well as the senses proportion. The multinomial parameters of words and senses have logistic normal priors.
The two above-mentioned models are parametric, in the sense that the number of senses (which reflects the structure of the hidden meanings in the data) is a hyper-parameter which has to be known a priori. This is not ideal given the nature of the DWSI task, which is meant to infer senses from the data. The same issue has been studied for the tasks of topic modelling and WSI; Hierarchical Dirichlet Processes (HDP), a nonparametric hierarchical model introduced by Teh et al. (2006), offer an powerful solution to this problem. HDP extends Latent Dirichlet Allocation (LDA) (Blei et al., 2003) by placing Dirichlet processes priors (DPs) (Ferguson, 1973) on the infinite-dimensional space of multinomial probability distributions. Thus the number of mixture components is infinite a priori and to be inferred from the data. In contrast, LDA posits a predefined number K of topics, each of which is a multinomial distribution over the vocabulary. Each document has specific topic proportions from a Dirichlet prior, and the topics are shared among the documents. Additionally, the HDP model allows sharing topics not only among documents but also across hierarchical levels by the use of multiple DPs.
The intuition behind our approach relies on the fact that the hierarchical DPs allow "new" senses to appear as needed, thanks to the theoretically infinite number of possible senses. Therefore, the hierarchical design of Dirichlet processes can capture the dynamic behaviour of the words, while inferring the optimal number of clusters directly from the data across time.
Word embeddings are another natural direction of potential improvement for DWSI. Introduced by Rumelhart and Abrahamson (1973); Bengio et al.
(2003, 2006), they provide a distributed representation where words with similar meanings are close in a lower-dimensional vector space. Recently, various models have been proposed which integrate word embeddings for topic modelling, however these models do not necessarily represent both words and topics using embeddings. Dieng et al.
(2019) provide an elegant solution to this problem:
Dynamic Embedded Topic Model (DETM) is a parametric generative model inspired by D-LDA
(Dynamic LDA) Blei and Lafferty (2006), in which each word is represented with a word embedding, and per-time topics are represented as embeddings as well. Topics and topic proportions evolve sequentially from adjacent time slices. DETM also directly models per-topic conditional probability of a word as the exponentiated inner product between the word embeddings and per-time topic embeddings. This results in a closer semantic correspondence between words and topics, and thus
## Leads To Better Topics Quality.
By contrast to previous contributions in DWSI
which were mostly theoretical, this paper is an empirical contribution focusing on adapting different existing topic modelling techniques to DWSI.
The aim is to set the state of the art DWSI models up against two serious competitors, in order to check whether they actually fit the task of DWSI
optimally. In this perspective, we adapt HDP and DETM to the task of DWSI, describing our approach in §3. We test the ability of these models to detect meaning change over time using the evaluation framework proposed by (Alsulaimani et al., 2020), described in §4: using a large corpus of biomedical time-stamped data, including 188 ambiguous target words, we compare the proposed models with the current state of the art models NEO
and SCAN. The results, presented in §5, show that HDP-based models achieve the best results over the dataset, establishing a new state of the art for DWSI.
## 2 Related Work
Topic modelling techniques are hierarchical probabilistic Bayesian models used originally for discovering topics in a collection of documents (Blei et al.,
2010). Topic models have also been adopted for the Word Sense Induction (WSI) task, as introduced by (Brody and Lapata, 2009; Yao and Van Durme, 2011): word senses are treated as topics, and a short window around the target word (context) is considered instead of a full document. Topic modelling techniques have been extended further to similar tasks, such as Novel Sense Detection.
Novel Sense Detection (NSD; also called Novel Sense Identification), introduced by Lau et al.
(2012), consists of determining whether a target word acquires a new sense over two independent periods of time, separated by a large gap. Several authors have used Hierarchical Dirichlet Processes (HDP) for this task over a small set of target words and/or small set of data (Lau et al., 2012, 2014; Cook et al., 2014). Yao and Van Durme
(2011); Lau et al. (2012) show in a preliminary study that HDP is also superior to LDA for WSI, due to its ability to adapt to varying degrees of granularity. Lau et al. (2012) extend this study using an oracle-based method to identify new senses from HDP predictions for the task of NSD, and for only five target words. Sarsfield and Tayyar Madabushi (2020) used HDP for NSD on a larger dataset (Schlechtweg et al., 2020), which was proposed in a recent shared task about Lexical Semantic Change Detection (LSCD), a refined version of NSD: LSCD intends to answer the question of whether the meaning of a target word has changed between two independent periods of time
(also separated by a large time gap). In the LSCD
task, methods based on static word embeddings
(where the meaning of the word is represented by a single vector) achieved the highest performance.
In contrast to NSD/LSCD, DWSI takes the time dimension into account and thus the task of DWSI is technically broader: it aims to discriminate senses and also models the temporal dynamics of word meaning across a long continuous period of time, e.g. year by year. As a result, DWSI can track the evolution of senses, the emergence of new senses and detect the year where a new sense appears. The DWSI task is introduced independently by Emms and Kumar Jayapal (2016) and Frermann and Lapata (2016); given a target word and a time-stamped corpus, both models estimate two main parameters: the senses as distributions over words, and the senses proportions over time. Frermann and Lapata (2016) extend this by also inferring the subtle meaning changes within a single sense over time, i.e. by allowing different word distributions over time for the same sense.
However, these models are parametric and require the number of senses to be chosen in advance. Previous approaches dealt with this issue by increasing the number of senses. For example, Emms and Kumar Jayapal (2016) vary the number of senses manually for every target word, while Frermann and Lapata (2016) choose an arbitrary fixed large number of senses for all the target words.
Additionally, evaluating and comparing such models on the DWSI task is difficult: the lack of large scale time-stamped and sense-annotated data hinders direct quantitative evaluation. The state of the art models, (Emms and Kumar Jayapal, 2016; Frermann and Lapata, 2016), were originally evaluated only qualitatively on a few hand-picked target words, with a manual investigation of the quality of the associated top words in each cluster; Frermann and Lapata (2016) also evaluated their model on several indirect tasks. Alsulaimani et al. (2020)
demonstrate that these evaluation methods are insufficient, and consequently propose a quantitative evaluation of these DWSI models based on a large set of data. In particular, they show that the senses size distribution plays a significant role in capturing the senses representations and emergence of new senses. The number of senses is clearly a crucial hyperparameter for a DWSI model, the choice of which should in theory depend on the characteristics of the data.
## 3 Approach 3.1 Parameters Notation
DWSI aims to discover the senses S across time Y for each target word in a sequential collection of documents, where senses are latent variables and the number of senses is unknown a priori. A
DWSI model estimates at least two multinomial distributions:
- P(W|S), the word given sense distribution.
The changes within senses across time can also be represented as P(W|*S, Y* ), the word given sense and year distribution. These distributions represent the sense.
- P(S|Y ), the sense given year distribution.
This distribution represents the relative prevalence of a sense over time.
## 3.2 Hdp-Dwsi
HDP allows senses (i.e. clusters) to appear when a new context occurs, as the number of senses is determined by the data. HDP-DWSI directly relies on this property: in the first step, all the documents, independently from their year, are clustered by HDP.
Appendix A provides details about the description of HDP. This means that in this step the documents are assumed to be exchangeable, as opposed to dynamic models in which documents are only exchangeable within a time period. In the second step, the year of the document (observed variable)
is reintroduced and the time-related multinomial parameters P(S = s|Y = y) are estimated by marginalising across the documents of each year j independently �d∈y �
freq(sd)
s� *freq*(s�d) , where *freq*(sd)
8910 the number of words predicted as sense s in the document d, and d ∈ y represents the condition that the document d belongs to year y.
HDP-DWSI is intended to be used as a nonparametric method, but a parametric mode is also proposed for the purpose of evaluation and comparison against parametric models. In the nonparametric mode, the model parameters are obtained directly as described above. In the parametric mode, an additional step is required to reduce the number of senses because HDP-DWSI tends to induce a higher number of clusters than the gold number of senses, i.e. to split senses into multiple clusters.
Depending on the context of the application, it can also be relevant to reduce the number of senses even in the nonparametric mode. This can also be done with the method described below for the parametric mode, called HDP-DWSIm.
HDP-DWSIm consists in merging the predicted senses which are the most semantically similar. Agglomerative hierarchical clustering (Ward Jr, 1963)
is used to merge senses, based on a sense cooccurrence matrix obtained from the HDP clustering output.
Pointwise Mutual Information (PMI) is used to
represent how strongly two predicted senses are
statistically associated, under the assumption of
independence:
$$P M I(s_{i},s_{j})=\log_{2}{\frac{P(s_{i},s_{j})}{P(s_{i})P(s_{j})}}$$
P(si)P(sj ) (1)
where i �= j and P(si, sj ) is the joint probability
of observing both si and sj in the same document.
P(si) (resp. P(sj )) is the probability of a predicted
sense with respect to the entire corpus, i.e. an
occurrence is counted for every document in which
the predicted sense si (resp. sj ) independently
occurs.
Moreover, since a pair of predicted senses with negative PMI is uninformative for the purpose of merging similar senses, Positive Pointwise Mutual Information (PPMI), as defined in Equation 2, is used for constructing the sense cooccurrence matrix.
$$PPMI=\left\{\begin{array}{ll}PMI(s_{i},s_{j})&\mbox{if}PMI(s_{i},s_{j})>0\\ 0&\mbox{else}\end{array}\right.\tag{2}$$
(P)PMI is sensitive to low frequency events, particularly in the event when one of the predicted senses (or both of them) is/are less frequent with respect to the whole corpus; thus it is possible that two senses mostly cooccur together by chance, yet obtain a high (P)PMI value. In such a case, the two predicted senses are not semantically associated, so this is a potential bias in the merging process.
To counter this bias, we use the linkage criterion defined in Equation 3 as the average of the PPMI
values weighted by their corresponding joint probabilities. The linkage criterion for two clusters C1, C2:
$$\sum_{\begin{subarray}{c}\forall s_{1}\in C_{1}\\ \forall s_{2}\in C_{2}\end{subarray}}w(s_{1},s_{2})\times PPMI(s_{1},s_{2})\tag{3}$$ where $w(s_{1},s_{2})=\dfrac{P(s_{1},s_{2})}{\sum_{\begin{subarray}{c}\forall s_{1}\in C_{1}\\ \forall s_{2}\in C_{2}\end{subarray}}P(s_{1},s_{2})}$
The evaluation method proposed by Alsulaimani et al. (2020) (see §4) relies on the gold number of senses, as it is originally intended for parametric methods. In order to compare an HDP-based model against parametric models in an equivalent setting, the HDP-DWSIm merging method is used to reduce the predicted number of senses to the gold-standard number of senses.
## 3.3 Detm-Dwsi
DETM represents not only the observed words but also latent topics/senses as embeddings, while preserving the traditional representation of a topic/sense as a probability distribution across words. The categorical distributions over the vocabulary is time dependent, i.e. P(W|*S, Y* ) and is derived from the corresponding word embeddings and sense embedding at a given time. DETM
also places time-dependent priors over senses proportions: the use of Markov chain over the sense proportions allows smoothness of the variations between the adjacent senses at neighboring times
(see Appendix A for the description of DETM). We propose two modes for DETM-DWSI as follows:
- In the regular DETM-DWSI, both the word and sense embeddings are trained simultaneously. This mode does not require any additional resource but the corpus must be large enough for the embeddings to be accurate.
- In DETM-DWSIi, the model is trained with prefitted word embeddings. This mode leverages the external information contained in the
embeddings, potentially obtaining a more accurate representation of the senses as a consequence. It also allows the application of the model to text containing words not present in the corpus, as long as their embedding is available.
In the experiments described below, the DETMDWSIi models are trained using the BioWordVec pretrained word embeddings2 (Zhang et al.,
2019). The fastText subword embedding model
(Bojanowski et al., 2017) is a variant of the continuous skip-gram model (Mikolov et al., 2013). The fastText subword embedding can learn a distinct vector for each word while exploiting subword information in a unified n-gram embedding space.
BioWordVec embeddings are trained with fastText on the PubMed text and MeSH terms, combined into a unified embedding space. In the biomedical domain, the advantage of a subword embedding model is that it can handle Out of Vocabulary
(OOV) words (Zhang et al., 2019).3 This leads to a more precise word representation, in theory better able to capture the semantics of specialised concepts. We use the *intrinsic* BioWordVec embeddings (as opposed to the extrinsic type), meant to represent the semantic similarity between words
(Zhang et al., 2019).
## 4 Experimental Setup 4.1 Data
We use the DWSI evaluation framework proposed by Alsulaimani et al. (2020): the biomedical literature is used as a source of labelled and timestamped data which covers the years 1946 to 2019.4 The dataset is collected from resources provided by the US National Library of Medicine (NLM):
PubMed (a platform which includes the major biomedical literature databases) and MeSH (a controlled vocabulary thesaurus, created manually to index NLM databases).5 The data is preprocessed 2https://github.com/ncbi-nlp/BioSentVec. 3Note that the PubMed and MeSH terms are biomedical resources, collected from the US National Library of Medicine
(NLM) and based on the database of 2019 and 2018 respectively. These are the same version for the DWSI evaluation data.4https://github.com/AshjanAlsulaimani/
DWSI-eval 5https://www.nlm.nih.gov/
as in (Alsulaimani et al., 2020). The data consists of 188 ambiguous target words and 379 goldstandard senses (Jimeno-Yepes et al., 2011): 75 ambiguous target words have 2 senses, 12 have 3 and one has 5 senses. The total data size is 15.36 × 109 words, and the average number of documents is 61,352 by sense. The input documents for every target word consist of the occurrences of the target word which are provided with a window of 5-word context on each side as well as the year of publication. The gold-standard sense label is also available for evaluation purposes.
## 4.2 Algorithms Settings
- The HDP-DWSI and HDP-DWSIm models are trained using the official C++ implementation of HDP.6 No additional preprocessing is needed.
- The DETM-DWSI and DETM-DWSIi models are trained using the implementation provided by Dieng et al. (2019).7 The preprocessing is adapted to the DWSI dataset: since the data is strongly imbalanced across time, stratified sampling is used in order to ensure a representative time distribution (with at least one instance by year) across the data partitions.
The data is split into 85% of instances for training and 15% for validation. The document frequency thresholds are unused so as to include all the words. For efficiency reasons, during training the number of instances is capped at 2,000 instances per year.
## 4.3 Evaluation Methodology
Since DWSI is an unsupervised task (clustering)
and our evaluation is based on the external sense labels, both the estimation of the model and the evaluation are performed on the full set of documents for each target word. The gold-standard number of senses of each ambiguous target word is provided for all the parametric models (excluding HDP-DWSI). The default parameters are used in all the systems,8 except the number of itera6https://github.com/blei-lab/hdp. 7https://github.com/adjidieng/DETM. 8This means that we do not tune any hyper-parameter for any of the systems. Since DWSI applications would usually not have access to any labelled data, the performance would be unrealistic if the parameters were tuned.
tions/epochs (set to 500 for all the systems),9 and specifically for DETM-DWSI the batch size is set to 1000 and the dimension of the embeddings is set to 200.
After estimating each model for each ambiguous target word, the posterior probability is calculated for every document. The sense with the highest probability is assigned.
## 4.4 Evaluation Measures
We follow Alsulaimani et al. (2020) for the evaluation measures with some adjustments, detailed below.
The "Global Matching" method, presented by Alsulaimani et al. (2020), consists in determining a one-to-one assignment between predicted senses and gold senses based on their joint frequency: the pair with the highest frequency is matched first, and this process is iterated until all the senses are matched. In the case of HDP-DWSI, the number of predicted senses may be higher than the gold number of senses, and the instances of the predicted senses which remain unmatched are considered as false negative. This allows to compare HDP-DWSI
with the parametric models, assuming that in theory the ideal nonparametric model would infer exactly the true number of senses. Of course, HDP-DWSIm is by definition more appropriate for a comparison in the parametric setting of HDP-based methods.
We also propose to use the V-measure as a different method of evaluation. The V-measure is introduced by Rosenberg and Hirschberg (2007), providing a different way to evaluate a clustering solution. In this case, it evaluates every cluster against every gold sense without relying on a matching method, thus providing an objective assessment even when the number of the clusters is higher than the true number of senses. The V-measure is based on entropy (entropy is a measure of the uncertainty associated with a random variable): it is defined as the harmonic mean of homogeneity and completeness, which are both based on the normalised conditional entropy.
Alsulaimani et al. (2020) also propose to evaluate the emergence of a new sense by considering whether the system predicts the true emergence year of a sense. This requires a method to determine the year from the P(S|Y ) distribution, for which the original algorithm "EmergeTime" was proposed in Jayapal (2017). We introduce "LREmergeTime" (see Appendix B Algorithm 1), an improved version of "EmergeTime" using linear regression instead of multiple thresholds within a window. Indeed, the original algorithm is very sensitive to the noise which sometimes occurs in the emergence pattern. Linear regression handles this issue better, since it measures the global trend across the window.10 The emergence year is evaluated as in (Alsulaimani et al., 2020): (1) with standard classification measures, considering the sense as correctly predicted if the year is within 5 years of the true emergence year; (2) with (normalized) Mean Absolute Error, representing the average difference in number of years but also penalizing the wrongly predicted presence/absence of emergence.
Finally we also use the distance between the true and predicted evolution of the senses over time
(P(S|Y )) as an evaluation method for DWSI, again following Alsulaimani et al. (2020).
## 5 Results 5.1 Qualitative Exploration
We explore the temporal meanings of "SARSassociated coronavirus" over the years (2002-2018)
as an example. The ambiguous word has two gold-standard senses described by UMLS concepts C1175175 and C1175743: *Severe Acute Respiratory Syndrome* (refers to the disease caused by the virus) and *SARS Virus* (refers to the virus related to the Coronavirus family causing the disease) respectively. The top words represented by the inferred parameter word given sense, identified by HDP10 The superiority of "LREmergeTime" was confirmed using a subset of manually annotated targets (the targets are chosen based on the visual clarity of the emergence pattern).
The evaluation results on this subset show that "LREmergeTime" performs closer to the annotated senses. Following the evaluation measures by Alsulaimani et al. (2020), the results of "EmergeTime" and "LREmergeTime" are respectively 0.7 and 0.8 for Fscore, 12.06 and 6.74 for MAE, 0.21 and 0.11 for Normalised MAE. See Appendix C for details of algorithms outputs.
DWSIm for the first sense are {patients, outbreak, sars, 2003, epidemic, health, case, transmission, hospital} and for the second sense are {cov, sars, coronavirus, patients, infection, protein, respiratory, acute, syndrome, cells}. Figure 1 shows the relative prevalence of the two inferred and gold senses over time, and Table 1 shows the top inferred words/usages associated with sense C1175175 at specific times.
In Figure 1, both senses data start in 2002, however the prevalence of sense C1175175 was decreasing progressively from 2002 to 2018 since SARS was successfully contained in 2004, while the prevalence of the sense C1175743 kept increasing since the research about the *SARS virus* became a priority for the public health around the world.
The temporal changes of the top words within C1175175 are highlighted in Table 1. Historically, the first known case of SARS appears in November 2002, causing the 2002-2004 SARS outbreaks in cities and hospitals. Global attention then started and in 2016, for instance, the top words shifted to facemask, post, era, sars. Finally, the year 2018 shows the concerns about a second wave of SARS.
![6_image_0.png](6_image_0.png)
| 2002 | 2003 | 2004 | =⇒ | 2016 | 2017 | 2018 |
|--------------------------------------------------------|----------|--------------|-------------------|-----------|----------|--------|
| case | patients | patients | outbreak outbreak | second | | |
| outbreak outbreak | outbreak | facemask | 2003 | 2003 | | |
| lessons | case | sars | post | patients | impact | |
| learned | health | transmission | 2003 | china | epidemic | |
| health | 2003 | hospital | era | data | wave | |
| chief | sars | case | sars | outbreaks | n't | |
| falls | hospital | patient | hong | health | link | |
| Table 1: Temporal evolution of the top-7 words for the | | | | | | |
## 5.2 Matching-Based Evaluation
Table 2 shows the performance of the six models according to standard classification and regression measures using "Global Matching". In general, DWSI models based on HDP perform well compared to NEO or SCAN. In the case of HDP-DWSI,
"Global Matching" causes two observable effects:
it increases precision, by allowing the system to choose the best predicted clusters matched with the gold senses; but it also decreases recall by introducing a large number of false negative cases due the discarded unmatched predicted clusters. Nevertheless the macro F1 score for HDP-DWSI is much higher than both NEO and SCAN, by 17.7% and 13.8% respectively. This shows that HDP-DWSI
can distinguish minority senses significantly better.
This can also be seen in Table 3 which shows the mean F1-score by senses size.
Systems Macro-average Micro-average P R F1 P R F1 MAE
DETM-DWSIi 0.553 0.561 0.557 0.704 0.704 0.704 0.401 DETM-DWSI 0.559 0.590 0.574 0.650 0.650 0.650 0.379 HDP-DWSI **0.726** 0.599 0.657 0.739 0.424 0.539 -
HDP-DWSIm 0.666 0.681 0.674 0.744 0.744 0.744 **0.26**
NEO 0.548 0.569 0.558 0.595 0.595 0.595 0.425 SCAN 0.562 0.591 0.577 0.558 0.558 0.558 0.444 Table 2: Global performance results for all systems using "Global Matching". P/R/F1 stand for Precision/Recall/F1-score (higher is better) MAE
stands for Mean Absolute Error (lower is better). Best performance in bold.
The superiority of HDP-DWSIm is even clearer:
the macro F1 score is 20.8% higher than NEO and 16.8% higher than SCAN; the performance difference in micro F1 score is even stronger: 21.0%
above DETM-DWSIi, 17.4% higher than DETMDWSI, 25.0% above NEO and 33.3% above SCAN.
Contrary to the differences between NEO and SCAN, HDP-DWSIm improves performance significantly across the board: both precision and recall are drastically higher, according to both micro and macro scores. This means that HDP-based models are fundamentally much better at discriminating the different senses (with a very significant p-value
< 0.05), as opposed to strategically favouring large senses for instance. This is confirmed in Table 3.
11 The two DETM-based models perform very well, in particular achieving micro F1-score much higher than NEO and SCAN. However their macroaverage performance is comparable to NEO and 11A Wilcoxon rank sum test is applied on the F1-scores of the senses for the results in Table 2 and 3.
SCAN, a clear sign that they do not separate the senses better. Table 3 confirms that the DETMbased models perform closely to NEO and SCAN.
Finally the MAE scores confirm that DETMDWSIi and DETM-DWSI peform better than NEO
and SCAN, but also that these four models are drastically outperformed by HDP-DWSIm.
![7_image_1.png](7_image_1.png)
Table 3: Comparison of the performance by sense according to the "Global Matching" method, ranked by proportion within a target. The sense rank is ordered by the size of senses (in number of instances), from the smallest sense (rank first) to the largest (rank last). "-"
means any number of senses (all the data). The systems are referred to by their initials.
![7_image_2.png](7_image_2.png)
Table 4: V-measure, homogeneity and completeness for all the systems. Both the mean and median across targets are reported, because the strong differences between targets in terms of size and distribution of the senses may cause a bias with the mean.
Table 4 shows the results of the systems for Vmeasure, with details about homogeneity and completeness. HDP-DWSI and HDP-DWSIm perform the best at all three levels, with values far above the other systems. HDP-DWSI has the highest homogeneity mean, because this model produces a higher number of smaller predicted senses; these predicted senses are therefore more homogeneous in general, but also less complete since the gold senses are often split. HDP-DWSIm merges the senses predicted by HDP-DWSI, thus obtaining lower homogeneity but compensating with higher completeness, leading to higher mean V-measure.
Figure 2 offers a more precise picture of the differences between systems about their V-measure distribution. It confirms that DETM-DWSI, DETMDWSIi and SCAN perform very similarly. It
![7_image_0.png](7_image_0.png)
shows that the higher performance of DETMDWSI, DETM-DWSIi and SCAN compared to NEO is due to a minority of targets, as their 75%
lowest scores are almost identical. These targets cause most of the high difference in mean between NEO and SCAN, as the smaller difference in medians shows.
By contrast, HDP-DWSI and HDP-DWSIm have
![7_image_3.png](7_image_3.png)
a much smaller proportion of low scores. Interestingly, HDP-DWSI has higher low scores than HDPDWSIm, i.e. HDP-DWSI performs better until both systems reach the median. However HDP-DWSIm skyrockets just after the median and surpasses HDP
by having much higher high scores. This explains why the median is slightly lower for HDP-DWSIm than HDP while the mean is much higher for HDPDWSIm.
12
## 5.4 Comparison Between Measures
![7_image_4.png](7_image_4.png)
Table 5: Pearson correlation coefficients: the relationship between the performance according to different measures. All the results are significantly correlated with p-value <= 5.6e-13. The systems are referred to by their initials.
V-measure can introduce a bias towards systems which predict a number of clusters larger than the number of gold senses. Such systems tend to have very high homogeneity scores and low completeness scores. However, this is not the case for HDPDWSI. The HDP-DWSI performance is high not only according to the V-measure but also confirmed 12This can be verified visually on the quantile plot, because the area under the curve is equal to the mean.
by the F1 scores. The number of senses predicted by HDP-DWSI in average is 8 senses, with the minimum 4 senses and the maximum 13 senses. The Pearson correlation between homogeneity and completeness is 0.853 and with very significant p-value, 2.2e-16. Also, it is found that there is virtually no correlation between the predicted number of senses and either the size of the data or V-measure by target: 0.065, 0.008 (non significant: p-value = 0.3746, 0.261). This indicates that HDP-DWSI is not biased towards generating more senses when the data is larger.
Table 5 shows that all the evaluation measures are significantly correlated. The macro-F1 scores are positively correlated in all four systems. However, the micro F-score favours systems that perform well on the majority sense, whereas the V-measure explicitly evaluates every cluster, taking into account not only the majority sense but also the minority one. Therefore systems which favour the majority sense, like NEO and DETM-DWSIi, have a lower correlation.
## 5.5 Emergence-Based Evaluation
![8_image_1.png](8_image_1.png)
Table 6: Sense emergence evaluation results for all the systems. The values in bold indicate the best score achieved among the systems.
DWSI systems can also be evaluated based on their ability to predict the year of emergence of a new sense. Table 6 shows the performance of the systems after applying "LREmergeTime" (see §4.4 )
on the predictions of the systems. HDP-DWSIm and NEO perfom closely to each other and much better than the other systems, according to both classification measures and MAE. NEO was designed and implemented with a focus on detecting sense emergence, this probably explains why it performs particularly well in this task (Jayapal, 2017).
## 5.6 Evaluation Based On The Predicted Evolution Over Time
Table 7 shows for every system how well their prediction of P(S|Y ) matches the true evolution of sense. Among all the systems, HDP-DWSIm predicts the closest P(S|Y ) to the true evolution
![8_image_0.png](8_image_0.png)
according to both distance measures. This confirms that not only HDP-DWSIm produces accurate predictions of the emergence year of novel senses but also predicts accurately the P(S|Y ) trends in general, with significantly less errors than the other systems.
## 6 Conclusion And Discussion
In this paper we adapted two topic modelling methods to the task of DWSI and evaluated them against two state of art DWSI systems, NEO and SCAN,
using the evaluation framework proposed by Alsulaimani et al. (2020). We also compared using the V-measure, and proposed an improved version of the emergence algorithm.
The results show that HDP-based models are able to fit the data better than the parametric models.
The results strongly show that merging HDP-DWSI
clusters performs better than the DETM-DWSI
models and LDA-like clustering, such as NEO and SCAN. The properties of HDP make it better at accurately fitting the topics/senses, in particular when there is a high imbalance between the senses proportions, i.e. with senses smaller in size (see Table 3). Furthermore, the fact that HDP-DWSIm outperforms all the other parametric models also demonstrates that these models do not find the optimal separation between the senses. It seems that the additional complexity of the time dimension together with the parametric constraints do not cope well with data imbalance across years.
One could naturally assume that models designed specifically for a task would perform better on it.
Implicitly, the research community encourages the creation of new models and tends to reward theoretical contribution over empirical ones. Thus there might be a bias in favor of designing sophisticated ad-hoc models (like NEO and SCAN) rather than adapting existing robust models (like HDP).
## 7 Limitations 7.1 Biomedical Domain
The dataset used in these experiments belongs to the biomedical domain and it is in English language.
There is no clear reason why the comparison between models would lead to different results on different domains, therefore we would expect the reported results (at least the major tendencies) to be also valid on the general domain.
Nevertheless this assumption would need to be tested experimentally. To our knowledge, there is no equivalent dataset available in the general domain which satisfies the two following conditions:
- Time-stamped documents spanning a relatively long period of time;
- Every document labelled with the sense of the target word.
## 7.2 Duration Of The Training Stage
In the table below, we present the computational cost of training the different models presented in this paper. Most of the experiments were carried out on a computing cluster containing 20 to 30 machines with varying characteristics, thus the total duration is approximative.
Computing times are reported in hours of CPU/GPU activity required to train the total of 188 target datasets. It is important to note that the two DETM models are trained on GPUs, whereas all the other models are trained on regular CPUs.
Thus in overall computing power, the DETM models are the most costly to train (more than HDP,
despite the higher duration).
| System | Duration | Notes |
|------------|------------|--------------------------|
| DETM-DWSIi | 523.4 | Trained on GPU |
| DETM-DWSI | 474.2 | Trained on GPU |
| HDP-DWSI | 2,471.4 | |
| HDP-DWSIm | 0.1 | Only the merging process |
| NEO | 25.1 | |
| SCAN | 77.9 | |
## Acknowledgements
We would like to thank the anonymous reviewers for their valuable comments. The first author is grateful to the Custodian of the Two Holy Mosques Scholarship Program from the Saudi Arabian Government as well as to Mr. Saud Alsulaimani for supporting this work. This work was conducted using high-performance clusters facilitated by the ADAPT Centre, Trinity College of Dublin.
## References
Eneko Agirre and Aitor Soroa. 2007. Semeval-2007 task 02: Evaluating word sense induction and discrimination systems. In *Proceedings of the fourth international workshop on semantic evaluations (semeval2007)*, pages 7–12.
Ashjan Alsulaimani, Erwan Moreau, and Carl Vogel.
2020. An evaluation method for diachronic word sense induction. In *Findings of the Association for Computational Linguistics: EMNLP 2020*, pages 3171–3180.
Association for Computational Linguistics.
Yoshua Bengio, Réjean Ducharme, Pascal Vincent, and Christian Jauvin. 2003. A neural probabilistic language model. journal of machine learning research, 3 (feb): 1137-1155, 2003. Google Scholar Google Scholar Digital Library Digital Library.
Yoshua Bengio, Holger Schwenk, Jean-Sébastien Senécal, Fréderic Morin, and Jean-Luc Gauvain. 2006. Innovations in machine learning. *Neural Probabilistic* Language Models, 194:137–186.
David Blei, Lawrence Carin, and David Dunson. 2010.
Probabilistic topic models: A focus on graphical model design and applications to document and image analysis. *IEEE signal processing magazine*, 27(6):55.
David M Blei and John D Lafferty. 2006. Dynamic topic models. In *Proceedings of the 23rd international conference on Machine learning*, pages 113–120.
ACM.
David M Blei, Andrew Y Ng, and Michael I Jordan.
2003. Latent dirichlet allocation. *Journal of machine* Learning research, 3(Jan):993–1022.
Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. Transactions of the Association for Computational Linguistics, 5:135–146. Samuel Brody and Mirella Lapata. 2009. Bayesian word sense induction. In *Proceedings of the 12th Conference of the European Chapter of the Association* for Computational Linguistics, pages 103–111. Association for Computational Linguistics.
Paul Cook, Jey Han Lau, Diana McCarthy, and Timothy Baldwin. 2014. Novel word-sense identification.
In *Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers*, pages 1624–1635.
Adji B Dieng, Francisco JR Ruiz, and David M Blei.
2019. The dynamic embedded topic model. arXiv preprint arXiv:1907.05545.
Martin Emms and Arun Kumar Jayapal. 2016. Dynamic generative model for diachronic sense emergence detection. In *Proceedings of COLING 2016, the* 26th International Conference on Computational Linguistics: Technical Papers, pages 1362–1373.
Thomas S. Ferguson. 1973. A bayesian analysis of some nonparametric problems. *Annals of Statistics*,
1:209–230.
Lea Frermann and Mirella Lapata. 2016. A bayesian model of diachronic meaning change. *Transactions of* the Association for Computational Linguistics, 4:31–
45.
Arun Jayapal. 2017. *Finding Sense Changes by Unsupervised Methods*. Phd thesis, Trinity College Dublin.
Antonio J Jimeno-Yepes, Bridget T McInnes, and Alan R Aronson. 2011. Exploiting mesh indexing in medline to generate a data set for word sense disambiguation. *BMC bioinformatics*, 12(1):223.
Jey Han Lau, Paul Cook, Diana McCarthy, Spandana Gella, and Timothy Baldwin. 2014. Learning word sense distributions, detecting unattested senses and identifying novel senses using topic models. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 259–270.
Jey Han Lau, Paul Cook, Diana McCarthy, David Newman, and Timothy Baldwin. 2012. Word sense induction for novel sense detection. In *Proceedings of the* 13th Conference of the European Chapter of the Association for Computational Linguistics, pages 591–601.
Association for Computational Linguistics.
Suresh Manandhar, Ioannis Klapaftis, Dmitriy Dligach, and Sameer Pradhan. 2010. SemEval-2010 task 14:
Word sense induction &disambiguation. In *Proceedings of the 5th International Workshop on Semantic* Evaluation, pages 63–68, Uppsala, Sweden. Association for Computational Linguistics.
Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In *Advances in neural information processing systems*, pages 3111–3119.
Andrew Rosenberg and Julia Hirschberg. 2007. Vmeasure: A conditional entropy-based external cluster evaluation measure. In Proceedings of the 2007 joint conference on empirical methods in natural language processing and computational natural language learning (EMNLP-CoNLL), pages 410–420.
David E Rumelhart and Adele A Abrahamson. 1973. A model for analogical reasoning. *Cognitive Psychology*, 5(1):1–28.
Eleri Sarsfield and Harish Tayyar Madabushi. 2020.
UoB at SemEval-2020 task 1: Automatic identification of novel word senses. In *Proceedings of the Fourteenth Workshop on Semantic Evaluation*, pages 239–
245, Barcelona (online). International Committee for Computational Linguistics.
Dominik Schlechtweg, Barbara McGillivray, Simon Hengchen, Haim Dubossarsky, and Nina Tahmasebi.
2020. Semeval-2020 task 1: Unsupervised lexical semantic change detection. *arXiv preprint* arXiv:2007.11464.
Yee Whye Teh, Michael I Jordan, Matthew J Beal, and David M Blei. 2006. Hierarchical dirichlet processes. *Journal of the american statistical association*,
101(476):1566–1581.
Joe H Ward Jr. 1963. Hierarchical grouping to optimize an objective function. Journal of the American statistical association, 58(301):236–244.
Xuchen Yao and Benjamin Van Durme. 2011. Nonparametric bayesian word sense induction. In *Proceedings of TextGraphs-6: Graph-based Methods for Natural Language Processing*, pages 10–14. Association for Computational Linguistics.
Yijia Zhang, Qingyu Chen, Zhihao Yang, Hongfei Lin, and Zhiyong Lu. 2019. Biowordvec, improving biomedical word embeddings with subword information and mesh. *Scientific data*, 6(1):1–9.
## A Hierarchical Bayesian Models Background A.1 Hierarchical Dirichlet Processes
Hierarchical Dirichlet Processes (HDP), introduced by Teh et al. (2006), uses Dirichlet processes priors
(DPs), on the infinite-dimensional space of multinomial probability distributions and thus the number of mixture components (senses) is infinite a priori.
The Hierarical DPs allow new senses to emerge naturally at any point in time and guarantee the senses are shared within and across the documents. The DP provides a distribution on distributions over an arbitrary space. H is a symmetric Dirichlet on the word simplex and γ is a concentration parameter that controls the amount of variability of senses on the base distribution G0, a distribution over senses drawn from a DP. α is also a concentration parameter that controls the amount of variability of per-document senses on Gd, a multinomial probability distribution over senses drawn from a DP.
Then, for each word w we draw a sense βd,n from Gd and finally draw the word w from that sense βd,n The graphical model and the generative story of HDP are described in Figure 3.
## A.2 Dynamic Embedded Topic Model
Dynamic Embedded Topic Model (DETM), introduced by Dieng et al. (2019), uses embedding representations of words and topics. For each term v, it considers an L-dimensional embedding representation pv. It also considers an embedding αtk ∈ RL
for each topic k at a given time step t = 1*, ..., T*.
The topics (i.e. distributions over the vocabulary)
are represented by the normalised exponentiated dot product between the embedding represenation of the word and the assigned topic's embedding at every time t for each word in a document d:
p(wd,n = v|zd,n = k, α td k ) ∝ exp{pTv α(td)
k }. The DETM uses a Markov chain over the topic embeddings αtk and thus they evolve under Gaussian noise with variance δ2. Moreover, DETM posits time-varying prior, the logistic-normal distribution LN over the topic proportions θd, which depends on a latent variable ηtd .
## B Emergence Algorithm
"LREmergeTime " algorithm in 1 is linear regression based algorithm, an improved version of
"EmergeTime" proposed by (Jayapal, 2017).
ALGORITHM 1 Emergence Detection algorithm based on linear regression Input π: π[i] is the probability at time i, with 1 ≤ i ≤ N
Input r: window size � Value used for window size: 5 Input s: slope threshold. � Value used for slope threshold:
0.04 function LREMERGTIME(*π, r, s*)
Surges= φ for n:=1 to (N-r+1) do if SurgeStart(n,π,s) **then**
Surges = Surges ∪ {n}
end if end for if Surges �= φ **then**
return min(Surges)
else return φ end if end function
function SURGESTART(n, *π, s*)
(slope, intercept) = fit linear regression model on X =
[ *n, . . . , n* + r − 1 ] and Y = [ π[n], . . . , π[n + r − 1] ]
if slope < s ∗ *max(π*) **then**
return false
**end in** PrevYears = $\{n^{\prime}:1\leq n^{\prime}<n\}$ **if**$|\{n^{\prime}:n^{\prime}\in$ PrevYear and $\pi[n^{\prime}]\leq0.1\ast max(\pi)\}|$ / $|$PrevYear$|\geq0.8$**then** return true **else** return false
end if
end function
$${\mathfrak{m}}\ {\mathrm{falsec}}$$
## C Data: Gold Standard Dataset
The table C below shows the gold standard output
(senses and year of emergence), as obtained by the
"LREmergeTime" emergence detection algorithm based on the original gold data in (Alsulaimani et al., 2020).
The total number of targets which has emergence is 146 and which has no emergence is 42. This consists of 233 senses with emergence and 158 senses with no emergence. The table includes three type of emergence:
- N: Number of senses
- LRET: "LREmergeTime" emergence year,
![12_image_0.png](12_image_0.png)
- Draw the base distribution over senses G0 ∼
DP(*γ, H*),
- For d ∈ 1*, ..., D*, draw the per-document distribution over senses Gd ∼ DP(*α, G*0),
- For each word w ∈ 1*, ..., N*d in each document d,
- Draw the sense for the word βd,n ∼ Gd - Draw the word wd,n ∼ Mult(βd,n)
![12_image_1.png](12_image_1.png)
- Draw initial sense embedding αk
(0) ∼ N (0, I)
- Draw initial sense proportion mean η0 ∼
N (0, I)
- For time step t = 1*, ...., T* :
- Draw sense embeddings αk
(t) ∼
N (αk
(t−1), δ2I) *for k* = 1*, ....., K*
- Draw sense proportion means ηt ∼
N (ηt−1, δ2I)
- For each document d:
- Draw sense proportions θd ∼
LN (ηtd , a2I)
- For each word n in the document d:
* Draw sense assignment zd,n ∼
Cat(θd)
* Draw word wd,n ∼
Cat(sof tmax(pTαtdzd,n ))
Figure 4: Left: graphical representation of DETM for DWSI. Observed variables represented by shaded nodes and latent variables by clear nodes. Right: the corresponding generative process. Note that in DWSI the sense related variables replace the topic related variables.
- ET: "EmergeTime" emergence year,
- FYO: indicates the "First Year Occurrence" of a sense, determined by the start date of each sense in the data,
- MS: indicates the "Manual Surge", i.e. the visual manual annotations by the authors. The value "NA" indicates cases when no emergence found and "?" indicates visually ambiguous cases found during the manual annotation by the authors.
![12_image_2.png](12_image_2.png)
| Target | N | CUI | ET | LRET | FYO | MS | |
|----------------------------------|--------|----------|------|--------|-------|------|----|
| ANA | 2 | C0003243 | 1962 | | | | |
| Astragalus | 2 | C0039277 | 1947 | ? | | | |
| Astragalus | 2 | C0330845 | 1947 | ? | | | |
| B-Cell Leukemia | 2 | C0023434 | 1986 | | | | |
| B-Cell Leukemia | 2 | C2004493 | 1986 | 1988 | 1988 | | |
| BAT | 2 | C0006298 | 1946 | 1949 | 1949 | | |
| BAT | 2 | C0008139 | 1946 | | | | |
| BLM | 2 | C0005740 | 1971 | | | | |
| BLM | 2 | C0005859 | 1978 | 1981 | 1981 | | |
| Borrelia | 2 | C0006033 | 1979 | | | | |
| Borrelia | 2 | C0024198 | 1980 | 1983 | 1983 | | |
| BPD | 2 | C0006012 | 1980 | ? | | | |
| BPD | 2 | C0006287 | 1980 | 1980 | 1981 | ? | |
| BR | 2 | C0006137 | 1946 | 1946 | ? | | |
| BR | 2 | C0006222 | 1946 | ? | | | |
| Brucella abortus | 2 | C0006304 | 1946 | | | | |
| Brucella abortus | 2 | C0302363 | 1946 | 1958 | | | |
| BSA | 2 | C0005902 | 1952 | ? | | | |
| BSA | 2 | C0036774 | 1952 | 1952 | ? | | |
| BSE | 2 | C0085105 | 1991 | ? | | | |
| BSE | 2 | C0085209 | 1991 | 1991 | ? | | |
| Ca | 3 | C0006675 | 1945 | ? | | | |
| Ca | 3 | C0006754 | 1945 | 1945 | ? | | |
| Ca | 3 | C0006823 | 1945 | ? | | | |
| CAD | 2 | C0011905 | 1983 | | | | |
| CAD | 2 | C1956346 | 1983 | 1983 | 1985 | 1985 | |
| Callus | 2 | C0006767 | 1972 | ? | | | |
| Callus | 2 | C0376154 | 1972 | ? | | | |
| CAM | 2 | C0007578 | 1981 | | | | |
| CAM | 2 | C0178551 | 2002 | 2001 | 2003 | 2003 | |
| CCD | 2 | C0008928 | 1965 | | | | |
| CCD | 2 | C0751951 | 1997 | 1995 | 1965 | 1998 | |
| CCl4 | 2 | C0007022 | 1946 | | | | |
| CCl4 | 2 | C0209338 | 1994 | 1992 | 1991 | 1994 | |
| CDA | 2 | C0002876 | 1979 | | | | |
| CDA | 2 | C0092801 | 1982 | 1979 | 1983 | 1988 | |
| CDR | 2 | C0011485 | 1973 | | | | |
| CDR | 2 | C0021024 | 1994 | 1998 | 1998 | | |
| Cell | 2 | C0007634 | 1969 | | | | |
| Cell | 2 | C1136359 | 2010 | 1998 | 1999 | 2002 | |
| Cement | 2 | C0011343 | 1957 | ? | | | |
| Cement | 2 | C1706094 | 1957 | ? | | | |
| CH | 2 | C0008115 | 1946 | ? | | | |
| CH | 2 | C0039021 | 1946 | 1946 | 1946 | ? | |
| Cholera | 2 | C0008354 | 1945 | | | | |
| Cholera | 2 | C0008359 | 1945 | 1946 | 1961 | | |
| CI | 2 | C0008107 | 1949 | | | | |
| CI | 2 | C0022326 | 1951 | 1955 | | | |
| Cilia | 2 | C0008778 | 1950 | 1950 | ? | | |
| Cilia | 2 | C0015422 | 1950 | ? | | | |
| CIS | 2 | C0007099 | 1972 | | | | |
| CIS | 2 | C0162854 | 1991 | 1989 | 1992 | 1992 | |
| CLS | 2 | C0265252 | 1998 | 2002 | 2002 | | |
| CLS | 2 | C0343084 | 1996 | | | | |
| Coffee | 2 | C0009237 | 1960 | | | | |
| Coffee | 2 | C0085952 | 2001 | 1998 | 1962 | 2002 | |
| Cold | 3 | C0009264 | 1945 | | | | |
| Cold | 3 | C0009443 | 1945 | 1945 | 1945 | | |
| Cold | 3 | C0024117 | 1998 | 1997 | 1959 | 1998 | |
| Compliance | 2 | C0009563 | 1974 | ? | | | |
| Compliance | 2 | C1321605 | 1974 | 1974 | ? | | |
| Cortex | 2 | C0001614 | 1948 | 1947 | 1950 | ? | |
| Cortex | 2 | C0007776 | 1945 | ? | | | |
| Cortical | 3 | C0001613 | 1945 | 1945 | 1945 | | |
| Cortical | 3 | C0007776 | 1945 | | | | |
| Cortical | 3 | C0022655 | 1947 | 1971 | | | |
| CP | 3 | C0007789 | 1946 | | | | |
| CP | 3 | C0008925 | 1946 | | | | |
| CP | 3 | C0033477 | 1971 | 1969 | 1946 | 1971 | |
| CPDD | 2 | C0008838 | 1971 | 1971 | 1972 | 1972 | |
| CPDD | 2 | C0553730 | 1971 | | | | |
| Crack | 2 | C0040441 | 1986 | | | | |
| Crack | 2 | C0085163 | 1987 | 1990 | 1990 | | |
| CRF | 2 | C0010132 | 1954 | 1956 | 1967 | | |
| CRF | 2 | C0022661 | 1954 | | | | |
| cRNA | 2 | C0056208 | 1981 | 1978 | 1982 | 1984 | |
| continued on next column or page | Target | N | CUI | ET | LRET | FYO | MS |
| cRNA | 2 | C1321571 | 1975 | | | | |
| CTX | 2 | C0010583 | 1960 | | | | |
| CTX | 2 | C0238052 | 1997 | 1992 | 1974 | 1996 | |
| DAT | 2 | C0002395 | 1974 | | | | |
| DAT | 2 | C0114838 | 1989 | 1988 | 1989 | 1991 | |
| DBA | 2 | C0025923 | 1972 | | | | |
| DBA | 2 | C1260899 | 1999 | 1998 | 2001 | 2001 | |
| dC | 2 | C0011485 | 1971 | 1969 | 1973 | 1973 | |
| dC | 2 | C0012764 | 1966 | | | | |
| DDD | 2 | C0011037 | 1962 | | | | |
| DDD | 2 | C0026256 | 1963 | 1973 | | | |
| DDS | 3 | C0010980 | 1960 | | | | |
| DDS | 3 | C0085104 | 1988 | 1987 | 1990 | 1990 | |
| DDS | 3 | C0950121 | 1999 | 1998 | 2001 | 2001 | |
| DE | 2 | C0011198 | 1945 | ? | | | |
| DE | 2 | C0017480 | 1945 | ? | | | |
| DI | 2 | C0011848 | 1946 | | | | |
| DI | 2 | C0032246 | 1946 | 1976 | | | |
| Digestive | 2 | C0012238 | 1945 | ? | | | |
| Digestive | 2 | C0012240 | 1945 | ? | | | |
| DON | 2 | C0012020 | 1975 | | | | |
| DON | 2 | C0028652 | 1979 | 1978 | 1981 | 1981 | |
| drinking | 2 | C0001948 | 1946 | ? | | | |
| drinking | 2 | C0684271 | 1946 | 1946 | ? | | |
| eCG | 2 | C0018064 | 1989 | 1989 | 1945 | ? | |
| eCG | 2 | C1623258 | 1945 | ? | | | |
| Eels | 2 | C0013671 | 1951 | | | | |
| Eels | 2 | C0677644 | 2003 | 2000 | 2004 | 2004 | |
| EGG | 2 | C0013710 | 1945 | | | | |
| EGG | 2 | C0029974 | 1945 | 1945 | 1945 | | |
| EM | 2 | C0014921 | 1973 | 1972 | 1975 | 1975 | |
| EM | 2 | C0026019 | 1946 | | | | |
| EMS | 2 | C0013961 | 1967 | | | | |
| EMS | 2 | C0015063 | 1974 | 1971 | 1975 | 1975 | |
| Epi | 2 | C0014563 | 1945 | | | | |
| Epi | 2 | C0014582 | 1988 | 1980 | 1980 | | |
| ERP | 2 | C0008310 | 1978 | 1977 | 1978 | 1978 | |
| ERP | 2 | C0015214 | 1956 | | | | |
| ERUPTION | 2 | C0015230 | 1945 | ? | | | |
| ERUPTION | 2 | C1533692 | 1945 | ? | | | |
| Erythrocytes | 2 | C0014772 | 1945 | | | | |
| Erythrocytes | 2 | C0014792 | 1945 | | | | |
| Exercises | 2 | C0015259 | 1945 | 1945 | ? | | |
| Exercises | 2 | C0452240 | 1945 | ? | | | |
| FA | 2 | C0015625 | 1946 | 1975 | | | |
| FA | 2 | C0016410 | 1945 | | | | |
| Fe | 2 | C0302583 | 1945 | | | | |
| Fe | 2 | C0376520 | 1995 | 1992 | 1946 | 1996 | |
| Fish | 2 | C0016163 | 1945 | | | | |
| Fish | 2 | C0162789 | 1990 | 1988 | 1953 | 1992 | |
| Follicle | 2 | C0018120 | 1949 | ? | | | |
| Follicle | 2 | C0221971 | 1949 | 1949 | ? | | |
| Follicles | 2 | C0018120 | 1949 | ? | | | |
| Follicles | 2 | C0221971 | 1949 | 1949 | ? | | |
| FTC | 2 | C0041713 | 1982 | | | | |
| FTC | 2 | C0206682 | 1992 | 1989 | 1993 | 1993 | |
| GAG | 2 | C0017346 | 1988 | 1986 | 1982 | 1989 | |
| GAG | 2 | C0017973 | 1949 | | | | |
| Ganglion | 2 | C0017067 | 1946 | 1946 | | | |
| Ganglion | 2 | C1258666 | 2006 | 1946 | 1946 | 2002 | |
| Gas | 2 | C0016204 | 1945 | 1945 | ? | | |
| Gas | 2 | C0017110 | 1945 | ? | | | |
| Glycoside | 2 | C0007158 | 1946 | ? | | | |
| Glycoside | 2 | C0017977 | 1946 | 1946 | 1946 | ? | |
| Haemophilus ducreyi | 2 | C0007947 | 1977 | | | | |
| Haemophilus ducreyi | 2 | C0018481 | 1977 | 1978 | 1978 | | |
| HCl | 2 | C0020259 | 1946 | | | | |
| HCl | 2 | C0023443 | 1975 | 1959 | 1954 | 1976 | |
| Hemlock | 2 | C0242872 | 2004 | 2002 | 2002 | ? | |
| Hemlock | 2 | C0949851 | 2002 | ? | | | |
| Heregulin | 2 | C0626201 | 1992 | 1994 | 1994 | | |
| Heregulin | 2 | C0752253 | 1992 | | | | |
| HGF | 2 | C0021760 | 1984 | 1984 | ? | | |
| HGF | 2 | C0062534 | 1984 | ? | | | |
| Hip | 2 | C0019552 | 1946 | ? | | | |
| Hip | 2 | C0022122 | 1947 | ? | | | |
| continued on next column or page | | | | | | | |
| Target | N | CUI | ET | LRET | FYO | MS | |
|----------------------------------|--------|----------|------|--------|-------|------|----|
| HIV | 2 | C0019682 | 1985 | | | | |
| HIV | 2 | C0019693 | 1987 | 1985 | 1987 | 1987 | |
| HPS | 2 | C0079504 | 1996 | 2000 | 2000 | | |
| HPS | 2 | C0242994 | 1994 | | | | |
| HR | 2 | C0010343 | 1947 | 1950 | 1992 | | |
| HR | 2 | C0018810 | 1947 | | | | |
| IA | 2 | C0021487 | 1946 | 1946 | 1946 | 1946 | |
| IA | 2 | C0022037 | 1946 | | | | |
| Ice | 3 | C0020746 | 1946 | | | | |
| Ice | 3 | C0025611 | 1946 | 1946 | 1946 | | |
| Ice | 3 | C0534519 | 1990 | 1990 | 1991 | 1991 | |
| INDO | 2 | C0021246 | 1961 | 1959 | 1963 | 1963 | |
| INDO | 2 | C0021247 | 1949 | | | | |
| Ion | 2 | C0022023 | 1945 | | | | |
| Ion | 2 | C0022024 | 1945 | 1945 | 1946 | 1946 | |
| IP | 2 | C0021069 | 2000 | 1997 | 1989 | 2001 | |
| IP | 2 | C0021171 | 1986 | | | | |
| Iris | 2 | C0022077 | 1945 | | | | |
| Iris | 2 | C1001362 | 1945 | 1946 | 2001 | | |
| JP | 2 | C0022341 | 1946 | | | | |
| JP | 2 | C0031106 | 1946 | 1946 | 1947 | 1983 | |
| LABOR | 2 | C0022864 | 1945 | 1945 | 1945 | 1945 | |
| LABOR | 2 | C0043227 | 1945 | | | | |
| Lactation | 2 | C0006147 | 1945 | ? | | | |
| Lactation | 2 | C0022925 | 1945 | ? | | | |
| Language | 2 | C0023008 | 1946 | | | | |
| Language | 2 | C0033348 | 1986 | 1954 | 1958 | 1985 | |
| Laryngeal | 2 | C0023078 | 1945 | ? | | | |
| Laryngeal | 2 | C0023081 | 1945 | ? | | | |
| Lawsonia | 2 | C0752045 | 2000 | | | | |
| Lawsonia | 2 | C1068388 | 2002 | 2002 | | | |
| Leishmaniasis | 2 | C0023281 | 1945 | | | | |
| Leishmaniasis | 2 | C1548483 | 2005 | 1996 | 1947 | 2000 | |
| lens | 3 | C0023308 | 1951 | 1948 | 1952 | 1978 | |
| lens | 3 | C0023317 | 1945 | 1945 | | | |
| lens | 3 | C0023318 | 1945 | | | | |
| Lupus | 3 | C0024131 | 1945 | | | | |
| Lupus | 3 | C0024138 | 1945 | 1945 | 1946 | 1946 | |
| Lupus | 3 | C0024141 | 1945 | | | | |
| lymphogranulomatosis | 2 | C0019829 | 1945 | 1945 | 1945 | | |
| lymphogranulomatosis | 2 | C0036202 | 1945 | | | | |
| MAF | 2 | C0079786 | 1980 | | | | |
| MAF | 2 | C0919482 | 2001 | 1994 | 1998 | 1998 | |
| Malaria | 2 | C0024530 | 1945 | | | | |
| Malaria | 2 | C0206255 | 1991 | 1988 | 1945 | 1992 | |
| MBP | 2 | C0014063 | 1973 | | | | |
| MBP | 2 | C0065661 | 1999 | 1998 | 1984 | 2001 | |
| MCC | 2 | C0007129 | 1988 | | | | |
| MCC | 2 | C0162804 | 1990 | 1989 | 1991 | 1993 | |
| Medullary | 2 | C0001629 | 1946 | | | | |
| Medullary | 2 | C0025148 | 1947 | 1947 | | | |
| MHC | 2 | C0024518 | 1978 | | | | |
| MHC | 2 | C0027100 | 1991 | 1986 | 1994 | | |
| Milk | 2 | C0026131 | 1945 | 1945 | ? | | |
| Milk | 2 | C0026140 | 1945 | ? | | | |
| Moles | 2 | C0027960 | 1946 | | | | |
| Moles | 2 | C0324740 | 1946 | 1946 | 1974 | | |
| MRS | 2 | C0024487 | 1959 | 1961 | 1961 | | |
| MRS | 2 | C0025235 | 1950 | | | | |
| NBS | 2 | C0027819 | 1947 | | | | |
| NBS | 2 | C0398791 | 2003 | 2002 | 2002 | 2006 | |
| NEUROFIBROMA... | 2 | C0085113 | 1990 | | | | |
| NEUROFIBROMA... | 2 | C0162678 | 1990 | 1990 | 1991 | 1991 | |
| NM | 2 | C0025033 | 1946 | | | | |
| NM | 2 | C0027972 | 1963 | 1962 | 1946 | 1946 | |
| NPC | 2 | C0028587 | 1998 | | | | |
| NPC | 2 | C0220756 | 2005 | 2002 | 2006 | 2006 | |
| Nurse | 2 | C0006147 | 1945 | ? | | | |
| Nurse | 2 | C0028661 | 1945 | ? | | | |
| Nursing | 2 | C0006147 | 1945 | ? | | | |
| Nursing | 2 | C0028677 | 1945 | ? | | | |
| OCD | 2 | C0028768 | 1975 | | | | |
| OCD | 2 | C0029421 | 1983 | 1980 | 1984 | 1984 | |
| OH | 2 | C0028905 | 1946 | ? | | | |
| OH | 2 | C0063146 | 1946 | ? | | | |
| Orf | 2 | C0013570 | 1980 | | | | |
| continued on next column or page | Target | N | CUI | ET | LRET | FYO | MS |
| Orf | 2 | C0079941 | 1986 | 1985 | 1982 | 1982 | |
| ORI | 2 | C0206601 | 1993 | | | | |
| ORI | 2 | C0242961 | 1993 | 1993 | 1993 | 1993 | |
| PAC | 2 | C0033036 | 1995 | | | | |
| PAC | 2 | C0949780 | 1997 | 2001 | 2001 | | |
| PAF | 2 | C0032172 | 1979 | | | | |
| PAF | 2 | C0037019 | 1980 | 1980 | | | |
| Parotitis | 2 | C0026780 | 1945 | 1945 | ? | | |
| Parotitis | 2 | C0030583 | 1945 | ? | | | |
| PCA | 5 | C0030131 | 1972 | 1971 | 1974 | 1974 | |
| PCA | 5 | C0030625 | 1957 | | | | |
| PCA | 5 | C0078944 | 1987 | 1986 | 1989 | 1989 | |
| PCA | 5 | C0149576 | 1957 | 1957 | 1957 | 1957 | |
| PCA | 5 | C0429865 | 1999 | 1998 | 1960 | 2001 | |
| PCB | 2 | C0032447 | 1971 | ? | | | |
| PCB | 2 | C0033223 | 1971 | ? | | | |
| PCD | 2 | C0022521 | 1971 | | | | |
| PCD | 2 | C0162638 | 1988 | 1991 | 1991 | | |
| PCP | 2 | C0030855 | 1972 | ? | | | |
| PCP | 2 | C0031381 | 1972 | 1972 | ? | | |
| PEP | 2 | C0031642 | 1971 | | | | |
| PEP | 2 | C0135981 | 1978 | 1976 | 1980 | 1980 | |
| PHA | 2 | C0030779 | 2002 | 2007 | 1976 | 1976 | |
| PHA | 2 | C0031858 | 1975 | 1975 | | | |
| Pharmaceutical | 2 | C0013058 | 1963 | 1962 | 1963 | 1963 | |
| Pharmaceutical | 2 | C0031336 | 1945 | | | | |
| Phosphorus | 2 | C0031705 | 1945 | ? | | | |
| Phosphorus | 2 | C0080014 | 1945 | 1945 | ? | | |
| Phosphorylase | 2 | C0017916 | 1971 | | | | |
| Phosphorylase | 2 | C0917783 | 2005 | 1998 | 1973 | 2001 | |
| pI | 2 | C0022171 | 1975 | ? | | | |
| pI | 2 | C0812425 | 1975 | ? | | | |
| Plague | 2 | C0032064 | 1945 | | | | |
| Plague | 2 | C0032066 | 1959 | 1957 | 1946 | 1960 | |
| Plaque | 2 | C0011389 | 1950 | ? | | | |
| Plaque | 2 | C0333463 | 1950 | ? | | | |
| Platelet | 2 | C0005821 | 1945 | 1945 | ? | | |
| Platelet | 2 | C0032181 | 1945 | ? | | | |
| Pleuropneumonia | 2 | C0026934 | 1945 | 1945 | 1945 | ? | |
| Pleuropneumonia | 2 | C0032241 | 1945 | ? | | | |
| POL | 2 | C0017360 | 1986 | 1989 | 1989 | | |
| POL | 2 | C0032356 | 1946 | | | | |
| posterior pituitary | 2 | C0032009 | 1946 | | | | |
| posterior pituitary | 2 | C0032017 | 1946 | 1947 | 1946 | | |
| Potassium | 2 | C0032821 | 1945 | | | | |
| Potassium | 2 | C0162800 | 1990 | 1989 | 1948 | 1992 | |
| PR | 2 | C0034044 | 1945 | | | | |
| PR | 2 | C0034833 | 1972 | 1972 | 1973 | 1973 | |
| Projection | 2 | C0016538 | 1970 | 1970 | ? | | |
| Projection | 2 | C0033363 | 1970 | ? | | | |
| PVC | 2 | C0032624 | 1974 | | | | |
| PVC | 2 | C0151636 | 1991 | 1988 | 1992 | | |
| RA | 3 | C0002893 | 1945 | 1946 | ? | | |
| RA | 3 | C0003873 | 1945 | ? | | | |
| RA | 3 | C0034625 | 1945 | ? | | | |
| Radiation | 2 | C0851346 | 1945 | | | | |
| Radiation | 2 | C1522449 | 1946 | 1946 | | | |
| RB | 2 | C0035335 | 1947 | | | | |
| RB | 2 | C0035930 | 1947 | 1951 | 1951 | | |
| RBC | 2 | C0014772 | 1945 | ? | | | |
| RBC | 2 | C0014792 | 1945 | ? | | | |
| rDNA | 2 | C0012931 | 1976 | | | | |
| rDNA | 2 | C0012933 | 1980 | 1978 | 1981 | 1981 | |
| Respiration | 2 | C0035203 | 1945 | ? | | | |
| Respiration | 2 | C0282636 | 1945 | ? | | | |
| Retinal | 2 | C0035298 | 1945 | 1945 | ? | | |
| Retinal | 2 | C0035331 | 1945 | ? | | | |
| Root | 2 | C0040452 | 1945 | 1945 | ? | | |
| Root | 2 | C0242726 | 1945 | ? | | | |
| RSV | 2 | C0035236 | 1957 | 1960 | 1960 | | |
| RSV | 2 | C0086943 | 1955 | | | | |
| SARS | 2 | C1175175 | 2002 | | | | |
| SARS | 2 | C1175743 | 2002 | 2002 | 2002 | 2002 | |
| SARS-assoc... | 2 | C1175175 | 2002 | | | | |
| SARS-assoc... | 2 | C1175743 | 2002 | 2002 | 2002 | 2002 | |
| SCD | 2 | C0002895 | 1946 | | | | |
| continued on next column or page | | | | | | | |
| Target | N | CUI | ET | LRET | FYO | MS |
|----------------|-----|----------|------|--------|-------|------|
| SCD | 2 | C0085298 | 1988 | 1987 | 1950 | 1989 |
| Schistosoma... | 2 | C0036319 | 1971 | | | |
| Schistosoma... | 2 | C0036330 | 1981 | 1977 | 1985 | |
| SLS | 2 | C0037231 | 1987 | 1991 | 1991 | |
| SLS | 2 | C0037506 | 1971 | | | |
| Sodium | 2 | C0037473 | 1945 | | | |
| Sodium | 2 | C0037570 | 1945 | 1945 | 1945 | 1945 |
| SPR | 2 | C0164209 | 1981 | | | |
| SPR | 2 | C0597731 | 1996 | 1994 | 1998 | 1998 |
| SS | 2 | C0039101 | 1948 | | | |
| SS | 2 | C0085077 | 1990 | 1960 | 1964 | 1990 |
| Staph | 2 | C0038160 | 1945 | 1945 | | |
| Staph | 2 | C0038170 | 1945 | | | |
| STEM | 2 | C0162731 | 1992 | | | |
| STEM | 2 | C0242767 | 1992 | 1994 | 1994 | |
| Sterilization | 2 | C0038280 | 1945 | 1945 | ? | |
| Sterilization | 2 | C0038288 | 1945 | ? | | |
| Strep | 2 | C0038395 | 1945 | 1945 | 1945 | |
| Strep | 2 | C0038402 | 1945 | | | |
| Synapsis | 2 | C0039062 | 1950 | | | |
| Synapsis | 2 | C0598501 | 1998 | 1950 | 1951 | 1951 |
| TAT | 3 | C0017375 | 1988 | 1985 | 1989 | 1989 |
| TAT | 3 | C0039341 | 1983 | 1982 | 1985 | 1985 |
| TAT | 3 | C0039756 | 1975 | | | |
| Tax | 2 | C0039371 | 1975 | | | |
| Tax | 2 | C0144576 | 1992 | 1989 | 1983 | 1993 |
| TEM | 2 | C0040975 | 2004 | | | |
| TEM | 2 | C0678118 | 2002 | | | |
| THYMUS | 3 | C0040112 | 1948 | 1946 | 1949 | 1949 |
| THYMUS | 3 | C0040113 | 1946 | | | |
| THYMUS | 3 | C1015036 | 1946 | 1946 | | |
| TLC | 2 | C0008569 | 1959 | 1959 | ? | |
| TLC | 2 | C0040509 | 1974 | 1972 | 1959 | ? |
| TMJ | 2 | C0039493 | 1946 | ? | | |
| TMJ | 2 | C0039496 | 1946 | ? | | |
| TMP | 2 | C0040079 | 1972 | 1975 | 1975 | |
| TMP | 2 | C0041041 | 1970 | | | |
| TNC | 2 | C0076088 | 1983 | 1982 | 1985 | 1985 |
| TNC | 2 | C0077400 | 1980 | | | |
| TNT | 2 | C0041070 | 1982 | 1982 | | |
| TNT | 2 | C0077404 | 1981 | | | |
| Tolerance | 2 | C0013220 | 1946 | ? | | |
| Tolerance | 2 | C0020963 | 1946 | 1946 | ? | |
| tomography | 2 | C0040395 | 1947 | ? | | |
| tomography | 2 | C0040405 | 1947 | ? | | |
| Torula | 2 | C0010414 | 1945 | ? | | |
| Torula | 2 | C0010415 | 1945 | ? | | |
| TPA | 2 | C0032143 | 1983 | 1982 | 1982 | 1985 |
| TPA | 2 | C0039654 | 1975 | | | |
| TPO | 2 | C0021965 | 1974 | 1974 | 1975 | 1975 |
| TPO | 2 | C0040052 | 1974 | | | |
| TRF | 2 | C0021759 | 1980 | 1980 | | |
| TRF | 2 | C0040162 | 1968 | | | |
| TSF | 2 | C0021756 | 1976 | 1974 | 1977 | 1977 |
| TSF | 2 | C0040052 | 1974 | | | |
| TYR | 2 | C0041484 | 1945 | ? | | |
| TYR | 2 | C0041485 | 1945 | ? | | |
| US | 2 | C0041618 | 1971 | 1964 | 1945 | 1966 |
| US | 2 | C0041703 | 1945 | | | |
| Ventricles | 2 | C0007799 | 1945 | ? | | |
| Ventricles | 2 | C0018827 | 1945 | ? | | |
| veterinary | 2 | C0042615 | 1945 | | | |
| veterinary | 2 | C0206212 | 1959 | 1963 | 1993 | |
| Wasp | 2 | C0043041 | 1975 | | | |
| Wasp | 2 | C0258432 | 1993 | 1991 | 1994 | 1994 |
| WBS | 2 | C0004903 | 1982 | | | |
| WBS | 2 | C0175702 | 1994 | 1991 | 1995 | 1995 |
| WT1 | 2 | C0027708 | 1946 | | | |
| WT1 | 2 | C0148873 | 1991 | 1989 | 1991 | 1991 |
| Yellow Fever | 2 | C0043395 | 1945 | 1945 | ? | |
| Yellow Fever | 2 | C0301508 | 1945 | ? | | |
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
✓ A1. Did you describe the limitations of your work?
Section 7 A2. Did you discuss any potential risks of your work?
Not applicable. Left blank.
✓ A3. Do the abstract and introduction summarize the paper's main claims?
Abstract and Section 1
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✓ **Did You Use Or Create Scientific Artifacts?** Section 1 And 4 And 5
✓ B1. Did you cite the creators of artifacts you used?
Section 4 and 5
✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
Section 4
✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
Section 4 and 5 B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
Not applicable. The data used in this research is a secondary data which was previously published.
The data source files were taken from NML and is made of biomedical scientific publications.
✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
Section 7
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
Section 4
## C ✓ **Did You Run Computational Experiments?** Section 4 And 5
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
Section 4 and 7 The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance.
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Section 4
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
Section 5
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
Section 4 and 5 D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
Not applicable. Left blank.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? Not applicable. Left blank.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
Not applicable. Left blank.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
Not applicable. Left blank.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
Not applicable. Left blank. |
zanwar-etal-2023-fuse | What to Fuse and How to Fuse: Exploring Emotion and Personality Fusion Strategies for Explainable Mental Disorder Detection | https://aclanthology.org/2023.findings-acl.568 | Mental health disorders (MHD) are increasingly prevalent worldwide and constitute one of the greatest challenges facing our healthcare systems and modern societies in general. In response to this societal challenge, there has been a surge in digital mental health research geared towards the development of new techniques for unobtrusive and efficient automatic detection of MHD. Within this area of research, natural language processing techniques are playing an increasingly important role, showing promising detection results from a variety of textual data. Recently, there has been a growing interest in improving mental illness detection from textual data by way of leveraging emotions: {`}Emotion fusion{'} refers to the process of integrating emotion information with general textual information to obtain enhanced information for decision-making. However, while the available research has shown that MHD prediction can be improved through a variety of different fusion strategies, previous works have been confined to a particular fusion strategy applied to a specific dataset, and so is limited by the lack of meaningful comparability. In this work, we integrate and extend this research by conducting extensive experiments with three types of deep learning-based fusion strategies: (i) feature-level fusion, where a pre-trained masked language model for mental health detection (MentalRoBERTa) was infused with a comprehensive set of engineered features, (ii) model fusion, where the MentalRoBERTa model was infused with hidden representations of other language models and (iii) task fusion, where a multi-task framework was leveraged to learn the features for auxiliary tasks. In addition to exploring the role of different fusion strategies, we expand on previous work by broadening the information infusion to include a second domain related to mental health, namely personality. We evaluate algorithm performance on data from two benchmark datasets, encompassing five mental health conditions: attention deficit hyperactivity disorder, anxiety, bipolar disorder, depression and psychological stress. | # What To Fuse And How To Fuse: Exploring Emotion And Personality Fusion Strategies For Explainable Mental Disorder Detection
Sourabh Zanwar RWTH Aachen University [email protected] Daniel Wiechmann University of Amsterdam [email protected] Xiaofei Li RWTH Aachen University [email protected]
## Yu Qiao
RWTH Aachen University [email protected]
## Elma Kerz Rwth Aachen University [email protected] Abstract
We present the results of conducting extensive experiments with three types of deep learningbased fusion strategies: (1) feature-level fusion, where a pre-trained masked language model for mental health detection (MentalRoBERTa)
was infused with a comprehensive set of engineered features, (2) model fusion, where the MentalRoBERTa model was infused with hidden representations of other language models and (3) task fusion, where a multi-task framework was leveraged to learn the features for auxiliary tasks. In addition to exploring the role of different fusion strategies, we extend previous work by broadening the information infusion to include a second domain related to mental health, i.e. personality. We evaluate the performance of our models on two benchmark mental health datasets encompassing five conditions: Attention Deficit Hyperactivity Disorder, Anxiety, Bipolar Disorder, Depression, and Psychological Stress. The results of our experiments show that the task fusion strategy is most promising for the detection of ADHD,
anxiety, and bipolar disorder, whereas featurelevel fusion is most advantageous for the detection of psychological distress and depression.
Moreover, the results indicate that both emotion and personality constitute valuable sources of information for predicting mental health.
## 1 Introduction
Mental health disorders (MHD) are increasingly prevalent worldwide and constitute one of the greatest challenges facing our healthcare systems and modern societies in general. In response to this societal challenge, there has been a surge in digital mental health research geared towards the development of new techniques for unobtrusive and efficient automatic detection of MHD. Within this area of research, natural language processing techniques are playing an increasingly important role, showing promising detection results from a variety of textual data. Recently, there has been a growing interest in improving mental illness detection from textual data by way of leveraging emotions: 'Emotion fusion' refers to the process of integrating emotion information with general textual information to obtain enhanced information for decision-making.
However, while the available research has shown that MHD prediction can be improved through a variety of different fusion strategies, previous works have been confined to a particular fusion strategy applied to a specific dataset, and so is limited by the lack of meaningful comparability.
As a result, the clinical community is increasingly seeking new approaches to the early detection and monitoring of mental health problems that can greatly improve the effectiveness of interventions, reduce their cost, and prevent them from becoming chronic. In this context, Natural Language Processing (NLP) is recognized as having transformative potential to support healthcare professionals and stakeholders in the early detection, treatment and prevention of mental disorders (for comprehensive reviews, see Calvo et al., 2017; Zhang et al., 2022; Zhou et al., 2022). Data from social media are particularly appealing to the NLP research community due to their scope and the deep embeddedness in contemporary culture (Perrin). Research utilizing NLP techniques in combination with social media has yielded new insights into population mental health and shown promise for incorporating datadriven analytics into the treatment of psychiatric disorders (Chancellor and De Choudhury, 2020; Garg, 2023).
Recently, this line of research has developed a growing interest in improving NLP approaches to mental illness detection by leveraging information from related domains, in particular emotion (see Zhang et al., 2023, for a comprehensive review). Behavioral and psychological research has long established links between emotions and mental disorders: For example, individuals with depressive symptoms have difficulty regulating their emotions, 8926 resulting in lower emotional complexity (Joormann and Gotlib, 2010; Compare et al., 2014). Disrupted emotion regulation has also been implicated in anxiety (Young et al., 2019). In the light of such links, information about emotions is useful in diagnosing mental disorders. 'Emotion fusion' refers to the process of "integrating emotion information with general textual information to obtain enhanced information for decision-making" (Zhang et al., 2023, p. 232). By the same rationale, information fusion approaches are likely to benefit from the inclusion of additional individual characteristics known to be associated with mental disorders, such as personality traits. Like emotion, personality has been linked to a diverse set of mental disorders based on genetic and behavioral evidence: For example, genomewide association studies have demonstrated that genetic risk factors for depression are largely shared with the neuroticism peronality trait (Adams et al.,
2019). Correlational studies comparing subjects diagnosed with Major Depressive Disorder (MDD)
and healthy control subjects found that vulnerability to depression was associated with several personality dimensions, such that MDD subjects were characterized by high neuroticism and low extraversion, accompanied by low scores on openness and conscientiousness (Nikolic et al., 2020). Analyses language of use of Twitter users with self-disclosed depression and PTSD revealed that text-derived personality played s an important role in predicting the mental disorders (Preo¸tiuc-Pietro et al., 2015).
In addition to the question of 'what to fuse', information fusion approaches also raise the algorithmic question of 'how to fuse' the auxiliary information effectively. The available research has shown that MHD prediction can be improved through a variety of different fusion strategies. However, previous work has typically focused on a specific fusion strategy applied to a specific dataset, limiting their comparability.
In this work, we integrate and extend research on information fusion for mental disorder detection by conducting extensive experiments with three types of deep learning-based fusion strategies: (i) feature-level fusion, where a pre-trained masked language model for mental health detection (MentalRoBERTa, htt) was infused with a comprehensive set of engineered features, (ii) model fusion, where the MentalRoBERTa model was infused with hidden representations of other language models and (iii) task fusion, where a multi-task framework was leveraged to learn the features for auxiliary tasks. In addition to exploring the role of different fusion strategies, we expand on previous work by broadening the information infusion to include a second domain related to mental health, i.e.
personality. We evaluate our model on data from two benchmark datasets, encompassing five mental health conditions: attention deficit hyperactivity disorder, anxiety, bipolar disorder, depression and psychological stress.1 The remainder of the paper is structured as follows: Section 2 presents a concise discussion of related work applying each of the three information fusion strategies. Section 3 introduces the datasets used to perform the mental health detection experiments. In Section 4, we describe our three mental status detection models that instantiate the three fusion strategies. Section 5 details the experimental setup including the specification of the fine-tuned MentalRoBERTa model baseline model. Section 6 presents and discusses the results of our experiments. Finally, we conclude with possible directions for future work in Section 7.
´
## 2 Related Work
In this section we provide a concise discussion of selected works for each of the three fusion strategies. A comprehensive overview of work information fusion for mental illness detection from social media data has recently been provided by Zhang et al. (2023). One strand of recent work in the feature-level fusion approach is characterized by the integration of information from several groups of features extracted using NLP tools: Song et al. (2018) utilized a feature attention network
(FAN) to combine indicators of mental disorders from four groups: (1) word-level features related to depressive symptoms taken from the Diagnostic and Statistical Manual of Mental Disorders (DSM5, APA, 2013), (2) word-level sentiment scores of obtained from the SentiWordNet dictionary (Baccianella et al., 2010), (3) features related ruminative thinking, expressed as the amount of repetition of topics in a social media post (Nolen-Hoeksema et al., 2008) and (4) writing style features, measured in terms of the sequencing of part-of-speech in a social media. The FAN consists of four feature networks - one for each feature groups - fed into a post-level attention layer. The authors eval-1Our code will be made available upon publication.
uated the performance of their approach on the Reddit Self-reported Depression Diagnosis dataset
(RSDD, Yates et al. (2017)), a large scale general forum dataset contaning data from 9,210 users with an average of 969 posts for each user. Their model was competitive with a convolutional neural network baseline model, despite using a much smaller number of posts in training data (only 500 posts per user). A second strand of feature-fusion approaches combines emotion features extracted using NLP tools with textual embeddings from pretrained language models, before feeding these into a CNN/LSTM structure to construct the MHC classification model. For example, Uban et al. (2021)
used a hierarchical attention network with LSTM
post-level and user-level encoders that combined multi-dimensional representations of texts. Specifically, their approach combined (i) content features, captured through word sequences encoded as 300-dimenional embeddings based on pre-trained GloVe vectors (Pennington et al., 2014), (ii) style features, expressed by numerical vectors representing stopword frequencies as bag-of-words, normalized by text lengths and usage of pronouns or other parts of speech, and (iii) emotion and sentiment features, represented by numerical vectors of word category ratios from two emotion- and sentimentrelated lexicons, LIWC (Pennebaker et al., 2001)
and NRC emotion (Mohammad and Turney, 2013).
They evaluated the model on the eRisk Reddit datasets on depression, anorexia and self-harm
(Losada et al., 2019), reaching competitive result across all three mental disorders, outperforming a strong RoBERTa baseline model in the detection of two of them (self-harm and depression).
Turning to the model fusion approach, Sawhney et al. (2020) presented a time-aware transformer based model for the screening of suicidal risk on social media. Their model, called STATENet, uses a dual transformer-based architecture to learn the linguistic and emotional cues in tweets. STATENet combines the 768-dimensional encoding obtained from Sentence BERT, capturing the language cues of the tweet to be assessed, with an aggregate representation of the emotional spectrum, obtained from a pre-trained BERT model fine-tuned on the the Emonet dataset (Abdul-Mageed and Ungar, 2017).
This second model, referred to as the Plutchik Transformer, tokenizes each post and adds the
[CLS] token at the beginning of each post. The authors then express the the aggregate representation of the emotional spectrum as the the final hidden state corresponding to this [CLS] token
(768-dimensional encoding). They evaluated the STATENet models on the task of tweet-level prediction of suicide idation on the Twitter timeline dataset (Sinha et al., 2019), which contained 32,558 tweets. STATENet significantly outperforms competitive baselines models for suicidal risk assessment, demonstrating the utility of combining contextual linguistic and emotional cues for suicide risk assessment.
Recently, Turcan et al. (2021) explored the use of multi-task learning and emotion-infused language model finetuning for psychological stress detection.
In this work, the authors introduced an innovative task fusion approach that utilized a multi-task learning setup to perform stress detection and emotion detection at the same time on the same input data.
As currently available datasets for stress detection are not labeled for emotion, they first separately trained BERT models on different versions of the GoEmotions dataset (Demszky et al., 2020) and employed these to derive emotion labels for the stress detection dataset used in their experiments
(Dreaddit, Turcan and McKeown, 2019). The authors then used these emotion labels as 'silver data' to train on them alongside stress in a multi-task learning setting with hard parameter sharing (Caruana, 1997). Their models achieved comparable performance to a state-of-the-art fine-tuned BERT
baseline. Importantly, based on analyses designed to probe their models and discover what information they learn to use, the authors demonstrated that their task fusion approach improved the explainabilty of deep learning-sbased mental health prediction models. Specifically, by performing correlational analyses of the models predictions on each task, they were able to explore the usefulness of the emotion prediction layers in explaining stress classifications.
As can be seen from this overview, with the exception of Turcan et al. (2021), previous studies have focused on specific fusion strategies applied to a variety of mental health conditions. By applying different fusion strategies to five mental disorders (AHDH, anxiety, bipolar disorder, depression)
and related symptomatology (psychological stress),
we aim to facilitate the evaluation of current approaches to information fusion.
Mental Health Dataset Number Avg. length SD Total Avg. length SD Total Condition of posts (words) (words) (words) (chars) (chars) (chars) ADHD SMHD 5272 117.98 121.64 621992 638.60 677.77 3366710 Anxiety 4963 116.45 132.17 577925 619.73 711.21 3075701 Bipolar 3632 116.56 114.15 423342 622.31 624.05 2260240 Depression 7818 114.70 113.11 896735 610.82 608.08 4775377 Control 10000* 97.0 84.8 969580 525 522 5251129 Stress Dreaddit 1857 93.0 35.3 172782 459.31 178.50 852949 Control 1696 85.5 29.9 145081 434.91 154.62 737622
## 3 Data
Four datasets were used in the present work: The data used in the task of mental health detection were obtained from two publicly available social media datasets: (1) the Self-Reported Mental Health Diagnoses (SMHD) dataset (Cohan et al., 2018) and (2) the Dreaddit dataset (Turcan and McKeown, 2019). Both SMHD and Dreaddit were constructed from data from Reddit, a social media platform consisting of individual topic communities called subreddits, including those relevant to MHC detection. The statistics of these datasets are provided in Table 1.
SMHD is a large dataset of social media posts from users with nine mental health conditions
(MHC) corresponding to branches in the DSM5, an authoritative taxonomy for psychiatric diagnoses (APA, 2013). User-level MHC labels were obtained through carefully designed distantly supervised labeling processes based on diagnosis pattern matching. The pattern matching leveraged a seed list of diagnosis keywords collected from the corresponding DSM-5 headings and extended by synonym mappings. To prevent that target labels can be easily inferred from the presence of MHC
indicating words and phrases in the posts, all posts made to mental health-related subreddits or containing keywords related to a mental health condition were removed from the diagnosed users' data.
Dreaddit is a dataset of social media posts from subreddits in five domains that include stressful and non-stressful text. For a subset of 3.5k users employed in this paper, binary labels (+/- stressful)
were obtained from crowdsourced annotations aggregated as the majority vote from five annotators for each data point.
As the SMHD and Dreaddit datasets are labeled only with mental health status, two additional datasets were used to provide auxiliary information about personality and emotion. Following the approach used in Turcan et al. (2021), we first separately trained RoBERTa models on the GoEmotions dataset (Demszky et al., 2020) and the Kaggle MBTI dataset (Li et al., 2018) and used these models to predict emotion and personality labels for SMHD and Dreaddit. A table with dataset statistics for these resources is provided in the appendix.
GoEmotions is the largest available manually annotated dataset for emotion prediction. It consists of 58 thousand Reddit comments, labeled by 80 human raters for 27 emotion categories plus a neutral category. The authors provided a mapping of these 27 categories to Ekman's six basic emotions (anger, disgust, fear, joy, sadness, and surprise), which are assumed to be physiologically distinct (Ekman, 1992, 1999). Drawing on the results of experiments with different emotion mappings reported in Turcan et al. (2021), these six basic emotions are used in the present work.
The Kaggle MBTI dataset was collected through the PersonalityCafe forum2and thus provides a diverse sample of people interacting in an informal online social environment. It consists of samples of social media interactions from 8675 users, all of whom indicated their Myers–Briggs Type Indicator (MBTI) personality type (Meyers et al., 1990). The MBTI is a widely administered questionnaire that describes personality in terms of 16 types that result from combining binary categories from four dimensions: (a) Extraversion/Introversion (E/I) - preference for how people direct and receive their energy, based on the external or internal world, (b) Sensing/Intuition (S/N) - preference for how people take 2https://www.personalitycafe.com/
in information, through the five senses or through interpretation and meanings, (c) Thinking/Feeling
(T/F) - preference for how people make decisions, relying on logic or emotion over people and particular circumstances, and (d) Judgment/Perception
(J/P) - how people deal with the world, by ordering it or remaining open to new information.
## 3.1 Data Preprocessing
For the SMHD dataset, we removed all posts with a length greater than 512 words, as these posts could not be processed by the large pre-trained models like RoBERTa and its variants. We then randomly sampled one post from each user and focused our analysis on the four most frequently attested mental health conditions. Furthermore, all dtasets were subjected to various standard pre-processing steps, including removal of HTML, URLs, extra spaces and emojis in the text, and the correction of inconsistent punctuation.
## 4 Models
We experiment with seven information-infusion models that differ (i) in the type of information to be infused (personality, emotion, both) and (ii) the fusion strategy applied to incorporate that information into the mental health detection models. The architectures of these models is shown in Figure 1.
## 4.1 Feature-Level Fusion
Our feature fusion model combines a MentalRoBERTa model (Ji et al., 2022) with a bidirectional long short-term (BiLSTM) network trained on 544 psycholinguistic features that fall into six broad categories: (1) features of morpho-syntactic complexity (N=19), (2) features of lexical richness, diversity and sophistication (N=52), (3) stylistic features (incl. register-based n-gram frequency features (N=57), (4) readability features (N=14), and
(5) lexicon features designed to detect sentiment, emotion and/or affect (N=325). (6) Cohesion and Coherence features (N=77). All measurements of these features were obtained using an automated text analysis system that employs a sliding window technique to compute sentence-level measurements. These measurements capture the within-text distributions of scores for a given psycholinguistic feature, referred to here as 'text contours' (for its recent applications, see e.g. Wiechmann et al.
(2022) for predicting eye-movement patterns during reading and Kerz et al. (2022) for detection of Big Five personality traits and Myers–Briggs types). Tokenization, sentence splitting, part-ofspeech tagging, lemmatization and syntactic PCFG
parsing were performed using Stanford CoreNLP
(Manning et al., 2014). The given text is fed to a pre-trained language model and its output is passed through a BiLSTM layer with 2 layers and hidden size of 512. The second part of the model is the PsyLin model which is a 3-layer BiLSTM with hidden size of 1024 which is further passed through a fully connected layer to obtain a 256 dimensional vector. The input to this model is a set of over 600 handcrafted features across 5 categories. We constructed the feature-level fusion models by (1)
obtaining a set of 256 dimensional vector from the BiLSTM network and then (2) concatenating these features along with the output from the Mental RoBERTa model component. This is then fed into a 2-layer feedforward classifier. To obtain the soft labels (probabilities that a text belongs to the corresponding emotion label), sigmoid was applied to each dimension of the output vector.
## 4.2 Model Fusion
In our model fusion approach, the MentalRoBERTa model was infused with hidden features of a finetuned RoBERTa emotion model and fine-tuned RoBERTa personality model (see also Section 3).
Both these models are fine-tuned 'roberta-base' models with a linear classification layer on top of them. We use the output values obtained from this layer to provide the infused model information on emotion and/or personality. Specifically, we pass the output obtained from the MentalRoBERTa through a sequential layer consisting of two linear layers and concatenate the features with the second part. We finally pass this through a linear layer to obtain the soft predictions for the respective MHC.
Similar to the previous model types, we train separate models for all five MHCs. For each MHC, we created three different binary classification models:
one with just emotions (MentalRoBERTa + Emotion), one with just personality (MentalRoBERTa +
Personality), and one with 'full infusion' (MentalRoBERTa + Emotion + Personality).
## 4.3 Task Fusion
Our task fusion approach is an extended version of the multi-task learning setup used Turcan et al.
(2021). Within this setup, we perform multiple tasks at the same time using the same input data.
As the SMHD data is labeled only with MHC cate-
| Emotion | F1-score | Personality | F1-score |
|-----------|------------|---------------|------------|
| Anger | 55 | I/E | 74 |
| Disgust | 39 | N/S | 83 |
| Fear | 61 | T/F | 73 |
| Joy | 81 | P/J | 63 |
| Sadness | 62 | Macro Avg | 73 |
| Surprise | 58 | | |
| Neutral | 62 | | |
| Macro Avg | 60 | | |
![5_image_0.png](5_image_0.png)
gories and Dreaddit only has labels for stress, we followed the approach described in Turcan et al.
(2021) to derive emotion and personality labels for the two datasets. To this end, we first separately trained RoBERTa models on the GoEmotions and Kaggle MBTI datasets and use them to generate
'silver labels' for emotion and personality. The performance of these models is presented in Table 2.
We then trained the model in a multi-task setup on two tasks (mental health detection and emotion recognition or personality detection) or on all three tasks. In each task fusion model, the loss is the weighted sum of the loss from MHC part and secondary task part, where the weights are tunable L = WMHC × LMHC + (1 − WMHC) × LSEC
Separate binary classification models were constructed for each of four self-reported diagnosed mental health conditions (MHC) from the SMHD dataset (ADHS, anxiety, depression, bipolar) and stress from the Dreaddit dataset. For each MHC
we constructed an emotion-infused model (MentalRoBERTa + Emotion), a personality-infused model (MentalRoBERTa + Personality), and a 'fullinfusion' model (MentalRoBERTa + Emotion +
Personality).
## 5 Experimental Setup 5.1 Baseline
We compared our models against a fine-tuned MentalRoBERTa model. We used the pretrained
'MentalRoBERTa-base' models from the Huggingface Transformers library (Wolf et al., 2019). The models consist of 12 Transformer layers with hidden size 768 and 12 attention heads. We run experiments with (1) a linear fully-connected layer for classification as well as with (2) an intermediate bidirectional LSTM layer with 256 hidden units.
The following hyperparameters are used for finetuning: a fixed learning rate of 2 × 10−5is applied and L2 regularization of 1 × 10−6. All models were trained for 8 epochs, with batch size of 4 and maximum sequence length of 512 and dropout of 0.2. We report the results from the best performing models.
## 5.2 Training Details
We trained all the models using BinaryCrossEntropy loss and Adam optimizer (adamw). We set the learning rate as 2e-5 and weight decay of 1e-5.
We train the different models with different batch sizes. The BiLSTM network component of the feature fusion model had a batch size of 128 and for training all the other models we set a batch size of 32. We trained that component model for 200 epochs and all the other models for 8 epochs and saved the best preforming models on validation set.
We evaluated these models on the test set and report the performance in terms of macro-F1 scores.
We selected the hyperparameters based on the the macro F1 score obtained on the the development set. We used grid search for getting the optimal values for the following: (1) for task fusion models: loss weights for primary and secondary tasks (0.5,0.5), (0.6, 0.4), (0.7,0.3) with the best f1 scores attained at equal weights for both tasks;
(2) for the feature fusion model: hidden size 128, 256, 512, 1024, number of LSTM layers 1,2,3,4, dropout 0.2,0.4, we found the best performance with hidden size of 512, 3 layers and 0.2 dropout.
## 6 Results And Discussion
Table 1 provides a concise overview of the performance in detecting five mental disorders (ADHD,
anxiety, bipolar disorder, depression, and stress) for three fusion strategies (feature-level fusion, model fusion, and task fusion) in comparison to the baseline MentalRoBERTa model. In general, it is shown that our fusion-models outperform the MentalRoBERTa baseline model for three of the five mental health conditions (ADHD, anxiety, bipolar disorder), and performed similarly to the baseline model for depression and stress. For the the ADHD condition the best performing model, the
'Task Fusion - emotion' model, achieved an improvement of 4% F1 over the MentalRoBERTa baseline model. For anxiety and bipolar the best performance was achieved by the 'Task Fusion -
personality model', an improvement over the baseline of 2% F1. Overall, these results indicate that task fusion is the most effective fusion strategy for detecting these three mental health conditions. Task fusion models were able to learn the features for the auxiliary tasks (emotion classification and personality detection) and thereby improve the performance of the primary task (mental health detection)
for three conditions. The results also suggest that both emotions and personality are important in the detection of specific mental health disorders: We observed that detection of ADHD benefited most from infusion of emotion information, whereas detection of anxiety and bipolar disorders benefited most from infusion of personality information. The finding that fusion model performed similarly to MentalRoBERTa baseline model for stress is consistent with the findings reported in Turcan et al
(2021): Their emotion fusion models constructed for the task of binary stress prediction achieved comparable performances to a fine-tuned BERT baseline model (F1 BERT = 78.88, F1 Emotion fusion model with Ekman GoEmotions relabeling
= 80.24). The F1 score of our baseline Mental-
RoBERTa model was 3.3% higher than that of their baseline BERT model. For stress and depression, the best performance was obtained with the feature-level fusion approach, which yielded slight improvements over the MentalRoBERTa baseline.
At the same time, we observed that infusing only information from the most informative source was more effective than full infusion, i.e. emotion and personality. A possible reason for this finding is noise or erroneous hidden features generated by the the auxiliary models in the case of model fusion (see Zhang et al., 2023; Pan and Yang, 2010, for discussion). A potential reason for lower performance of the full infusion models in the task learning approach is competition among the auxiliary tasks with regard to providing evidence for the relevance of particular features (see Ruder, 2017, for a discussion of 'attention focusing' in multi task learning). We intend to explore these issues in future research.
Building upon the approach described in Turcan et al (2021), we go a step further to probe our full task fusion models and discover the exact nature of the information it learned to use, i.e. how the six basic emotion categories (anger, disgust, fear, joy, sadness, and surprise) and four personality dimensions (Extraversion/Introversion (E/I), Sensing/Intuition (S/N), Thinking/Feeling (T/F) and Judgment/Perception (J/P)) guided the prediction of mental health status. To this end, we calculated Pearson correlation coefficients between the predicted probabilities for each of the five mental health conditions and the probabilities for the four personality and six emotion categories. Table 4 presents an overview of the results of this analysis.
A visualization of the results can be found in Figure 2 in the appendix. The results revealed that the full task fusion model learned moderate to strong correlations between specific mental health statuses and specific emotion and personality categories: More specifically, the ADHD condition was strongly associated with sadness and disgust and moderately associated with anger and anxiety, whereas it was strongly negatively correlated with joy. Anxiety was strongly linked to joy and moderately associated with sadness, while being strongly negatively correlated with disgust. Bipolar disorder is characterized by strong negative associations with fear and disgust, with tendencies towards anger and sadness. Depression was strongly linked to the negative emotions of fear, anger, disgust and sad-
Model **ADHD Anxiety Bipolar Depression Stress** MentalRoBERTa 64.28 71.50 71.83 71.34 82.22
Feature-level Fusion 64.24 71.09 71.36 **71.88 82.59**
Model Fusion - emotion 65.75 70.59 71.14 71.43 80.80
Model Fusion - personality 65.12 71.42 71.58 70.44 81.08
Model Fusion - emotion & personality 64.04 71.57 69.68 68.91 81.07
Task Fusion - emotion **68.02** 72.32 71.49 70.18 81.01
Task Fusion - personality 66.99 **73.40 73.23** 68.33 80.19
Task Fusion - emotion & personality 65.35 72.36 72.14 71.42 82.03
ness. In addition - like anxiety - it was positively related to with joy, which is somewhat unexpected.
Stress exhibited the weakest correlations to emotional categories with moderate positive correlations with fear and negative ones with joy being the most salient. We note that the weaker correlations between stress and the emotional categories can explain the more modest gain in predictive accuracy of the fusion models compared to the fine-tuned transformer model in both the present study and in Turcan et al. (2021).
Turning to personality, the task fusion model learned that all mental health conditions are associated with the MBTI-T dimension, such that individuals with a preference for relying on emotions in decision making are more likely to have an MHC diagnosis. Bipolar depression, ADHD
and anxiety were also associated with the MBTI-J
dimension, such that individuals that are less open to new information are more likely to exhibit any of these MHCs. Anxiety and bipolar disorder were correlated with the MBTI-E dimension, such that these conditions were more likely for individuals with a preference for focusing on the future with an emphasis on patterns and possibilities. Anxiety was also strongly negatively correlated with the MBTI-N dimension, meaning that the condition was much more prevalent in introverted individuals, than in extraverted ones. At the same time, extraversion was associated with both depression and to a lesser extent with bipolar disorder.
In line with results from experimental and genome-wide association studies of mental health and personality (Adams et al., 2019; Nikolic et al.,
2020), these results suggest that personality dimensions are important in understanding vulnerability to mental health disorders.
## 7 Conclusion
| Emotion | Personality | | | | | | | | | | |
|------------|--------------------|-------|--------------------------------------------------------------|------|-------|-------|-------|-------|-------|-------|-------|
| MHC | Anger Disgust Fear | Joy | Sadness Surprise Neutral Extrovert Intuitive Thinker Judging | | | | | | | | |
| ADHD | 0.35 | 0.53 | 0.36 -0.73 | 0.68 | 0.27 | -0.96 | 0.19 | -0.28 | -0.97 | 0.86 | |
| Anxiety | 0.22 | -0.74 | -0.01 0.88 | 0.50 | -0.21 | -0.91 | -0.80 | 0.75 | -1.00 | -0.90 | |
| Bipolar | 0.14 | -0.60 | -0.75 -0.19 | -.14 | -0.77 | -0.98 | 0.41 | 0.85 | -1.00 | -0.90 | |
| Depression | 0.49 | 0.69 | 0.65 | 0.98 | 0.74 | -0.11 | -0.97 | 0.62 | -0.77 | -1.00 | -0.92 |
| Stress | 0.12 | 0.05 | 0.34 -0.35 | 0.24 | 0.02 | -0.30 | -0.04 | 0.00 | -0.17 | -0.11 | |
In this work, we presented the first comprehensive experimental evaluation of current deep learningbased fusion strategies (feature-level fusion, model fusion, task fusion) for the detection of mental disorders. We go beyond previous work by applying these approaches to five mental health conditions.
The results of our experiments showed that the task fusion strategy is most promising for the detection of three of the five conditions (ADHD, anxiety, and bipolar disorder), while feature-level fusion is most advantageous for the detection of psychological distress and depression. We demonstrated that the prediction of mental health from textual data benefits from the infusion of two information sources related to mental disorders, i.e. emotion and personality. Furthermore, we show that information fusion models can improve the classification accuracy of strong transformer-based prediction models while enhancing their explainability.
In this paper, we focused on developing binary classifiers that aim to distinguish between individuals with a particular mental illness and control users.
In future work, we intend to addresses the more complex problem of distinguishing between multiple mental health conditions, which is essential if we are to uncover the subtle differences among the statistical patterns of language use associated with particular disorders. We further intend to employ our approach to longitudinal data to gain valuable insights into the evolution of symptoms over time and extend it to languages beyond English, specifically German.
## Limitations
We note that the datasets used in this work solely represent social media interactions from Reddit, which is known to have a demographic bias toward young, white, American males3. Furthermore, systematic, spurious differences between diagnosed and control users can prevent trained models from generalizing to other data. Future research on other social media and datasets is needed to determine to what extent the presented findings are generalizable to broader populations.
## References
Muhammad Abdul-Mageed and Lyle Ungar. 2017.
EmoNet: Fine-grained emotion detection with gated recurrent neural networks. In *Proceedings of the 55th* Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 718–728, Vancouver, Canada. Association for Computational Linguistics.
Mark J Adams, David M Howard, Michelle Luciano, Toni-Kim Clarke, Gail Davies, W David Hill, Daniel Smith, Ian J Deary, David J Porteous, Andrew M
McIntosh, et al. 2019. Stratifying depression by neuroticism: revisiting a diagnostic tradition using gwas data. *bioRxiv*, page 547828.
APA. 2013. Diagnostic and statistical manual of mental disorders. *American Psychiatric Association*,
21(21):591–643.
Stefano Baccianella, Andrea Esuli, and Fabrizio Sebastiani. 2010. SentiWordNet 3.0: An enhanced lexical resource for sentiment analysis and opinion mining. In *Proceedings of the Seventh International* 3https://social.techjunkie.com/
demographics-reddit
Conference on Language Resources and Evaluation
(LREC'10), Valletta, Malta. European Language Resources Association (ELRA).
Rafael A. Calvo, David N. Milne, M. Sazzad Hussain, and Helen Christensen. 2017. Natural language processing in mental health applications using non-clinical texts. *Natural Language Engineering*,
23(5):649–685.
Rich Caruana. 1997. Multitask learning. Machine learning, 28:41–75.
Stevie Chancellor and Munmun De Choudhury. 2020.
Methods in predictive techniques for mental health status on social media: a critical review. NPJ Digital Medicine, 3(1):1–11.
Arman Cohan, Bart Desmet, Andrew Yates, Luca Soldaini, Sean MacAvaney, and Nazli Goharian. 2018.
SMHD: a large-scale resource for exploring online language usage for multiple mental health conditions. In *Proceedings of the 27th International Conference on Computational Linguistics*, pages 1485–
1497, Santa Fe, New Mexico, USA. Association for Computational Linguistics.
Jacob Cohen. 1988. Statistical power analysis for the behavioral sciences. Lawrence Erlbaum Associates.
Hillsdale, NJ, pages 20–26.
Angelo Compare, Cristina Zarbo, Edo Shonin, William Van Gordon, and Chiara Marconi. 2014. Emotional regulation and depression: A potential mediator between heart and mind. *Cardiovascular Psychiatry* and Neurology, 2014.
Dorottya Demszky, Dana Movshovitz-Attias, Jeongwoo Ko, Alan Cowen, Gaurav Nemade, and Sujith Ravi.
2020. GoEmotions: A dataset of fine-grained emotions. In *Proceedings of the 58th Annual Meeting of* the Association for Computational Linguistics, pages 4040–4054, Online. Association for Computational Linguistics.
Paul Ekman. 1992. Are there basic emotions? *Psychological Review*, 99 3:550–3.
Paul Ekman. 1999. Basic emotions. *Handbook of cognition and emotion*, 98(45-60):16.
Muskan Garg. 2023. Mental health analysis in social media posts: A survey. Archives of Computational Methods in Engineering, pages 1–24.
Shaoxiong Ji, Tianlin Zhang, Luna Ansari, Jie Fu, Prayag Tiwari, and Erik Cambria. 2022. MentalBERT: Publicly available pretrained language models for mental healthcare. In *Proceedings of the Thirteenth Language Resources and Evaluation Conference*, pages 7184–7190, Marseille, France. European Language Resources Association.
Jutta Joormann and Ian H Gotlib. 2010. Emotion regulation in depression: Relation to cognitive inhibition.
Cognition and Emotion, 24(2):281–298.
Elma Kerz, Yu Qiao, Sourabh Zanwar, and Daniel Wiechmann. 2022. Pushing on personality detection from verbal behavior: A transformer meets text contours of psycholinguistic features. In Proceedings of the 12th Workshop on Computational Approaches to Subjectivity, Sentiment & Social Media Analysis, pages 182–194, Dublin, Ireland. Association for Computational Linguistics.
Charles Li, Monte Hancock, Ben Bowles, Olivia Hancock, Lesley Perg, Payton Brown, Asher Burrell, Gianella Frank, Frankie Stiers, Shana Marshall, Gale Mercado, Alexis-Walid Ahmed, Phillip Beckelheimer, Samuel Williamson, and Rodney Wade.
2018. Feature extraction from social media posts for psychometric typing of participants. In *Augmented* Cognition: Intelligent Technologies, pages 267–286, Cham. Springer International Publishing.
David E Losada, Fabio Crestani, and Javier Parapar.
2019. Overview of eRisk 2019 early risk prediction on the internet. In *International Conference of* the Cross-Language Evaluation Forum for European Languages, pages 340–357. Springer.
Christopher Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven Bethard, and David McClosky.
2014. The Stanford CoreNLP natural language processing toolkit. In *Proceedings of 52nd Annual Meeting of the Association for Computational Linguistics: System Demonstrations*, pages 55–60, Baltimore, Maryland. Association for Computational Linguistics.
Isabel Briggs Meyers, Mary H McCaulley, and Allen L
Hammer. 1990. Introduction to Type: A Description of the Theory and Applications of the Myers-Briggs Type Indicator. Consulting Psychologists Press.
Saif M Mohammad and Peter D Turney. 2013. Nrc emotion lexicon. *National Research Council, Canada*,
2:234.
Sanja Nikolic, Ivana Perunicic Mladenovic, Olivera Vukovic, Jasmina Barišic, Dragan Švraki ´ c, and Srd- ´
jan Milovanovic. 2020. Individual and gender differ- ´ ences in personality influence the diagnosis of major depressive disorder. *Psychiatria Danubina*, 32(1):97–
104.
Susan Nolen-Hoeksema, Blair E. Wisco, and Sonja Lyubomirsky. 2008. Rethinking rumination. *Perspectives on Psychological Science*, 3(5):400–424.
PMID: 26158958.
Sinno Jialin Pan and Qiang Yang. 2010. A survey on transfer learning. IEEE Transactions on knowledge and data engineering, 22(10):1345–1359.
James W Pennebaker, Martha E Francis, and Roger J
Booth. 2001. Linguistic inquiry and word count:
Liwc 2001. *Mahway: Lawrence Erlbaum Associates*,
71(2001):2001.
Jeffrey Pennington, Richard Socher, and Christopher D
Manning. 2014. Glove: Global vectors for word representation. In *Proceedings of the 2014 conference* on empirical methods in natural language processing
(EMNLP), pages 1532–1543.
A. Perrin. Social Media Usage: 2005-2015: 65% of Adults Now Use Social Networking Sites–a Nearly Tenfold Jump in the Past Decade. Pew Research Trust.
Daniel Preo¸tiuc-Pietro, Johannes Eichstaedt, Gregory Park, Maarten Sap, Laura Smith, Victoria Tobolsky, H Andrew Schwartz, and Lyle Ungar. 2015. The role of personality, age, and gender in tweeting about mental illness. In *Proceedings of the 2nd workshop* on computational linguistics and clinical psychology:
From linguistic signal to clinical reality, pages 21–
30.
Sebastian Ruder. 2017. An overview of multi-task learning in deep neural networks. arXiv preprint arXiv:1706.05098.
Ramit Sawhney, Harshit Joshi, Saumya Gandhi, and Rajiv Ratn Shah. 2020. A time-aware transformer based model for suicide ideation detection on social media. In *Proceedings of the 2020 Conference on* Empirical Methods in Natural Language Processing
(EMNLP), pages 7685–7697, Online. Association for Computational Linguistics.
Pradyumna Prakhar Sinha, Rohan Mishra, Ramit Sawhney, Debanjan Mahata, Rajiv Ratn Shah, and Huan Liu. 2019. \# suicidal-a multipronged approach to identify and explore suicidal ideation in twitter. In Proceedings of the 28th ACM International Conference on Information and Knowledge Management, pages 941–950.
Hoyun Song, Jinseon You, Jin-Woo Chung, and Jong C
Park. 2018. Feature attention network: Interpretable depression detection from social media. In *Proceedings of the 32nd Pacific Asia Conference on Language, Information and Computation*.
Elsbeth Turcan and Kathy McKeown. 2019. Dreaddit: A Reddit dataset for stress analysis in social media. In *Proceedings of the Tenth International* Workshop on Health Text Mining and Information Analysis (LOUHI 2019), pages 97–107, Hong Kong.
Association for Computational Linguistics.
Elsbeth Turcan, Smaranda Muresan, and Kathleen McKeown. 2021. Emotion-infused models for explainable psychological stress detection. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics:
Human Language Technologies, pages 2895–2909, Online. Association for Computational Linguistics.
Ana-Sabina Uban, Berta Chulvi, and Paolo Rosso. 2021.
An emotion and cognitive based analysis of mental health disorders from social media data. Future Generation Computer Systems, 124:480–494.
Daniel Wiechmann, Yu Qiao, Elma Kerz, and Justus Mattern. 2022. Measuring the impact of (psycho-
)linguistic and readability features and their spill over effects on the prediction of eye movement patterns.
In *Proceedings of the 60th Annual Meeting of the* Association for Computational Linguistics (Volume 1: Long Papers), pages 5276–5290, Dublin, Ireland.
Association for Computational Linguistics.
Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander M. Rush. 2019. Huggingface's transformers: State-of-the-art natural language processing.
Andrew Yates, Arman Cohan, and Nazli Goharian. 2017.
Depression and self-harm risk assessment in online forums. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2968–2978, Copenhagen, Denmark. Association for Computational Linguistics.
Katherine S Young, Christina F Sandman, and Michelle G Craske. 2019. Positive and negative emotion regulation in adolescence: links to anxiety and depression. *Brain Sciences*, 9(4):76.
T. Zhang, A Schoene, and S. Ananiadou. 2022. Natural language processing applied to mental illness detection: A narrative review. *NPJ Digital Medicine*, 5:46.
Tianlin Zhang, Kailai Yang, Shaoxiong Ji, and Sophia Ananiadou. 2023. Emotion fusion for mental illness detection from social media: A survey. *Information* Fusion, 92:231–246.
Binggui Zhou, Guanghua Yang, Zheng Shi, and Shaodan Ma. 2022. Natural language processing for smart healthcare. *IEEE Reviews in Biomedical Engineering*.
## A Appendix
| Secondary | Dataset | Number | Avg. len | SD | Total | Avg. len | SD | Total |
|-------------|-------------|----------|------------|---------|----------|------------|----------|----------|
| attribute | of posts | (words) | (words) | (words) | (chars) | (chars) | (chars) | |
| Personality | Kaggle MBTI | 8675 | 1309.11 | 327.11 | 11356602 | 6795.6 | 1676.95 | 58951828 |
| ENFJ | 190 | 1372 | 326 | 260724 | 7062 | 1651 | 1341841 | |
| ENFP | 675 | 1344 | 315 | 907091 | 6902 | 1601 | 4658783 | |
| ENTJ | 231 | 1299 | 304 | 300037 | 6809 | 1550 | 1572947 | |
| ENTP | 685 | 1290 | 294 | 883676 | 6717 | 1529 | 4601132 | |
| ESFJ | 42 | 1379 | 373 | 57905 | 7069 | 1908 | 296884 | |
| ESFP | 48 | 1099 | 405 | 52753 | 5656 | 2084 | 271501 | |
| ESTJ | 39 | 1312 | 315 | 51178 | 6740 | 1564 | 262870 | |
| ESTP | 89 | 1242 | 337 | 110567 | 6374 | 1719 | 567266 | |
| INFJ | 1470 | 1363 | 316 | 2003249 | 7061 | 1619 | 10379463 | |
| INFP | 1832 | 1328 | 325 | 2432535 | 6858 | 1658 | 12564597 | |
| INTJ | 1091 | 1274 | 334 | 1389940 | 6693 | 1732 | 7301709 | |
| INTP | 1304 | 1281 | 321 | 1669835 | 6713 | 1667 | 8753488 | |
| ISFJ | 166 | 1328 | 377 | 220413 | 6818 | 1922 | 1131708 | |
| ISFP | 271 | 1217 | 360 | 329703 | 6269 | 1833 | 1698980 | |
| ISTJ | 205 | 1297 | 348 | 265895 | 6692 | 1746 | 1371951 | |
| ISTP | 337 | 1250 | 341 | 421101 | 6459 | 1744 | 2176708 | |
| Emotion | GoEmotion | 52501 | 13.84 | 6.97 | 726668 | 67.69 | 36.60 | 3553890 |
| Anger | 7022 | 14.5 | 6.94 | 101980 | 71.8 | 36.7 | 504334 | |
| Disgust | 1013 | 14.2 | 6.84 | 14388 | 71.1 | 35.8 | 72008 | |
| Fear | 929 | 14.5 | 7.03 | 13507 | 71.6 | 36.3 | 66554 | |
| Joy | 21733 | 13.6 | 6.91 | 296623 | 66.2 | 35.9 | 1438087 | |
| Neutral | 17772 | 13.5 | 7.06 | 239784 | 66.3 | 37.5 | 1178538 | |
| Sadness | 4032 | 15.0 | 6.80 | 60386 | 73.0 | 35.4 | 294369 | |
Table 5: Count of posts, tokens and characters along with average post length of datasets used for secondary tasks
| Model | ADHD | Anxiety | Bipolar | Depression | Stress | | | | | |
|--------------------------------------|--------|-----------|-----------|--------------|----------|-------|-------|-------|-------|-------|
| Runs | 1 | 2 | 1 | 2 | 1 | 2 | 1 | 2 | 1 | 2 |
| MentalRoBERTa | 64.48 | 64.08 | 71.72 | 71.28 | 72.04 | 71.62 | 72.01 | 70.67 | 81.98 | 82.46 |
| Feature-level Fusion | 63.96 | 64.52 | 72.34 | 69.84 | 71.54 | 71.18 | 71.89 | 71.87 | 82.83 | 82.35 |
| Model Fusion - emotion | 65.64 | 65.86 | 70.40 | 70.78 | 71.63 | 70.66 | 71.73 | 71.13 | 81.08 | 80.52 |
| Model Fusion - personality | 65.03 | 65.21 | 71.33 | 71.51 | 71.99 | 71.17 | 70.63 | 70.25 | 81.14 | 81.02 |
| Model Fusion - emotion & personality | 64.16 | 63.92 | 71.66 | 71.48 | 69.97 | 69.39 | 69.33 | 68.49 | 81.52 | 80.62 |
| Task Fusion - emotion | 68.55 | 67.49 | 71.98 | 72.66 | 71.89 | 71.09 | 70.61 | 69.75 | 81.16 | 80.86 |
| Task Fusion - personality | 66.85 | 67.13 | 73.26 | 73.54 | 73.30 | 73.16 | 68.63 | 68.04 | 80.49 | 79.89 |
| Task Fusion - emotion & personality | 65.03 | 65.67 | 72.27 | 72.45 | 72.37 | 71.91 | 71.60 | 71.24 | 82.52 | 81.54 |
Table 6: Results of information-fusion models in comparison to baseline models
![12_image_0.png](12_image_0.png)
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
✓ A1. Did you describe the limitations of your work?
Left blank.
A2. Did you discuss any potential risks of your work?
Not applicable. Left blank.
✓ A3. Do the abstract and introduction summarize the paper's main claims?
Left blank.
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✗ **Did You Use Or Create Scientific Artifacts?**
Left blank.
✓ B1. Did you cite the creators of artifacts you used?
Left blank.
✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
Left blank.
B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
No response.
B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
No response.
B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
No response.
B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
No response.
## C ✓ **Did You Run Computational Experiments?** Left Blank.
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
Left blank.
The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance.
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Left blank.
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
Left blank.
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
Left blank.
D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
No response.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? No response.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
No response.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
No response.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
No response. |
guo-etal-2023-adaptive | Adaptive Contrastive Knowledge Distillation for {BERT} Compression | https://aclanthology.org/2023.findings-acl.569 | In this paper, we propose a new knowledge distillation approach called adaptive contrastive knowledge distillation (ACKD) for BERT compression. Different from existing knowledge distillation methods for BERT that implicitly learn discriminative student features by mimicking the teacher features, we first introduce a novel contrastive distillation loss (CDL) based on hidden state features in BERT as the explicit supervision to learn discriminative student features. We further observe sentences with similar features may have completely different meanings, which makes them hard to distinguish. Existing methods do not pay sufficient attention to these hard samples with less discriminative features. Therefore, we propose a new strategy called sample adaptive reweighting (SAR) to adaptively pay more attention to these hard samples and strengthen their discrimination abilities. We incorporate our SAR strategy into our CDL and form the adaptive contrastive distillation loss, based on which we construct our ACKD framework. Comprehensive experiments on multiple natural language processing tasks demonstrate the effectiveness of our ACKD framework. | # Adaptive Contrastive Knowledge Distillation For Bert Compression
Jinyang Guo1,2∗
, Jiaheng Liu2∗, Zining Wang2**, Yuqing Ma**2, Ruihao Gong2,3, Ke Xu2 **and Xianglong Liu**2†
1Institute of Artificial Intelligence, Beihang University 2State Key Lab of Software Development Environment, Beihang University 3SenseTime Group Limited
{jinyangguo,liujiaheng}@buaa.edu.cn, [email protected]
## Abstract
In this paper, we propose a new knowledge distillation approach called adaptive contrastive knowledge distillation (ACKD) for BERT compression. Different from existing knowledge distillation methods for BERT that implicitly learn discriminative student features by mimicking the teacher features, we first introduce a novel contrastive distillation loss (CDL) based on hidden state features in BERT as the explicit supervision to learn discriminative student features. We further observe sentences with similar features may have completely different meanings, which makes them hard to distinguish. Existing methods do not pay sufficient attention to these hard samples with less discriminative features. Therefore, we propose a new strategy called sample adaptive reweighting (SAR) to adaptively pay more attention to these hard samples and strengthen their discrimination abilities. We incorporate our SAR
strategy into our CDL and form the adaptive contrastive distillation loss, based on which we construct our ACKD framework. Comprehensive experiments on multiple natural language processing tasks demonstrate the effectiveness of our ACKD framework.
## 1 Introduction
Recently, deep learning (Liu et al., 2023; Guo et al., 2023; Liu et al., 2021; Guo et al., 2022a) has achieved success in many natural language processing tasks. However, due to limited computation and storage resources, current deep learning approaches are hard to be deployed on mobile devices.
Knowledge distillation is an effective approach to compress the model for mobile deployment, which aims to use a pretrained teacher network to help the training of a lightweight student network. To achieve this, the student needs to learn discriminative features. Namely, we need to push the features
![0_image_0.png](0_image_0.png)
of the sample from different classes (negative pairs)
far away from each other and keep the features of the samples from the same classes (positive pairs)
close.
Current knowledge distillation methods for BERT implicitly learn discriminative student features. They assume the teacher is well-learned (i.e.,
features of negative pairs are far away from each other in the teacher). Then, they minimize the feature distance of each sample between the teacher and student to make the student feature discriminative, as shown in Fig. 1(a). In this way, the features of negative pairs in the student can be pulled far away from each other. However, the aforementioned assumption is not always held. Commonly used words will appear in the sentences with different meanings, causing the features of negative pairs in the teacher to be close to each other, as shown in Fig. 1(b). In this case, training the student using the current knowledge distillation paradigm will result in the features of negative pairs in the student being close to each other as well. So, it is desirable to in-
Table 1: Examples of hard samples from GLUE.
| Linguistic acceptable | Linguistic unacceptable |
|--------------------------------------|-------------------------------------|
| Harry coughed himself into a fit. | Harry coughed us into a fit. |
| This building got taller and taller. | This building is taller and taller. |
| Bill cried himself to sleep. | Bill cried Sue to sleep. |
troduce explicit supervision (e.g., a well-designed loss) to push the features of negative pairs in the student far away from each other.
Another issue in the existing knowledge distillation methods is that they do not pay sufficient attention to hard samples in the distillation process.
Similar sentences may have completely different meanings. For example, for the linguistic acceptability task, although the sentences "We yelled ourselves hoarse" and "We yelled Harry hoarse" are similar as they only have one different word, the first sentence is linguistically acceptable while the latter one is not, making them fall into different categories. This makes these sentences hard to distinguish because their features are similar and thus less discriminative. This phenomenon often occurs in other natural language processing tasks, and we provide more examples from GLUE benchmark (Wang et al., 2019) in Table 1. Therefore, it is also desirable to pay more attention to hard samples to strengthen their discrimination abilities.
To solve the aforementioned problems, we propose a new knowledge distillation framework called adaptive contrastive knowledge distillation
(ACKD). Specifically, to tackle the first issue (i.e.,
lack of explicit supervision), we introduce the concept of contrastive learning (Gutmann and Hyvärinen, 2010; Oord et al., 2018; Saunshi et al., 2019; Hjelm et al., 2018) to knowledge distillation and design a contrastive distillation loss (CDL) as the explicit supervision to maximize the distance of the features from negative pairs. In particular, for each sample s, our CDL aims to maximize the similarity between the features of s in the student and that in the teacher, and minimize the similarity between the features of s in student and the features from the negative pairs of s in teacher. As shown in Fig. 1(c), our CDL can effectively push the features from negative pairs far away from each other.
To tackle the second issue (i.e., learning of hard samples), we propose a new strategy called sample adaptive reweighting (SAR) in our ACKD framework to adaptively pay more attention to hard samples to strengthen their discrimination abilities. Specifically, we utilize a neural network as a predictor to predict the discrimination ability of the feature for each sample based on its learned feature. Then, we reweight the loss from different samples according to the predicted discrimination ability. As all operations in this process are differentiable, the parameters of the predictor can be jointly learned with the student. We seamlessly incorporate our SAR strategy into the newly proposed CDL and construct the adaptive contrastive distillation loss (A-CDL).
We combine our A-CDL with the existing knowledge distillation methods and construct our Adaptive Contrastive Knowledge Distillation (ACKD)
framework. It is also a non-trivial task to construct our ACKD framework as our A-CDL is calculated based on the features, which can only be calculated inside one mini-batch due to the property of current deep learning frameworks (i.e., features will be released after the calculation of current batch). So, the diversity of negative paired samples is limited by the batch size, causing an inaccurate optimization direction. To overcome this issue, inspired by
(He et al., 2020), we construct a dynamic feature storage that can store the features from a large number of samples, based on which we calculate our A-CDL to increase the sample diversity.
In summary, the main contribution of this paper can be summarized as follows:
- We propose a novel contrastive distillation loss (CDL) to introduce explicit supervision for learning discriminative student features.
- We propose a new strategy called sample adaptive reweighting (SAR) strategy to adaptively pay more attention to hard samples and strengthen their discrimination abilities. We seamlessly incorporate our SAR strategy into our CDL and form the adaptive contrastive distillation loss (A-CDL). Based on A-CDL,
we construct our new adaptive contrastive knowledge distillation (ACKD) framework for BERT compression, in which dynamic feature storage is used to increase the diversity of samples.
- Comprehensive experiments on multiple natural language processing tasks demonstrate the effectiveness of our ACKD framework.
## 2 Related Work
Knowledge distillation. Recently, model compression methods (Guo et al., 2020b,a,c, 2021, 2023, 2022b; Wei et al., 2023; Qin et al., 2022, 2023a,c,b; Liu et al., 2022c, 2020, 2022a; Peng et al., 2019)
attracts many attentions, among which knowledge distillation approaches (Liu et al., 2022b) were proposed to accelerate deep neural networks (Ma et al., 2022, 2021; Hu et al., 2021). For example, (Hinton et al., 2015) first proposed to use the so-called dark knowledge as the additional supervision for training the student. After this work, many methods (Romero et al., 2015; Zagoruyko and Komodakis, 2017) were proposed to utilize the intermediate feature as the supervision in the distillation process. Another line of work finds knowledge distillation cannot achieve promising performance if there is a large capacity gap between teacher and student. Therefore, this line of works aims to use a sequence of teacher models to better transfer the knowledge to the student, including RCO (Jin et al., 2019) and TAKD (Mirzadeh et al.,
2020). However, all of these works do not consider the relationship between different samples (e.g.,
the correlation between negative pairs), while our ACKD uses the relationship among samples as the explicit supervision to learn more discriminative features.
There are also knowledge distillation approaches (Tian et al., 2019) that utilize the relation between different samples when learning the student, which is more related to our ACKD framework. For example, (Tung and Mori, 2019) proposed to use the similarity of the features from different samples as the knowledge to train the student. (Park et al., 2019) and (Yim et al., 2017)
use the mutual relation of different samples as the knowledge for distillation. However, these methods only use the student to mimic the sample relation in the teacher, which also lacks explicit supervision for the student to learn discriminative features.
In contrast, our ACKD framework uses the newly proposed A-CDL to explicitly push the features of negative pairs far away from each other. Moreover, these methods do not consider the learning of hard sample problem for natural language processing tasks. In our ACKD, we use the SAR strategy to pay more attention to hard samples.
Knowledge distillation for BERT. Many methods were also proposed for compressing BERT (Devlin et al., 2018; Sanh et al., 2019; Zhou et al.,
2022; Haidar et al., 2022; Jafari et al., 2021; Passban et al., 2021). For example, patient knowledge distillation (Sun et al., 2019) proposed to use intermediate features as the supervision to train a small student BERT. TinyBERT (Jiao et al., 2019) uses a two-stage distillation strategy for BERT compression. Although these methods can compress BERT for efficient inference, explicit supervision for learning discriminative student features is not used in these methods. While (Fu et al., 2021) also uses contrastive loss for BERT distillation, they do not use SAR strategy and ignores the sample difficulties. (Sun et al., 2020) proposed CoDIR method to capture structural knowledge in the intermediate layers. Unlike our ACKD framework, these approaches do not consider paying more attention to hard samples.
## 3 Adaptive Contrastive Knowledge Distillation
In this section, we will introduce our adaptive contrastive distillation (ACKD) framework. The goal of our ACKD framework is to use a pre-trained teacher model with a large capacity to help the training of a lightweight student model, and its overview is shown in Fig. 2. The loss of our ACKD
framework when training the student comes from four parts: cross-entropy loss (CEL), knowledge distillation loss (KDL), patient loss (PTL), and our adaptive contrastive distillation loss (A-CDL).
## 3.1 Preliminary
Patient distillation (Sun et al., 2019) was proposed to compress BERT. Given the training dataset with N samples D = {(x1, y1),(x2, y2), . . . ,(xn, yn)},
the student network can be trained by using the loss function as follows:
$$\begin{array}{l}{{{\cal L}_{p r e}=\alpha{\cal L}_{c e}+(1-\alpha){\cal L}_{k d}+\beta{\cal L}_{p t}}}\\ {{\quad=\frac{1}{N}\sum_{i=1}^{N}[\alpha\cdot C E(\mathcal{T}(x_{i};\theta^{T}),y_{i})}}\\ {{\quad+(1-\alpha)\cdot S T(\mathcal{T}(x_{i};\theta^{T}),\mathcal{S}(x_{i};\theta^{S}))}}\\ {{\quad+\beta\cdot\sum_{m=1}^{M}M S E(z_{i}^{T,m},z_{i}^{S,m})].}}\end{array}\tag{1}$$
Lce is the task-specific loss and CE(·, ·) is the corresponding loss function, in which cross-entropy is commonly adopted for the classification task. Lkd is the knowledge distillation loss and ST(·, ·) denotes the corresponding loss function, in which the Kullback–Leibler divergence of the output probability distribution between the teacher and student is commonly adopted. Lpt is the patient loss introduced in (Sun et al., 2019) and MSE(·, ·) is 8943
![3_image_0.png](3_image_0.png)
the mean square error function. T and S are the teacher and student networks, and their parameters are denoted as θ Tand θ S, respectively. z T,m i and z S,m idenote the hidden state feature from the teacher and the student for the i-th sample at the mth paired layers when calculating the patient loss, respectively. M is the number of layers that the patient loss is inserted. α and β are the hyperparameters to control the trade-off of different terms.
The loss Lce, Lkd, and Lpt correspond to the CEL,
KDL, and PTL in Fig. 2, respectively.
## 3.2 Contrastive Distillation Loss
Although the loss in Eq. (1) can transfer the knowledge from teacher to student, it lacks explicit supervision to learn discriminative student features.
Namely, it only provides the supervision to pull the features from the same sample in teacher and student close to each other, while lacking the supervision to push the features from different classes far away from each other for more discriminative feature learning (Harwood et al., 2017; Wu et al.,
2017; Suh et al., 2019). To this end, we first design our contrastive distillation loss (CDL) in the ACKD framework.
As our CDL can be introduced at different layers, below, we only focus on the m-th paired layer and omit the layer index for better presentation. For example, we use z T
iand z S
ito denote the hidden state features for the i-th sample at this layer in teacher and student, respectively. The CDL can be written as follows:
$$\begin{split}\mathcal{L}_{cd}&=-log\sum_{i=1}^{N}\frac{POS}{POS+NEG},\\ \text{where}POS&=exp(h(z_{i}^{S},z_{i}^{T})),\\ NEG&=\sum_{z_{j}^{T}\in\mathcal{N}_{i}}exp(h(z_{i}^{S},z_{j}^{T})),\\ h(z_{i}^{S},z_{j}^{T})&=cosine(z_{i}^{S},z_{j}^{T}).\end{split}\tag{2}$$
Here, *cosine*(·, ·) denotes the cosine similarity. Ni denotes the set containing the hidden state features of the samples from different classes with the i-th sample (i.e., negative pair).
## 3.3 Sample Adaptive Reweighting
As mentioned in Sec. 1, similar sentences may have completely different meanings, which makes these samples hard to distinguish. To this end, we propose our sample adaptive reweighting (SAR) strategy to adaptively pay more attention to these hard samples. Specifically, we use a predictor network to predict the discrimination ability of each sample based on its learned features, and incorporate this predicted discrimination into our CDL to form adaptive contrastive distillation loss (A-CDL). Formally, the A-CDL can be written as follows:
$$\mathcal{L}_{acc}=-\log\sum_{i=1}^{N}\frac{POS}{POS+\overline{NEG}},$$ where $\overline{NEG}=\frac{1}{w_{i}}\sum_{z_{j}^{T}\in\mathcal{N}_{i}}exp(h(z_{i}^{S},z_{j}^{T}))$, $$w_{i}=Sigmoid(\mathcal{P}(z_{i}^{S};\theta_{p})).\tag{3}$$
Here, wiis the predicted discrimination ability of the i-th sample. P(·, ·) is the function of the predictor, which is implemented by a neural network. θp is the learnable parameter of the predictor.
Sigmoid(·) is the sigmoid function, which is used to ensure the predicted discrimination abilities are positive. The other notations are the same as before.
As all operations are differentiable in this process, we can jointly train this predictor with the student network in distillation. In this way, we can adaptively assign higher weight 1 wi on the samples with less discriminative features and finally form the adaptive contrastive distillation loss, which corresponds to A-CDL in Fig. 2. Note that our predictor is implemented by a simple neural network. Therefore, the extra computation caused by the predictor can be neglected compared with that required by the gradient calculation.
## 3.4 Overall Loss Function
As our A-CDL can be introduced to different paired layers of the teacher and student networks, for better presentation, below, we additionally use the superscript ·
m to denote the corresponding symbols for the m-th paired layers that A-CDL is inserted.
So, the loss function when training the student network in our ACKD framework can be written as:
Ltotal = αLce + (1 − α)Lkd + βLpt + γLacd =1N XN i=1 [α · CE(T (xi; θ T), yi) + (1 − α) · ST(T (xi; θ T), S(xi; θ S)) + β · XM m=1 MSE(z T,m i, z S,m i) + γ · XM m=1 −log P OS P OS + NEG ], where P OS = exp(h(z S,m i, z T,m i)), NEG =1 wm i X z T ,m j ∈Ni exp(h(z S,m i, z T,m j)). (4) α, β, and γ are the hyperparameters to control the
importance of different terms. Lce, Lkd, and Lpt
are the cross-entropy loss, the knowledge distillation loss, and the patient loss, respectively, which are introduced in Eq. (1). Lacd is our newly proposed adaptive contrastive distillation loss introduced in Eq. (3). Other notations are the same as before. By using the loss introduced in Eq. (4),
we can use explicit supervision to push the features of negative pairs in the student far away from each other, with the consideration of the sample discrimination abilities. In this way, we construct our ACKD framework for BERT compression.
## 3.5 Dynamic Feature Storage
When introducing the A-CDL to the existing knowledge distillation methods and constructing our ACKD framework, another issue is that A-CDL requires large sample diversity, which is not required in the existing knowledge distillation approaches, making the construction of our ACKD framework a non-trivial task. Specifically, the term NEG is calculated based on the features of different samples.
Due to the property of the current deep learning framework, features will be released after the calculation of each mini-batch. Therefore, we can only calculate NEG based on the samples in one minibatch. So, the feature of the i-th sample can be only pushed far away from those of a small portion of negative pairs, which causes inaccurate optimization direction. Inspired by (He et al., 2020), we construct dynamic feature storage to increase the sample diversity. Specifically, after the calculation of each batch, we store the features of this batch in the storage for NEG calculation. At the same time, labels of these samples will be also stored in the storage for identifying the samples in Ni. As the BERT model processes a sequence of tokens in parallel, the feature dimension is relatively large, which causes more memory burden to GPU. Therefore, to further save memory usage, we only store the features of the layer that A-CDL is inserted. After the storage is full, we update storage based on the first in first out strategy. In our implementation, we set the storage size as 1000. In this way, we increase sample diversity when calculating NEG.
## 3.6 Discussion
The design concept of our A-CDL is as follows.
In the distillation process, the loss Lacd will be minimized. To achieve this, we will maximize the value inside the −log(·) function. So in the training process, the numerator *P OS* will be increased, which pulls the feature from the same sample in teacher and student close to each other. At the same time, the denominator term NEG will be decreased, which pushes the feature of the j-th sample from different classes in the student far away from that of the i-th sample in the teacher. Moreover, by using discrimination ability 1 wi
, we assign higher weights to the samples with less discriminative features. In this way, we introduce explicit supervision with the consideration of sample discrimination abilities to learn more discriminative student features.
From another point of view, our A-CDL can also be viewed as the loss to "eliminate" the influence of incorrect predictions from the teacher when learning the student. Specifically, as in Fig. 1(b), if the green sample is close to the blue one and is misclassified by the teacher, traditional knowledge distillation methods will not be aware of this misclassification. So the green sample in the student will be "attracted" by that in the teacher (black arrow), causing misclassification in the student as well. In contrast, from Eq. (3), the negative pair set Ni when calculating NEG is obtained based on the ground truth labels. Therefore, as in Fig. 1(c),
despite the green sample being misclassified by the teacher, the green sample in the student will be
"repelled" by the blue sample in the teacher (red arrow). Although the cross-entropy loss for student is also based on the ground truth labels, the optimization direction will be affected by the incorrect teacher prediction. So our A-CDL can "eliminate" the influence of incorrect predictions from teacher to some extent.
## 4 Experiments
In this section, we perform comprehensive experiments and extensive ablation studies.
## 4.1 Datasets
We follow many works (Sun et al., 2019; Zhou et al., 2022) to evaluate our ACKD framework on the GLUE benchmark (Wang et al., 2019). Specifically, we use the development set of the GLUE
benchmark and use four tasks for evaluation: Paraphrase Similarity Matching, Sentiment Classification, Natural Language Inference, and Linguistic Acceptability. For Paraphrase Similarity Matching, we use MRPC (Dolan and Brockett, 2005),
QQP, and STS-B (Conneau and Kiela, 2018) for evaluation. For Sentiment Classification, we use SST-2 (Socher et al., 2013) for evaluation. For Natural Language Inference, we use MNLI (Williams et al., 2018), QNLI (Rajpurkar et al., 2016), and RTE (Wang et al., 2019) for evaluation. For Linguistic Acceptability, we use CoLA (Warstadt et al.,
2019) for evaluation.
Following many works (Sun et al., 2019; Zhou et al., 2022), we report the results on MNLI-m and MNLI-mm on MNLI. For MRPC and QQP, we report both F1 and accuracy. For STS-B, we report Pearson and Spearman correlation. For CoLA, we report Matthew's correlation. We use accuracy as the metric for other datasets.
## 4.2 Implementation Details
We implement our ACKD framework based on the PyTorch framework. We follow previous works (Sun et al., 2019; Zhou et al., 2022) to evaluate our ACKD under the task-specific setting, in which the teacher network is firstly fine-tuned on downstream tasks and the student network is also trained based on the downstream tasks in the distillation process. Following (Sun et al., 2019), we use the BERT-Base model as the teacher network, and use BERT with 3 and 6 layers as the student models
(denoted as BERT3 and BERT6), respectively. The number of hidden states is set as 768 in both teacher and student networks. We follow (Sun et al., 2019)
to assume the lower layers of the teacher network also contain important information and should be passed to the student. Therefore, we choose the
"skip" strategy in (Sun et al., 2019) to insert our A-CDL, which can bring stronger supervision.
We first finetune the pre-trained BERT-Base model on downstream tasks as the corresponding teacher models. The maximum sequence length is set as 128, and AdamW (Loshchilov and Hutter, 2018) optimizer is adopted. We set the initial learning rate and batch size as 2e−5and 8, respectively.
The training epoch ranges from 2 to 4 for different downstream tasks. Then, we train our student network by using our ACKD framework. The discrimination predictor for generating wiin Eq. (3)
is implemented by a two-layer neural network. The size of dynamic feature storage is set as 1000. We follow (Sun et al., 2019; Zhou et al., 2022) to perform hyperparameter search over student learning rate from {1e−5, 2e−5, 5e−5}, the batch size from
{8, 16, 32}, the hyperparameter α from {0.1, 0.3, 0.5}, β from {20, 40, 60}, and γ from {5e−4, 5e−3, 5e−2}. The other hyperparameters are the same as those when training the teacher network.
| GLUE | | | | | | | | | | |
|------------------------------------|--------------|----------|-------|-----------|-----------|--------|----------------|------|-------|-----------|
| Method | #Param | Speed-up | CoLA | MNLI | MRPC | QNLI | QQP | RTE | SST-2 | STS-B |
| (Matt.) | (Acc -m/-mm) | (F1/Acc) | (Acc) | (F1/Acc) | (Acc.) | (Acc.) | (Pear./Spear.) | | | |
| Teacher Network: BERT-Base | | | | | | | | | | |
| BERT-Base (Devlin et al., 2018) | 110M | 1.0× | 60.8 | 84.6/84.4 | 91.6/87.6 | 91.6 | 88.5/91.4 | 71.4 | 93.0 | 90.2/89.8 |
| Student Network: BERT3 | | | | | | | | | | |
| PKD (Sun et al., 2019) | 46M | 4.0× | 39.8 | 75.9/76.6 | 84.1/75.0 | 84.3 | 85.3/89.2 | 62.8 | 87.4 | 86.3/86.1 |
| RCO (Jin et al., 2019) | 46M | 4.0× | 31.4 | 76.3/76.9 | 85.3/77.5 | 83.4 | 85.4/88.7 | 66.1 | 86.8 | 84.8/84.4 |
| TAKD (Mirzadeh et al., 2020) | 46M | 4.0× | 35.7 | 76.2/76.8 | 83.2/73.5 | 83.8 | 83.7/87.5 | 59.2 | 87.9 | 83.8/83.4 |
| DistilBERT (Sanh et al., 2019) | 46M | 4.0× | 34.0 | 77.0/77.0 | 83.2/73.0 | 83.8 | 85.1/88.9 | 62.8 | 86.9 | 86.6/86.2 |
| TinyBERT (Jiao et al., 2019) | 46M | 4.0× | 38.7 | 76.5/76.9 | 82.8/72.8 | 84.2 | 85.1/88.8 | 60.6 | 86.8 | 86.4/86.1 |
| CRD (Tian et al., 2019) | 46M | 4.0× | 38.6 | 76.1/76.8 | 85.2/77.5 | 84.6 | 83.9/88.0 | 65.7 | 87.6 | 86.1/85.6 |
| SFTN (Park et al., 2021) | 46M | 4.0× | 38.1 | 76.6/77.1 | 83.1/73.3 | 84.2 | 83.9/87.7 | 60.3 | 88.0 | 83.9/83.5 |
| MetaDistill (Zhou et al., 2022) | 46M | 4.0× | 39.3 | 75.9/76.4 | 82.0/71.1 | 83.8 | 83.7/88.1 | 62.1 | 88.0 | 86.6/86.4 |
| Annealing KD (Jafari et al., 2021) | 52M | 3.0× | 36.0 | 73.9/74.8 | 86.2/- | 83.1 | -/86.5 | 61.0 | 89.4 | 74.5/- |
| ACKD (ours) | 46M | 4.0× | 42.7 | 79.5/80.6 | 87.5/81.4 | 86.2 | 86.1/89.7 | 67.9 | 88.5 | 87.1/86.8 |
| Student Network: BERT6 | | | | | | | | | | |
| PKD (Sun et al., 2019) | 66M | 2.0× | 54.5 | 82.7/83.3 | 89.4/84.7 | 89.5 | 87.8/90.9 | 67.6 | 91.3 | 88.6/88.1 |
| RCO (Jin et al., 2019) | 66M | 2.0× | 53.6 | 82.4/82.9 | 89.5/85.1 | 89.7 | 87.4/90.6 | 67.6 | 91.4 | 88.7/88.3 |
| TAKD (Mirzadeh et al., 2020) | 66M | 2.0× | 53.8 | 82.5/83.0 | 89.6/85.0 | 89.6 | 87.5/90.7 | 68.5 | 91.4 | 88.2/88.0 |
| DistilBERT (Sanh et al., 2019) | 66M | 2.0× | 53.0 | 82.5/83.1 | 89.3/85.0 | 89.2 | 87.2/90.6 | 66.1 | 91.5 | 88.7/88.5 |
| TinyBERT (Jiao et al., 2019) | 66M | 2.0× | 52.4 | 83.6/83.8 | 90.5/86.5 | 89.8 | 87.6/90.6 | 67.7 | 91.9 | 89.2/88.7 |
| CRD (Tian et al., 2019) | 66M | 2.0× | 55.8 | 83.2/83.4 | 89.5/85.5 | 89.8 | 87.6/90.8 | 67.1 | 91.5 | 88.8/88.3 |
| SFTN (Park et al., 2021) | 66M | 2.0× | 53.6 | 82.4/82.9 | 89.8/85.3 | 89.5 | 87.5/90.4 | 68.5 | 91.5 | 88.4/88.5 |
| MetaDistill (Zhou et al., 2022) | 66M | 2.0× | 58.6 | 83.5/83.8 | 91.1/86.8 | 90.4 | 88.1/91.0 | 69.4 | 92.3 | 89.4/89.1 |
| ALP-KD (Passban et al., 2021) | 66M | 2.0× | 46.4 | 82.0/- | -/85.8 | 89.7 | -/90.6 | 69.0 | 91.9 | 88.8/- |
| CoDIR (Sun et al., 2020) | 66M | 2.0× | 56.4 | 83.9/- | 87.9/- | 90.7 | -/91.2 | 66.3 | 92.4 | -/- |
| ACKD (ours) | 66M | 2.0× | 59.7 | 83.6/83.9 | 91.0/87.0 | 90.6 | 88.5/91.3 | 69.7 | 92.3 | 89.5/89.1 |
## 4.3 Experimental Results
We compare our ACKD framework with multiple state-of-the-art approaches including:
PKD (Sun et al., 2019), RCO (Jin et al., 2019),
TAKD (Mirzadeh et al., 2020), DistilBERT (Sanh et al., 2019), TinyBERT (Jiao et al., 2019),
CRD (Tian et al., 2019), SFTN (Park et al.,
2021), MetaDistill (Zhou et al., 2022), Annealing KD (Jafari et al., 2021), ALP-KD (Passban et al.,
2021), and CoDIR (Sun et al., 2020).
The results are shown in Table 2. From Table 2, we have following observations: (1) Our ACKD
framework outperforms other baseline methods when using BERT3 and BERT6 as the students under most of settings, which demonstrates the effectiveness of the proposed ACKD framework.
Specifically, when using BERT3 as the student, our ACKD framework surpasses other baseline methods by more than 2.9% on CoLA. (2) When using BERT3 as the student, our ACKD framework can achieve higher performance gain. One possible explanation is that the performance of the distilled BERT6 is close to the teacher network BERT-Base, which is the bottleneck for further performance improvement. Also, BERT3 has less knowledge than BERT6. Therefore, our A-CDL as new knowledge
![6_image_0.png](6_image_0.png)
can bring more information gain for BERT3 and thus bring more performance improvement.
## 4.4 Ablation Study
In this section, we perform extensive ablation studies. We use BERT-Base as the teacher network and use BERT3 as the student network to conduct the experiment on QNLI (Rajpurkar et al., 2016).
Effectiveness of Lacd **in Eq. (4).** To investigate the effectiveness of the A-CDL, we remove the Lacd in Eq. (4) and conduct the distillation. The re8947
![7_image_0.png](7_image_0.png)
sult is denoted as "w/o Lacd" in Fig. 3. Our ACKD
method outperforms the alternative approach "w/o Lacd" by a large margin, demonstrating the effectiveness of our A-CDL for explicit supervision to push student features of negative pairs far away from each other.
Effectiveness of our sample adaptive reweighting strategy. To investigate the effectiveness of our SAR strategy, we perform the experiment to remove the 1 wi in Eq. (3) and conduct the distillation. In this case, we use CDL instead of A-CDL
in distillation. The result is denoted as "w/o SAR"
in Fig. 3. From the result, we observe that our ACKD approach performs better than the alternative method "w/o SAR", which demonstrates the effectiveness of the SAR strategy to pay more attention to less discriminative samples.
Effectiveness of dynamic feature storage. We investigate the effectiveness of using dynamic feature storage (DFS) in our ACKD framework. We perform the experiment to remove the DFS, and the result is denoted as "w/o DFS" in Fig. 3. Our ACKD framework performs better than "w/o DFS",
demonstrating the effectiveness of using dynamic feature storage.
Effectiveness of Lkd and Lpt **in Eq. (3).** We also report the results when removing the Lkd and Lpt in Eq. (4), which are denoted as "w/o Lkd" and "w/o Lpt" in Fig. 3, respectively. From the results, we observe: (1) The performance of our Table 3: Performance of ACKD framework when using different teacher network structures. BERTl means the BERT model with l layers.
| Teacher | BERT12 | BERT10 | BERT8 | BERT6 |
|-----------------|----------|----------|---------|---------|
| Student (BERT3) | 86.2 | 86.1 | 85.8 | 85.5 |
ACKD framework is better than the methods "w/o Lkd" and "w/o Lpt". This suggests it is beneficial to use Lkd and Lpt. (2) The accuracy of "w/o Lpt" is higher than "w/o Lkd", which indicates the loss Lkd is more useful than Lpt in our ACKD
framework when compressing BERT.
## 4.5 Algorithm Analysis
In this section, we also use BERT-Base as the teacher and use BERT3 as the student to conduct the experiments on algorithm analysis. We perform the experiments on QNLI (Rajpurkar et al., 2016).
Analysis on the structure of teacher network.
In Table 3, we also report the results when using different teacher networks. We observe that we can effectively train the student when using different teacher network structures.
Analysis on the structure of predictor. In our ACKD framework, we use a two-layer neural network as our predictor to predict the discrimination ability of each sample. We also investigate the performance of our ACKD framework when using different predictor structures. When using BERT-Base as the teacher and using BERT3 as the student, the accuracy of our ACKD framework with two, three, and four layers of predictor are 86.2%, 86.4%, and 86.2% on QNLI, respectively. We observe that the performance of our ACKD using different predictor structures is relatively stable.
## 4.6 Visualization
To demonstrate the effectiveness of the proposed A-CDL, we visualize the learned student feature without and with using our A-CDL. Specifically, Fig. 4 visualize the student feature trained without and with using A-CDL (i.e., Lacd in Eq. (3))
on QNLI and MRPC by using the t-SNE (Van der Maaten and Hinton, 2008) technique. From Fig. 4, we observe that after introducing our A-CDL, the student features from different classes become far away from each other, which demonstrates the effectiveness of our A-CDL.
## 5 Conclusion
In this paper, we have proposed a new knowledge distillation approach called adaptive contrastive knowledge distillation (ACKD) for BERT compression. We first introduce a novel contrastive distillation loss (CDL) as the explicit supervision to learn more discriminative student features. Then, we propose a new strategy called sample adaptive reweighting (SAR) to adaptively pay more attention to hard samples with fewer discrimination abilities. The SAR strategy can be seamlessly incorporated into the CDL and form the adaptive contrastive distillation loss (A-CDL). Based on ACDL, we construct our ACKD framework, where dynamic feature storage is used for better sample diversity. Extensive experiments on multiple natural language processing tasks demonstrate the effectiveness of our ACKD framework for BERT
compression.
## 6 Limitation
One of the limitations of our framework is we need to design the rough range of hyperparameters to search the best setting. In our future work, we will explore the strategy to avoid hyperparameter tuning.
## 7 Ethical Consideration
Our adaptive contrastive knowledge distillation framework aims to improve the performance of knowledge distillation methods and does not introduce extra ethical concerns compared with other knowledge distillation approaches. Therefore, there are no ethical problems caused by the proposed method.
## Acknowledgements
We sincerely thank the anonymous reviewers for their serious reviews and valuable suggestions. This work was supported by The National Key Research and Development Plan of China (2021ZD0110503), National Natural Science Foundation of China (62022009), National Natural Science Foundation of China (62206010),
and National Natural Science Foundation of China
(61932002).
## References
Alexis Conneau and Douwe Kiela. 2018. Senteval: An evaluation toolkit for universal sentence representations. *LREC*.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. *arXiv preprint arXiv:1810.04805*.
Bill Dolan and Chris Brockett. 2005. Automatically constructing a corpus of sentential paraphrases.
In *Third International Workshop on Paraphrasing*
(IWP2005).
Hao Fu, Shaojun Zhou, Qihong Yang, Junjie Tang, Guiquan Liu, Kaikui Liu, and Xiaolong Li. 2021.
Lrc-bert: latent-representation contrastive knowledge distillation for natural language understanding. In Proceedings of the AAAI Conference on Artificial Intelligence.
Hongcheng Guo, Jiaheng Liu, Haoyang Huang, Jian Yang, Zhoujun Li, Dongdong Zhang, and Zheng Cui.
2022a. LVP-M3: Language-aware visual prompt for multilingual multimodal machine translation. In EMNLP 2022, pages 2862–2872.
Jinyang Guo, Jiaheng Liu, and Dong Xu. 2021. Jointpruning: Pruning networks along multiple dimensions for efficient point cloud processing. IEEE
Transactions on Circuits and Systems for Video Technology.
Jinyang Guo, Jiaheng Liu, and Dong Xu. 2022b. 3dpruning: A model compression framework for efficient 3d action recognition. *IEEE Transactions* on Circuits and Systems for Video Technology, 32(12):8717–8729.
Jinyang Guo, Wanli Ouyang, and Dong Xu. 2020a.
Channel pruning guided by classification loss and feature importance. *AAAI*.
Jinyang Guo, Wanli Ouyang, and Dong Xu. 2020b.
Multi-dimensional pruning: A unified framework for model compression. In *CVPR*.
Jinyang Guo, Weichen Zhang, Wanli Ouyang, and Dong Xu. 2020c. Model compression using progressive channel pruning. *IEEE Transactions on Circuits and* Systems for Video Technology.
Jun Guo, Wei Bao, Jiakai Wang, Yuqing Ma, Xinghai Gao, Gang Xiao, Aishan Liu, Jian Dong, Xianglong Liu, and Wenjun Wu. 2023. A comprehensive evaluation framework for deep model robustness. *Pattern* Recognition.
Michael Gutmann and Aapo Hyvärinen. 2010. Noisecontrastive estimation: A new estimation principle for unnormalized statistical models. In *Proceedings* of the thirteenth international conference on artificial intelligence and statistics. JMLR Workshop and Conference Proceedings.
Md Akmal Haidar, Mehdi Rezagholizadeh, Abbas Ghaddar, Khalil Bibi, Philippe Langlais, and Pascal Poupart. 2022. Cilda: Contrastive data augmentation using intermediate layer knowledge distillation.
arXiv preprint arXiv:2204.07674.
Ben Harwood, Vijay Kumar BG, Gustavo Carneiro, Ian Reid, and Tom Drummond. 2017. Smart mining for deep metric learning. In *ICCV*.
Kaiming He, Haoqi Fan, Yuxin Wu, Saining Xie, and Ross Girshick. 2020. Momentum contrast for unsupervised visual representation learning. In *CVPR*.
Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. 2015.
Distilling the knowledge in a neural network. *arXiv* preprint arXiv:1503.02531.
R Devon Hjelm, Alex Fedorov, Samuel LavoieMarchildon, Karan Grewal, Phil Bachman, Adam Trischler, and Yoshua Bengio. 2018. Learning deep representations by mutual information estimation and maximization. *arXiv preprint arXiv:1808.06670*.
Sheng Hu, Yuqing Ma, Xianglong Liu, Yanlu Wei, and Shihao Bai. 2021. Stratified rule-aware network for abstract visual reasoning. In Proceedings of the AAAI
Conference on Artificial Intelligence.
Aref Jafari, Mehdi Rezagholizadeh, Pranav Sharma, and Ali Ghodsi. 2021. Annealing knowledge distillation.
arXiv preprint arXiv:2104.07163.
Xiaoqi Jiao, Yichun Yin, Lifeng Shang, Xin Jiang, Xiao Chen, Linlin Li, Fang Wang, and Qun Liu. 2019.
Tinybert: Distilling bert for natural language understanding. *arXiv preprint arXiv:1909.10351*.
Xiao Jin, Baoyun Peng, Yichao Wu, Yu Liu, Jiaheng Liu, Ding Liang, Junjie Yan, and Xiaolin Hu. 2019.
Knowledge distillation via route constrained optimization. In *ICCV*.
Aishan Liu, Jun Guo, Jiakai Wang, Siyuan Liang, Renshuai Tao, Wenbo Zhou, Cong Liu, Xianglong Liu, and Dacheng Tao. 2023. X-adv: Physical adversarial object attacks against x-ray prohibited item detection.
In *USENIX Security Symposium*.
Aishan Liu, Xianglong Liu, Hang Yu, Chongzhi Zhang, Qiang Liu, and Dacheng Tao. 2021. Training robust deep neural networks via adversarial noise propagation. *IEEE Transactions on Image Processing*.
Jiaheng Liu, Jinyang Guo, and Dong Xu. 2022a. Apsnet: Toward adaptive point sampling for efficient 3d action recognition. *IEEE Transactions on Image* Processing, 31:5287–5302.
Jiaheng Liu, Haoyu Qin, Yichao Wu, Jinyang Guo, Ding Liang, and Ke Xu. 2022b. Coupleface: relation matters for face recognition distillation. In *Computer* Vision–ECCV 2022: 17th European Conference, Tel Aviv, Israel, October 23–27, 2022, Proceedings, Part XII. Springer.
Jiaheng Liu, Tan Yu, Hanyu Peng, Mingming Sun, and Ping Li. 2022c. Cross-lingual cross-modal consolidation for effective multilingual video corpus moment retrieval. In *Findings of the Association for Computational Linguistics: NAACL 2022*, pages 1854–1862, Seattle, United States. Association for Computational Linguistics.
Jiaheng Liu, Shunfeng Zhou, Yichao Wu, Ken Chen, Wanli Ouyang, and Dong Xu. 2020. Block proposal neural architecture search. *IEEE TIP*, 30:15–25.
Ilya Loshchilov and Frank Hutter. 2018. Decoupled weight decay regularization. In *ICLR*.
Yuqing Ma, Shihao Bai, Wei Liu, Shuo Wang, Yue Yu, Xiao Bai, Xianglong Liu, and Meng Wang. 2021.
Transductive relation-propagation with decoupling training for few-shot learning. IEEE transactions on neural networks and learning systems.
Yuqing Ma, Xianglong Liu, Shihao Bai, Lei Wang, Aishan Liu, Dacheng Tao, and Edwin R Hancock. 2022.
Regionwise generative adversarial image inpainting for large missing areas. *IEEE Transactions on Cybernetics*.
Seyed Iman Mirzadeh, Mehrdad Farajtabar, Ang Li, Nir Levine, Akihiro Matsukawa, and Hassan Ghasemzadeh. 2020. Improved knowledge distillation via teacher assistant. In *AAAI*.
Aaron van den Oord, Yazhe Li, and Oriol Vinyals. 2018.
Representation learning with contrastive predictive coding. *arXiv preprint arXiv:1807.03748*.
Dae Young Park, Moon-Hyun Cha, Daesin Kim, Bohyung Han, et al. 2021. Learning student-friendly teacher networks for knowledge distillation. *Advances in Neural Information Processing Systems*.
Wonpyo Park, Dongju Kim, Yan Lu, and Minsu Cho.
2019. Relational knowledge distillation. In *CVPR*.
Peyman Passban, Yimeng Wu, Mehdi Rezagholizadeh, and Qun Liu. 2021. Alp-kd: Attention-based layer projection for knowledge distillation. In *Proceedings* of the AAAI Conference on artificial intelligence.
Baoyun Peng, Xiao Jin, Jiaheng Liu, Dongsheng Li, Yichao Wu, Yu Liu, Shunfeng Zhou, and Zhaoning Zhang. 2019. Correlation congruence for knowledge distillation. In *Proceedings of the IEEE/CVF International Conference on Computer Vision*, pages 5007–5016.
Haotong Qin, Yifu Ding, Xiangguo Zhang, Jiakai Wang, Xianglong Liu, and Jiwen Lu. 2023a. Diverse sample generation: Pushing the limit of generative data-free quantization. IEEE Transactions on Pattern Analysis and Machine Intelligence.
Haotong Qin, Xudong Ma, Yifu Ding, Xiaoyang Li, Yang Zhang, Zejun Ma, Jiakai Wang, Jie Luo, and Xianglong Liu. 2023b. Bifsmnv2: Pushing binary neural networks for keyword spotting to real-network
performance. *IEEE Transactions on Neural Networks and Learning Systems*.
Haotong Qin, Mingyuan Zhang, Yifu Ding, Aoyu Li, Ziwei Liu, Fisher Yu, and Xianglong Liu. 2023c.
Bibench: Benchmarking and analyzing network binarization. In International Conference on Machine Learning.
Haotong Qin, Xiangguo Zhang, Ruihao Gong, Yifu Ding, Yi Xu, and Xianglong Liu. 2022. Distributionsensitive information retention for accurate binary neural network. *International Journal of Computer* Vision.
Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. Squad: 100, 000+ questions for machine comprehension of text. In *EMNLP*.
Adriana Romero, Nicolas Ballas, Samira Ebrahimi Kahou, Antoine Chassang, Carlo Gatta, and Yoshua Bengio. 2015. Fitnets: Hints for thin deep nets.
ICLR.
Victor Sanh, Lysandre Debut, Julien Chaumond, and Thomas Wolf. 2019. Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108.
Nikunj Saunshi, Orestis Plevrakis, Sanjeev Arora, Mikhail Khodak, and Hrishikesh Khandeparkar.
2019. A theoretical analysis of contrastive unsupervised representation learning. In *ICML*. PMLR.
Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D Manning, Andrew Y Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank.
In *EMNLP*.
Yumin Suh, Bohyung Han, Wonsik Kim, and Kyoung Mu Lee. 2019. Stochastic class-based hard example mining for deep metric learning. In *CVPR*.
Siqi Sun, Yu Cheng, Zhe Gan, and Jingjing Liu. 2019.
Patient knowledge distillation for bert model compression. *arXiv preprint arXiv:1908.09355*.
Siqi Sun, Zhe Gan, Yu Cheng, Yuwei Fang, Shuohang Wang, and Jingjing Liu. 2020. Contrastive distillation on intermediate representations for language model compression. arXiv preprint arXiv:2009.14167.
Yonglong Tian, Dilip Krishnan, and Phillip Isola. 2019.
Contrastive representation distillation. In *ICLR*.
Frederick Tung and Greg Mori. 2019. Similaritypreserving knowledge distillation. In *ICCV*.
Laurens Van der Maaten and Geoffrey Hinton. 2008.
Visualizing data using t-sne.
Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R Bowman. 2019.
Glue: A multi-task benchmark and analysis platform for natural language understanding. *ICLR*.
Alex Warstadt, Amanpreet Singh, and Samuel R Bowman. 2019. Neural network acceptability judgments.
Transactions of the Association for Computational Linguistics.
Xiuying Wei, Yunchen Zhang, Yuhang Li, Xiangguo Zhang, Ruihao Gong, Jinyang Guo, and Xianglong Liu. 2023. Outlier suppression+: Accurate quantization of large language models by equivalent and optimal shifting and scaling. *arXiv preprint* arXiv:2304.09145.
Adina Williams, Nikita Nangia, and Samuel R Bowman. 2018. A broad-coverage challenge corpus for sentence understanding through inference. *NAACLHLT*.
Chao-Yuan Wu, R Manmatha, Alexander J Smola, and Philipp Krahenbuhl. 2017. Sampling matters in deep embedding learning. In *ICCV*.
Junho Yim, Donggyu Joo, Jihoon Bae, and Junmo Kim.
2017. A gift from knowledge distillation: Fast optimization, network minimization and transfer learning.
In *CVPR*.
Sergey Zagoruyko and Nikos Komodakis. 2017. Paying more attention to attention: Improving the performance of convolutional neural networks via attention transfer. *ICLR*.
Wangchunshu Zhou, Canwen Xu, and Julian McAuley.
2022. Bert learns to teach: Knowledge distillation with meta learning. In ACL.
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
✓ A1. Did you describe the limitations of your work?
6
✓ A2. Did you discuss any potential risks of your work?
7
✓ A3. Do the abstract and introduction summarize the paper's main claims?
1
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B **Did You Use Or Create Scientific Artifacts?**
Not applicable. Left blank.
B1. Did you cite the creators of artifacts you used?
Not applicable. Left blank.
B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
Not applicable. Left blank.
B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
Not applicable. Left blank.
B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
Not applicable. Left blank.
B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
Not applicable. Left blank.
B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
Not applicable. Left blank.
## C ✓ **Did You Run Computational Experiments?** 4
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
4 The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance.
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
4
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
4
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
4
## D ✗ **Did You Use Human Annotators (E.G., Crowdworkers) Or Research With Human Participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
Not applicable. Left blank.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? Not applicable. Left blank.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
Not applicable. Left blank.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
Not applicable. Left blank.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
Not applicable. Left blank. |
he-etal-2023-fourier | {F}ourier Transformer: Fast Long Range Modeling by Removing Sequence Redundancy with {FFT} Operator | https://aclanthology.org/2023.findings-acl.570 | The transformer model is known to be computationally demanding, and prohibitively costly for long sequences, as the self-attention module uses a quadratic time and space complexity with respect to sequence length. Many researchers have focused on designing new forms of self-attention or introducing new parameters to overcome this limitation, however a large portion of them prohibits the model to inherit weights from large pretrained models. In this work, the transformer{'}s inefficiency has been taken care of from another perspective. We propose Fourier Transformer, a simple yet effective approach by progressively removing redundancies in hidden sequence using the ready-made Fast Fourier Transform (FFT) operator to perform Discrete Cosine Transformation (DCT). Fourier Transformer is able to significantly reduce computational costs while retain the ability to inherit from various large pretrained models. Experiments show that our model achieves state-of-the-art performances among all transformer-based models on the long-range modeling benchmark LRA with significant improvement in both speed and space. For generative seq-to-seq tasks including CNN/DailyMail and ELI5, by inheriting the BART weights our model outperforms the standard BART and other efficient models. Our code will be publicly available at \url{https://github.com/LUMIA-Group/FourierTransformer} | # Fourier Transformer: Fast Long Range Modeling By Removing Sequence Redundancy With Fft Operator
Ziwei He♢, Meng Yang†, Minwei Feng†**, Jingcheng Yin**†,
Xinbing Wang♢, **Jingwen Leng**♢ and **Zhouhan Lin**♢∗
♢Shanghai Jiao Tong University † Netease BizEase
{ziwei.he, xwang8, leng-jw}@sjtu.edu.cn ∗[email protected]
## Abstract
The transformer model is known to be computationally demanding, and prohibitively costly for long sequences, as the self-attention module uses a quadratic time and space complexity with respect to sequence length. Many researchers have focused on designing new forms of self-attention or introducing new parameters to overcome this limitation, however a large portion of them prohibits the model to inherit weights from large pretrained models. In this work, the transformer's inefficiency has been taken care of from another perspective. We propose Fourier Transformer, a simple yet effective approach by progressively removing redundancies in hidden sequence using the ready-made Fast Fourier Transform
(FFT) operator to perform Discrete Cosine Transformation (DCT). Fourier Transformer is able to significantly reduce computational costs while retain the ability to inherit from various large pretrained models. Experiments show that our model achieves state-of-the-art performances among all transformer-based models on the long-range modeling benchmark LRA
with significant improvement in both speed and space. For generative seq-to-seq tasks including CNN/DailyMail and ELI5, by inheriting the BART weights our model outperforms the standard BART and other efficient models. 1
## 1 Introduction
Transformers (Vaswani et al., 2017), especially when equipped with large-scale pre-training (Devlin et al., 2018; Lewis et al., 2019; Raffel et al.,
2020) have become the core architecture in most tasks in natural language processing (NLP), including both encoder-only tasks such as sentence classification, sequence tagging (Liu et al., 2019), and encoder-decoder tasks such as text summarization
∗Zhouhan Lin is the corresponding author.
1Our code is publicly available at https://github.com/
LUMIA-Group/FourierTransformer and question answering (Lewis et al., 2019). However, due to the quadratic complexity of its selfattention module (Lin et al., 2017), applying these models on long sequences can be prohibitively costly. As a result, great efforts have been put into developing various efficient Transformer variants
(Tay et al., 2020b), as well as establishing standardized test-beds for long sequences such as the Long Range Arena (LRA) (Tay et al., 2020a).
Most efficient Transformers devise special attention variants to lower its complexity (Tay et al.,
2020b). Some of them achieve this by projecting components in self-attention into its lowerrank approximations (Wang et al., 2020; Zhu et al.,
2021; Winata et al., 2020, *inter alia*), or rely on kernelization to implicitly compute the attention matrix (Katharopoulos et al., 2020; Choromanski et al., 2020b; Peng et al., 2021; Choromanski et al.,
2020a, *inter alia*).
Due to the introduction of projection matrices or extra parameters, these models are not able to inherit pre-trained model parameters. However, since pre-trained large language models (LLMs)
have fundamentally influenced the NLP community, deviating model architecture from LLMs requires pre-training from scratch on the designed model, which is prohibitively resource-demanding for most practitioners.
Other approaches target at computing part of the attention matrix, by following some predefined patterns (Child et al., 2019; Qiu et al., 2020; Ho et al., 2019, *inter alia*). Some of them allow the pattern to be learnable (Sukhbaatar et al., 2019; Roy et al., 2021, *inter alia*). Most of the patterns require customized CUDA kernels or special operators to achieve the claimed speedup (Wu et al., 2019; Child et al., 2019; Beltagy et al., 2020), which casts extra challenge in deploying these models on edge devices or special hardware such as TPUs.
Moreover, some of the approaches involve considerable additional computation steps, which in 8954 practice could counterweight the time and memory complexity they reduce, especially for short and medium-length sequences (Kitaev et al., 2020; Roy et al., 2021).
One core factor behind various approaches is the existence of redundancy in attention matrices and hidden states. For example, Wang et al. (2020)
provides spectrum analysis on the self-attention matrix, indicating that the attention matrix learns to be low-rank, which allows them to learn a low-rank approximation of the attention matrix. Inspired by this line of research, in this work, we analyze the power spectrum of the hidden states in the time dimension through different layers in Fig 1, and show that the power spectrum increasingly concentrates on lower frequency bins as the layer gets deeper.
In this work, we propose Fourier Transformer, which doesn't even require to learn the projection matrix in order to approximate the self-attention.
Fourier Transformer leverages our observation on power spectra of hidden states, it progressively removes sequence redundancies through different layers by downsampling hidden states with the Discrete Cosine Transform (DCT), a variant of Fourier transform that generates real values.
The DCT in our proposed Fourier Transformer can be implemented with the Fast Fourier Transform (FFT) operator. Thanks to its profound application in image compression and signal processing, it is one of the most widely available and highly optimized operators in a wide variety of frameworks and even on edge devices, providing O(n log n) complexity and up to O(log n) in parallel implementations with negligible overhead. As a result, Fourier Transformer is easily deployable on a wide range of devices, not necessary to devise special CUDA kernels. In addition, experimental results on LRA tasks show that it performs significantly faster than many other efficient Transformers, while achieving the state-of-the-art performance among Transformer-based efficient models.
On the other hand, since DCT is a linear, reversible transformation, and the self-attention is not interfered in our model, the proposed Fourier Transformer can inherit pretrained weights from large language models without hurting performance. Experimental results on CNN-DailyMail (Hermann et al., 2015) and ELI5 (Fan et al., 2019c) show that our model could outperform BART (Lewis et al.,
2019) and other efficient Transformers by inheriting and fine-tuning on BART. Moreover, with tiny amount of further pretraining before fine-tuning, its performance could be further improved.
## 2 Related Work
Downsampling hidden states There are not many work that downsample sequence length for natural language. The closest work is Funnel Transformer (Dai et al., 2020), which progressively reduces the *query* sequence length through strided mean pooling, while keeping key and *value* sequence lengths intact. Fourier Transformer compresses the three sequences altogether and delivers more computational speedup compared with Funnel Transformer. Note that Funnel Transformer needs to re-invest the saved computations to build a larger model to achieve better performance, which disables its ability to inherit pretrained weights. For other work, Charformer (Tay et al., 2021b) devises a differentiable tokenization module that also relies on strided mean pooling to downsample its byte sequence. Nyströmformer (Xiong et al.,
2021) approximates the attention matrix through the Nyström method, which effectively downsamples *query* and key sequences. Due to the extra depth-wise convolution, it is again not able to leverage pretrained models.
In a border view, downsampling has been more favorable in computer vision. (Chen et al., 2020)
aggressively downsamples the raw input to a 1D
vector. Perceiver (Jaegle et al., 2021) adopts an asymmetric attention mechanism to distill inputs into a tight latent bottleneck. Almost all of these vision models are designed for encoder-only vision tasks rather than encoder-decoder-style NLP tasks.
Fourier transform for Transformer There are multiple recent works that incorporate Fourier transform into Transformer. FNet (Lee-Thorp et al.,
2021) takes a more radical approach by replacing the entire self-attention with 2D FFT, discarding the entire imaginary part to avoid complex numbers.
Performer (Choromanski et al., 2020a) introduced orthogonal random Fourier features to approximate the softmax attention. FSAT (Zhuang et al., 2022)
uses 1D FFT along the sequence dimension to learn the sparse structure of attention matrix. DCTFormer (Scribano et al., 2022) translates sequences into frequency domain and conducts self-attention there before projecting them back, due to the nonlinearity in the network, self-attention trained in the frequency domain significantly deviates from that in the time domain. Therefore, all the models dis-
![2_image_0.png](2_image_0.png)
![2_image_1.png](2_image_1.png)
cussed above lose the ability to inherit pretrained weights as well.
## 3 Preliminaries 3.1 Discrete Cosine Transform
The Discrete Cosine Transform (DCT) expresses a sequence of real numbers in terms of a sum of cosine functions with different frequencies. Since DCT only yields real values, it is a substitution for Fourier transform in the field of real numbers.
It has been the core transform behind the JPEG 2 lossy image compression format.
Formally, for a sequence of N real numbers
{xn} = {x0, x1*, ...x*N−1}, DCT transforms it into frequency domain through3:
$$y_{k}=\alpha_{k}\sum_{n=0}^{N-1}x_{n}c o s\left({\frac{\pi k(2n+1)}{2N}}\right)$$
(1)
where k ∈ {0*, ..., N* − 1} and αk is an coefficient related to k:
$$\alpha_{k}=\begin{cases}\sqrt{\frac{1}{N}}&\quad if\ \ k=0,\\ \sqrt{\frac{2}{N}}&\quad otherwise\end{cases}$$
$${\mathrm{(2)}}$$
The original sequence {xn} can be recovered with the inverse DCT (IDCT):
$$x_{n}=\alpha_{k}\sum_{k=0}^{N-1}y_{k}c o s\left({\frac{\pi k(2n+1)}{2N}}\right)\quad\quad(3)$$
$${\frac{\mathrm{which~we'll~note~as~}\{x_{n}\}=I D C T(\{y_{k}\}).}{\mathrm{~\\\\\\\\^{2}https://\mathrm{jpec.org/jpec/}}}$$
3There are several slightly different variants of DCT. Here
we use the most common type-II variant in this paper.
Practically, DCT can be computed by using the FFT operator. First, let {un} be the shuffled {xn}
by interleaving its values on even and odd positions.
Formally, when N is an odd integer, {un} is given by
$$\{u_{n}\}=\{x_{0},x_{2},...,x_{N-1},x_{N-2},x_{N-4},...,x_{1}\}\tag{4}$$
When N is even, a similar shuffling applies. We then transform {un} into its frequency domain through FFT:
$$\{v_{k}\}=F F T(\{u_{n}\})\qquad\qquad(5)$$
where k ∈ {0*, ..., N* − 1} and {vk} is a sequence of length N. The DCT of the original sequence
{xn} can thus be computed from {vk}:
$$\quad(1)$$
$$y_{k}=cos\left(\frac{\pi k}{2N}\right)\Re\left(v_{k}\right)-sin\left(\frac{\pi k}{2N}\right)\Im\left(v_{k}\right)\tag{6}$$
where Re (·) and Im (·) stand for the real and imaginary part, respectively.
## 3.2 The Power Spectrum Of Transformer Hidden States
The power spectrum of a discrete sequence describes the distribution of signal power w.r.t. frequency components, which is the amplitudes of frequency components yielded by the Fourier transform. For a certain layer in Transformer, its hidden states can be considered as a sequence of hidden vectors, along the time dimension. To analyze the power spectrum of the layer, we conduct 1D Fourier transform independently along the time
![3_image_0.png](3_image_0.png)
dimension for the hidden vectors, calculate the corresponding amplitudes, and avreage over all dimensions in that layer. In addition, we calculate the mean spectrum over many text sequences to eliminate example-wise noise.
Figure 1 shows the power spectra for different layers in the pre-trained RoBERTa-base (Liu et al., 2019) model. The up-left subfigure shows that the power spectrum of word embeddings is relatively flat, distributing its energy almost uniformly on all frequency components with several spikes in low frequencies. As the layer gets deeper, the energy starts to concentrate toward low frequencies and the spikes start to smooth out, leaving a long tail on the high-frequency side. This trend indicates that the hidden states in deeper layers are more locally correlated, which leaves space for Fourier transform to squeeze out the redundancies.
## 4 Fourier Transformer 4.1 Model Architecture
The overall architecture of the Fourier Transformer is depicted in Figure 2. In general, we insert *spectral filters* between layers in Transformer, inside which we use DCT and IDCT to downsample sequence lengths. Multiple spectral filters can work together to split Transformer layers into different blocks, thus progressively reduce sequence lengths.
We leave the self-attention intact in order to retain its ability to inherit pretrained weights.
As for the spectral filter, it consists of three steps, i.e., transform, *truncate*, and *reverse*. Formally, for an incoming hidden sequence {hn}, 0 *< n < N* −
1 that contains N hidden vectors hn ∈ R
D where D is the hidden size of the model, the spectral filter first transforms it into frequency domain through 1D-DCT:
$$\{\mathbf{y}_{k}\}=D C T(\{\mathbf{h}_{n}\}),\qquad0<k<N-1\;\;(7)$$
Note that the DCT is independently applied on all dimension in {hn}, therefore only transforming along the time dimension.
Next, {yk} is truncated by chopping off the trailing dimensions on the high frequency side.
For sequences of different lengths, we fix a ratio r ∈ (0, 1), which is a hyperparameter, to determine the number of frequency components to retain. Thus the length of {yk} is truncated from N
into ⌈rN⌉.
4 Finally, the resulting shorter sequence {yk}, 0 <
k < ⌈rN⌉ − 1 can be transformed back to time domain through IDCT, yielding a shorter sequence of {h˜n}:
$$\{\tilde{\mathbf{h}}_{n}\}=IDCT(\{\mathbf{y}_{k}\}),\qquad0<n<\lceil r N\rceil-1\tag{8}$$
Again, IDCT is also conducted in the time dimension only. The resulting shorter hidden states are passed towards upper layers.
Depending on the type of tasks, the subsequent parts differs. We'll elaborate them in encoder-only and encoder-decoder settings.
Encoder-Only Setting For encoder-only tasks such as text classification, the final output of the 4we've played with various ways of truncation here, such as cutting off the low-frequency components, cutting off the ones with lower mean amplitudes, removing the DC components and re-normalize the sequence, subtracting a common value on all components and re-normalize, or retaining the components corresponding to the spikes. Interestingly, the rather classical way of simply chopping off high frequency ones turns out to work the best.
encoder is expected to be a fixed-size vector, which is then fed into logistic regression for class probability predictions. In this work, while the model is trained from scratch, we simply use a mean pooling over the whole output sequence to yield this vector; otherwise when the model inherits a [CLS] token from pretrained models, we use the embedding at that token instead.
Encoder-Decoder Setting For language generation tasks that involve both an encoder and a decoder, there is an encoder-decoder attention that attends to the encoder states at each decoder step.
However, the encoder-decoder attention requires fine-grained positional resolution in order to work well. As a result we follow Dai et al. (2020) to upsample the shorter sequences back to their original length, and add the upsampled hidden sequences at all blocks together before feeding them to the decoder. More specifically, we use the parameterfree nearest neighbor interpolation for upsampling, and we re-normalize the sequence after adding the upsampled sequences.
## 4.2 Further Pretraining
Since the DCT is reversible through IDCT, the proposed model seamlessly approximates the vanilla Transformer as r goes up. Figure 3 shows that while fine-tuning directly on BART (Lewis et al.,
2019) weights, the model performs comparatively well when up to 70% frequency components are truncated. Nevertheless, since the upsampling and addition of upsampled sequences still differs from the original Transformer, we can still squeeze the last drop out by applying a tiny amount of further pretraining before fine-tuning, and further improve the model performance. This type of further pretraining is much more favourable than a customized pretraining from scratch, which could take massive amount of computation resources.
As a concrete example, further pretraining our model on BART-Large consumes around 10GB of data and takes around 4 days on 2 NVidia A100 GPUs, while pretraining BART from scratch needs to consume 160GB data, taking roughly 1000 days with the same devices. Compared to a customized pre-training from scratch, leveraging BART weights and further pretraining takes 2 magnitudes less computation resources, while still able to bring the model to similar or even better performance.
## 4.3 Complexity Analysis
For a standard Transformer layer with model dimension D, which consists of self-attention and 2 feed-forward layers, the time and memory complexity of processing an input sequence with length N is O(N2D + ND2) and O(N2 + ND), respectively. With FFT operator our model could compress the sequence length from N to ⌈rN⌉
within O(N log N) time complexity. Hence the Fourier Transformer enjoys time and memory complexity of O(r 2N2D + rND2 + N log N) and O(r 2N2 + rND) every time the sequence length is reduced. Actually, given the parallel implementation of FFT, the additional O(N log N) time complexity term is negligible compared to the other two terms. The speedup could get even more impressive when the sequence length is relatively long.
We refer the readers to Section 5.1 for more details.
## 5 Experiments
In this section, we experiment with our model in both of the two encoder-only and encoder-decoder settings in various datasets that involves long sequences.
## 5.1 Encoder-Only Tasks
To test our model's ability on encoder-only tasks, we choose the 5 tasks in the widely-used Long Range Arena (LRA) benchmark (Tay et al., 2020a).
LRA is designed for evaluating efficient transformers under long-context scenario, with the input sequence lengths ranging from 1K to 8K. The datasets in LRA come from rich sources, including natural languages, image pixels, math expressions etc. More specifically, they are:
ListOps A dataset of math expressions that asks the model to calculate the output value of a math expression with sequence lengths up to 2K.
Text A byte-level text classification task, with a fixed sequence length 4K which requires the model to deal with compositionality.
Retrieval A byte-level document retrieval task with a maximum length of 8K which test the model's ability to compress long sequences.
Image An image classification task of which requires the model to learn the 2D spatial relations between input pixels by sequentially reading the pixels. The sequence length is fixed to 1K.
| Models | ListOps | Text | Retrieval | Image | Pathfinder | Avg. |
|---------------------------------------|-----------|--------|-------------|---------|--------------|--------|
| Transformer (Vaswani et al., 2017) | 36.37 | 64.27 | 57.46 | 42.44 | 71.40 | 54.39 |
| Longformer (Beltagy et al., 2020) | 35.63 | 62.85 | 56.89 | 42.22 | 69.71 | 53.46 |
| Linformer (Wang et al., 2020) | 35.70 | 53.94 | 52.27 | 38.56 | 76.34 | 51.36 |
| Reformer (Kitaev et al., 2020) | 37.27 | 56.10 | 53.40 | 38.07 | 68.50 | 50.67 |
| Synthesizer (Tay et al., 2021a) | 36.99 | 61.68 | 54.67 | 41.61 | 69.45 | 52.88 |
| BigBird (Zaheer et al., 2020) | 36.05 | 64.02 | 59.29 | 40.83 | 74.87 | 55.01 |
| Performer (Choromanski et al., 2020a) | 18.01 | 65.40 | 53.82 | 42.77 | 77.50 | 51.41 |
| FNet (Lee-Thorp et al., 2021) | 35.55 | 65.11 | 59.61 | 38.67 | 77.80 | 55.30 |
| Nyström (Xiong et al., 2021) | 37.15 | 65.52 | 79.56 | 41.58 | 70.94 | 58.95 |
| Luna-256 (Ma et al., 2021) | 37.25 | 64.57 | 79.29 | 47.38 | 77.32 | 61.24 |
| FSAT (Zhuang et al., 2022) | 46.85 | 65.95 | 81.11 | 49.97 | 77.32 | 64.24 |
| Fourier Transformer (ours) | 40.73 | 75.02 | 85.35 | 53.17 | 83.43 | 67.54 |
Table 1: The results on LRA benchmark. We report classification accuracy for each task and average accuracy across all tasks. Results from Longformer to Performer are from Tay et al. (2020a), the rest are fetched from their respective papers. For FSAT model on Text task, we only consider the result without convolutions.
| Steps per second ↑ | Peak Memory Usage ↓ | | | | | | | |
|----------------------------|-----------------------|-------|-------|-------|-------|-------|-------|-------|
| Model | 1K | 2K | 3K | 4K | 1K | 2K | 3K | 4K |
| Transformer | 1.0x | 1.0x | 1.0x | 1.0x | 1.0x | 1.0x | 1.0x | 1.0x |
| Reformer | 0.5x | 0.4x | 0.7x | 0.8x | 0.56x | 0.37x | 0.28x | 0.24x |
| BigBird | 0.9x | 0.8x | 1.2x | 1.1x | 0.91x | 0.56x | 0.4x | 0.3x |
| Synthesizer | 1.1x | 1.2x | 2.9x | 1.4x | 0.76x | 0.75x | 0.74x | 0.74x |
| FSAT | 1.1x | 1.5x | 2x | 2.5x | 0.53x | 0.27x | 0.21x | 0.16x |
| Linformer | 1.2x | 1.9x | 3.7x | 5.5x | 0.44x | 0.21x | 0.18x | 0.1x |
| Performer | 1.2x | 1.9x | 3.8x | 5.7x | 0.44x | 0.22x | 0.15x | 0.11x |
| Fourier Transformer (ours) | 6.9x | 12.2x | 16.8x | 17.7x | 0.23x | 0.19x | 0.18x | 0.18x |
Table 2: The speed and memory consumption on LRA benchmark over **Text** task with input lengths of 1K, 2K, 3K
and 4K. The results from Reformer to Performer are from Zhuang et al. (2022). The speed and memory consumption are listed as the rate w.r.t. the vanilla Transformer.
Pathfinder An synthetic image classification task with a fixed input length of 1K which requires the model to capture long-range spatial dependencies.
## 5.1.1 Implementation Details
We run experiments on the LRA benchmark closely following the configurations in (Tay et al., 2020a),
including data pre-processing, data split, model architecture, hyperparameters (number of layers, hidden dimensions, etc.). We evaluate in terms of classification accuracy. Our implementation is based on (Xiong et al., 2021). For the sake of simplicity, we report the results of our model over the five tasks with the same compression budget.
We aggressively reduce 80% of the input sequence
## Length At The First Layer. 5.1.2 Performance & Efficiency
The results on the aforementioned 5 tasks are summarized in Table 1. We compare Fourier Transformer with a bunch of previously published Transformer-based models, and it achieves new state-of-the-art results on four out of the five tasks.
Our proposed model improves over the previous SOTA model (Zhuang et al., 2022) on Text, Retrieval, *Image* and *Pathfinder* by 9.07%, 4.24%,
3.20%, 6.11% absolute value respectively, which is a big margin. Notably, our model doesn't beat FSAT (Zhuang et al., 2022) on the ListOps task and ranks the 2nd in the list. We conjecture that it's because math expression values are more sensitive to individual tokens in the sequence, thus is more sensitive to downsampling.
Next, taking the byte-level text classification task
(the Text dataset) as a testbed, we quantitatively evaluate the time and memory efficiency of our model and the other competing models on various input lengths. The results are summarized in Table 2. Note that, due to the limitation of GPU memory for the vanilla Transformer, results on 1K, 2K and 3K lengths are run with a batch size of 32, and 4K are with a batch size of 16. We calculate the corresponding rates of our model w.r.t. vanilla Transformer on identical batch settings, and timed on an NVidia A100-80G GPU. Compared with other efficient transformers, Fourier Transformer significantly reduces time consumption on both short and long sequences, leaving the other model behind by a large margin, while keeping a steady memory savings as the sequence length grows.
## 5.2 Encoder-Decoder Tasks
The model for encoder-decoder tasks are equipped with a decoder to perform text generation. For this setting, we choose two long-text datasets in summarization and question answering tasks, i.e., CNN/DailyMail (Hermann et al., 2015) and ELI5
(Fan et al., 2019c), with average sequence lengths at 0.8K and 5K, respectively.
CNN/DailyMail A summarization dataset containing over 280K news articles (766 token counts on average) from news stories in CNN and Daily Mail websites paired with human-generated summaries (53 token counts on average). We follow the conversion and evaluate the performance in terms of Rouge scores (Rouge-1, Rouge-2, RougeL) (Lin, 2004).
ELI5 A question answering dataset containing over 270K complex, diverse and paragraph-length question-answer pairs gathered from subreddits, the average number of tokens for input and target are 5140 and 693 respectively. Following the conversion, we evaluate it in both Rouge-L and F1 scores.
## 5.2.1 Implementation Details
Since on both the two datasets pretrained models leave a large gap over non-pretrained ones, it makes less sense to report results without pretraining. Thus, we report results of our model inheriting BART-large (Lewis et al., 2019) weights. We generally test two settings, which is: 1) directly fine-tune our model on the dataset, and 2) conduct further pretraining before fine-tuning. For convenience, we call them *Fourier-BART* and *Fourier-BART-FP*
respectively in the rest of the paper.
Fourier-BART has the same architecture as BART-large. It simply adopts a 2-block design, the first block contains the first 2 consecutive transformer layers, the rest 10 layers belong to the second block. For CNN/DailyMail, 50% of the frequency components are truncated, while for ELI5 70% are truncated since it has much longer sequence lengths.
Fourier-BART-FP has the same setting as Fourier-BART, except that before fine-tuning on downstream tasks it is further pretrained for 1 epoch on 10GB of text with the original BART
pretraining objectives. The text is randomly sliced from the Pile (Gao et al., 2020) corpus.
## 5.2.2 Performance & Efficiency
CNN/DailyMail On summarization task, inside the scope of efficient models, we compare our model with BigBird (Zaheer et al., 2020), ST-MoE
(Zoph et al., 2022) and Switch Transformer (Fedus et al., 2021), which are strong baselines from recent literature. Both ST-MoE and Switch Transformer targeted at activating only part of the parameters to improve the efficiency. Bigbird approximates full attention matrix with a sparse one to improve on FLOPs. In addition, we put the standard BART
(Lewis et al., 2019) performance as baseline.
The results are listed in Table 3. Our proposed Fourier-BART successively leverages the advantage of BART, achieving a performance at the level of pretrained model. With the tiny amount of further pretraining, it achieves the best performance among all competitors. Note that Fourier-BART is built upon BART and sharing the same model size with BART-400M with much less computation, however it is able to outperform the standard BART-400M with a sensible margin.
As for efficiency, it is almost impossible to reproduce all the models listed in Table 3 and investigate their efficiency, so we choose to only evaluate the standard BART-400M and proposed Fourier-BART400M in terms of FLOPs. As elaborated in Section 5.2.1, we remove 50% from the hidden sequence on the third transformer layer, although the two models have the exact same size, the FLOPs invested in the standard BART-400M is 1.6 times of Fourier-BART-400M. Due to the upsampling and the auto-regressive decoding, the overall reduction
| Model | R-1 | R-2 | R-L |
|----------------------|-------|-------|-------|
| BART-400M | 44.16 | 21.28 | 40.90 |
| ST-MOE-L-770M | - | 20.7 | - |
| Switch Trans.-223M | - | 19.6 | - |
| BigBird-Large | 43.84 | 21.11 | 40.74 |
| Fourier-BART-400M | 44.65 | 21.48 | 41.30 |
| Fourier-BART-FP-400M | 44.76 | 21.55 | 41.34 |
| Model | RL | F1 |
|----------------------|-------|-------|
| LayerDrop-240M | 23.4 | - |
| E-MCA-240M | 24.0 | - |
| c-REALM*-596M | 23.2 | 22.9 |
| EMAT*-446M | 20.91 | 19.03 |
| KID*-406M | 26.3 | - |
| BART-large-400M | 26.8 | 26.6 |
| Fourier-BART-400M | 26.2 | 25.98 |
| Fourier-BART-FP-400M | 26.9 | 26.73 |
in computation is not as significant as those on LRA.
ELI5 On question answering task, we compare our model with the LayerDrop (Fan et al., 2019b),
E-MCA (Fan et al., 2019a), c-REALMS (Krishna et al., 2021), EMAT (Wu et al., 2022) and KID
(Liu et al., 2022). To provide a fair comparison, the result of BART-large is our reproduced one on the bleeding-edge version of fairseq (Ott et al., 2019),
which is much higher than the results reported in the original BART paper. Note that here we are even comparing with performance-sensitive models, as in the list only EMAT and LayerDrop are focusing on reducing complexity. As shown in Table 4, our Fourier-BART-FP has surpassed all the competing models on both Rouge-L and F1 scores.
As for efficiency, when removing 70% of the frequency components (elaborated in Section 5.2.1),
the FLOPs invested in the standard BART is 1.9 times of Fourier-BART.
![7_image_0.png](7_image_0.png)
![7_image_1.png](7_image_1.png)
## 5.3 Analysis On Retaining Ratio R
An important question that arises is how sensitive the model is w.r.t. the ratio of retaining frequency components. To investigate this, we experiment our model in ELI5 dataset. by sweeping r from 0.1 to 1. We didn't conduct further pretraining on each setting due to computation limit. Results are shown in Fig 3. The performance remains pretty good up until less than 30% of frequency components are retained. When we try to truncate more components passing that ratio, the performance starts to drop significantly. This is a fairly satisfying result that shows the model performs reliably stable in a wide range of reasonable r's.
## 6 Conclusion
In this work, we introduce the discrete cosine transformation to progressively downsample the hidden states in the Transformer model by leveraging the local correlations between hidden states in upper layers. Our approach is able to significantly reduce the computation required by the vanilla Transformer, while being able to achieve even better performance in various tasks. Moreover, it is able to inherit the pretrained model weights, which is an notable advantage over most efficient Transformers.
## 7 Limitations
Although our approach exhibits great speedups in encoder-only settings, it doesn't yield as impressive speedups in encoder-decoder setting. This is due to the autoregresive decoding steps in the decoder, that has to be conducted sequentially. Accelerating that with DCT requires to incrementally update DCT outputs step by step based on outputs of previous timesteps, which is theoretically possible but not easy to optimize its efficiency. We plan to further accelerate it in this direction in future work.
## Acknowledgement
This work was sponsored by the National Natural Science Foundation of China (NSFC) grant (No. 62106143), and Shanghai Pujiang Program (No.
21PJ1405700).
## References
Iz Beltagy, Matthew E Peters, and Arman Cohan. 2020.
Longformer: The long-document transformer. *arXiv* preprint arXiv:2004.05150.
Mark Chen, Alec Radford, Rewon Child, Jeffrey Wu, Heewoo Jun, David Luan, and Ilya Sutskever. 2020.
Generative pretraining from pixels. In *International* conference on machine learning, pages 1691–1703.
PMLR.
Rewon Child, Scott Gray, Alec Radford, and Ilya Sutskever. 2019. Generating long sequences with sparse transformers. *arXiv preprint* arXiv:1904.10509.
Krzysztof Choromanski, Valerii Likhosherstov, David Dohan, Xingyou Song, Andreea Gane, Tamas Sarlos, Peter Hawkins, Jared Davis, David Belanger, Lucy Colwell, et al. 2020a. Masked language modeling for proteins via linearly scalable long-context transformers. *arXiv preprint arXiv:2006.03555*.
Krzysztof Choromanski, Valerii Likhosherstov, David Dohan, Xingyou Song, Andreea Gane, Tamas Sarlos, Peter Hawkins, Jared Davis, Afroz Mohiuddin, Lukasz Kaiser, et al. 2020b. Rethinking attention with performers. *arXiv preprint arXiv:2009.14794*.
Zihang Dai, Guokun Lai, Yiming Yang, and Quoc Le.
2020. Funnel-transformer: Filtering out sequential redundancy for efficient language processing. *Advances in neural information processing systems*,
33:4271–4282.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. *arXiv preprint arXiv:1810.04805*.
Angela Fan, Claire Gardent, Chloé Braud, and Antoine Bordes. 2019a. Using local knowledge graph construction to scale seq2seq models to multi-document inputs. *arXiv preprint arXiv:1910.08435*.
Angela Fan, Edouard Grave, and Armand Joulin. 2019b.
Reducing transformer depth on demand with structured dropout. *arXiv preprint arXiv:1909.11556*.
Angela Fan, Yacine Jernite, Ethan Perez, David Grangier, Jason Weston, and Michael Auli. 2019c. ELI5:
long form question answering. In Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019, Florence, Italy, July 28- August 2, 2019, Volume 1: Long Papers, pages 3558–3567. Association for Computational Linguistics.
William Fedus, Barret Zoph, and Noam Shazeer. 2021.
Switch transformers: Scaling to trillion parameter models with simple and efficient sparsity.
Leo Gao, Stella Biderman, Sid Black, Laurence Golding, Travis Hoppe, Charles Foster, Jason Phang, Horace He, Anish Thite, Noa Nabeshima, et al. 2020.
The pile: An 800gb dataset of diverse text for language modeling. *arXiv preprint arXiv:2101.00027*.
Karl Moritz Hermann, Tomas Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 2015. Teaching machines to read and comprehend. *Advances in neural information* processing systems, 28.
Jonathan Ho, Nal Kalchbrenner, Dirk Weissenborn, and Tim Salimans. 2019. Axial attention in multidimensional transformers. arXiv preprint arXiv:1912.12180.
Andrew Jaegle, Felix Gimeno, Andy Brock, Oriol Vinyals, Andrew Zisserman, and Joao Carreira. 2021.
Perceiver: General perception with iterative attention. In *International conference on machine learning*, pages 4651–4664. PMLR.
Angelos Katharopoulos, Apoorv Vyas, Nikolaos Pappas, and François Fleuret. 2020. Transformers are rnns: Fast autoregressive transformers with linear attention. In *International Conference on Machine* Learning, pages 5156–5165. PMLR.
Nikita Kitaev, Łukasz Kaiser, and Anselm Levskaya.
2020. Reformer: The efficient transformer. *arXiv* preprint arXiv:2001.04451.
Kalpesh Krishna, Aurko Roy, and Mohit Iyyer. 2021.
Hurdles to progress in long-form question answering. arXiv preprint arXiv:2103.06332.
James Lee-Thorp, Joshua Ainslie, Ilya Eckstein, and Santiago Ontanon. 2021. Fnet: Mixing tokens with fourier transforms. *arXiv preprint arXiv:2105.03824*.
Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov, and Luke Zettlemoyer. 2019. Bart: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. arXiv preprint arXiv:1910.13461.
Chin-Yew Lin. 2004. Rouge: A package for automatic evaluation of summaries. In *Text summarization* branches out, pages 74–81.
Zhouhan Lin, Minwei Feng, Cicero Nogueira dos Santos, Mo Yu, Bing Xiang, Bowen Zhou, and Yoshua Bengio. 2017. A structured self-attentive sentence embedding. *arXiv preprint arXiv:1703.03130*.
Ruibo Liu, Guoqing Zheng, Shashank Gupta, Radhika Gaonkar, Chongyang Gao, Soroush Vosoughi, Milad Shokouhi, and Ahmed Hassan Awadallah.
2022. Knowledge infused decoding. arXiv preprint arXiv:2204.03084.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019.
Roberta: A robustly optimized bert pretraining approach. *arXiv preprint arXiv:1907.11692*.
Xuezhe Ma, Xiang Kong, Sinong Wang, Chunting Zhou, Jonathan May, Hao Ma, and Luke Zettlemoyer. 2021.
Luna: Linear unified nested attention. Advances in Neural Information Processing Systems, 34:2441– 2453.
Stephen Merity, Caiming Xiong, James Bradbury, and Richard Socher. 2016. Pointer sentinel mixture models. *arXiv preprint arXiv:1609.07843*.
Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, and Michael Auli. 2019. fairseq: A fast, extensible toolkit for sequence modeling. In Proceedings of NAACL-HLT
2019: Demonstrations.
Hao Peng, Nikolaos Pappas, Dani Yogatama, Roy Schwartz, Noah A Smith, and Lingpeng Kong.
2021. Random feature attention. *arXiv preprint* arXiv:2103.02143.
Fabio Petroni, Aleksandra Piktus, Angela Fan, Patrick Lewis, Majid Yazdani, Nicola De Cao, James Thorne, Yacine Jernite, Vladimir Karpukhin, Jean Maillard, et al. 2020. Kilt: a benchmark for knowledge intensive language tasks. arXiv preprint arXiv:2009.02252.
Jiezhong Qiu, Hao Ma, Omer Levy, Wen-tau Yih, Sinong Wang, and Jie Tang. 2020. Blockwise selfattention for long document understanding. In Findings of the Association for Computational Linguistics:
EMNLP 2020, pages 2555–2565.
Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J Liu, et al. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. *J. Mach. Learn. Res.*, 21(140):1–67.
Aurko Roy, Mohammad Saffar, Ashish Vaswani, and David Grangier. 2021. Efficient content-based sparse attention with routing transformers. *Transactions of* the Association for Computational Linguistics, 9:53– 68.
Carmelo Scribano, Giorgia Franchini, Marco Prato, and Marko Bertogna. 2022. Dct-former: Efficient self-attention withdiscrete cosine transform. *arXiv* preprint arXiv:2203.01178.
Sainbayar Sukhbaatar, Édouard Grave, Piotr Bojanowski, and Armand Joulin. 2019. Adaptive attention span in transformers. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 331–335.
Yi Tay, Dara Bahri, Donald Metzler, Da-Cheng Juan, Zhe Zhao, and Che Zheng. 2021a. Synthesizer: Rethinking self-attention for transformer models. In International conference on machine learning, pages 10183–10192. PMLR.
Yi Tay, Mostafa Dehghani, Samira Abnar, Yikang Shen, Dara Bahri, Philip Pham, Jinfeng Rao, Liu Yang, Sebastian Ruder, and Donald Metzler. 2020a. Long range arena: A benchmark for efficient transformers. arXiv preprint arXiv:2011.04006.
Yi Tay, Mostafa Dehghani, Dara Bahri, and Donald Metzler. 2020b. Efficient transformers: A survey.
ACM Computing Surveys (CSUR).
Yi Tay, Vinh Q Tran, Sebastian Ruder, Jai Gupta, Hyung Won Chung, Dara Bahri, Zhen Qin, Simon Baumgartner, Cong Yu, and Donald Metzler.
2021b. Charformer: Fast character transformers via gradient-based subword tokenization. arXiv preprint arXiv:2106.12672.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. Advances in neural information processing systems, 30.
Sinong Wang, Belinda Z Li, Madian Khabsa, Han Fang, and Hao Ma. 2020. Linformer: Self-attention with linear complexity. *arXiv preprint arXiv:2006.04768*.
Genta Indra Winata, Samuel Cahyawijaya, Zhaojiang Lin, Zihan Liu, and Pascale Fung. 2020. Lightweight and efficient end-to-end speech recognition using low-rank transformer. In ICASSP 2020-2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 6144–6148.
IEEE.
Felix Wu, Angela Fan, Alexei Baevski, Yann N
Dauphin, and Michael Auli. 2019. Pay less attention with lightweight and dynamic convolutions. *arXiv* preprint arXiv:1901.10430.
Yuxiang Wu, Yu Zhao, Baotian Hu, Pasquale Minervini, Pontus Stenetorp, and Sebastian Riedel.
2022. An efficient memory-augmented transformer for knowledge-intensive nlp tasks. arXiv preprint arXiv:2210.16773.
Yunyang Xiong, Zhanpeng Zeng, Rudrasis Chakraborty, Mingxing Tan, Glenn Fung, Yin Li, and Vikas Singh.
2021. Nyströmformer: A nyström-based algorithm for approximating self-attention. In *Proceedings of* the AAAI Conference on Artificial Intelligence, volume 35, pages 14138–14148.
Manzil Zaheer, Guru Guruganesh, Kumar Avinava Dubey, Joshua Ainslie, Chris Alberti, Santiago Ontanon, Philip Pham, Anirudh Ravula, Qifan Wang, Li Yang, et al. 2020. Big bird: Transformers for longer sequences. *Advances in Neural Information* Processing Systems, 33:17283–17297.
Chen Zhu, Wei Ping, Chaowei Xiao, Mohammad Shoeybi, Tom Goldstein, Anima Anandkumar, and Bryan Catanzaro. 2021. Long-short transformer: Efficient transformers for language and vision. *Advances in Neural Information Processing Systems*,
34:17723–17736.
Yimeng Zhuang, Jing Zhang, and Mei Tu. 2022. Longrange sequence modeling with predictable sparse attention. In *Proceedings of the 60th Annual Meeting* of the Association for Computational Linguistics (Volume 1: Long Papers), pages 234–243.
Barret Zoph, Irwan Bello, Sameer Kumar, Nan Du, Yanping Huang, Jeff Dean, Noam Shazeer, and William Fedus. 2022. Designing effective sparse expert models. *arXiv preprint arXiv:2202.08906*.
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
✓ A1. Did you describe the limitations of your work?
7
A2. Did you discuss any potential risks of your work?
Not applicable. Left blank.
✓ A3. Do the abstract and introduction summarize the paper's main claims?
1
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✗ **Did You Use Or Create Scientific Artifacts?**
Left blank.
B1. Did you cite the creators of artifacts you used?
No response.
B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
No response.
B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
No response.
B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
No response.
B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
No response.
B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
No response.
## C ✓ **Did You Run Computational Experiments?** 5
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
5 The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance.
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
5
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
5
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
5 D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
No response.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? No response.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
No response.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
No response.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
No response. |
han-etal-2023-zero | Zero-Shot Classification by Logical Reasoning on Natural Language Explanations | https://aclanthology.org/2023.findings-acl.571 | Humans can classify data of an unseen category by reasoning on its language explanations. This ability is owing to the compositional nature of language: we can combine previously seen attributes to describe the new category. For example, we might describe a sage thrasher as {``}it has a slim straight relatively short bill, yellow eyes and a long tail{''}, so that others can use their knowledge of attributes {``}slim straight relatively short bill{''}, {``}yellow eyes{''} and {``}long tail{''} to recognize a sage thrasher. Inspired by this observation, in this work we tackle zero-shot classification task by logically parsing and reasoning on natural language explanations. To this end, we propose the framework CLORE (Classification by LOgical Reasoning on Explanations). While previous methods usually regard textual information as implicit features, CLORE parses explanations into logical structures and then explicitly reasons along this structure on the input to produce a classification score. Experimental results on explanation-based zero-shot classification benchmarks demonstrate that CLORE is superior to baselines, which we show is mainly due to higher scores on tasks requiring more logical reasoning. We also demonstrate that our framework can be extended to zero-shot classification on visual modality. Alongside classification decisions, CLORE can provide the logical parsing and reasoning process as a clear form of rationale. Through empirical analysis we demonstrate that CLORE is also less affected by linguistic biases than baselines. | # Zero-Shot Classification By Logical Reasoning On Natural Language Explanations
Chi Han 1, Hengzhi Pei 1, Xinya Du 2**, Heng Ji** 1 1 University of Illinois at Urbana-Champaign 2 The University of Texas at Dallas
{chihan3,hpei4,hengji}@illinois.edu, [email protected]
## Abstract
Humans can classify data of an unseen category by reasoning on its language explanations.
This ability is owing to the compositional nature of language: we can combine previously seen attributes to describe the new category.
For example, we might describe a sage thrasher as "it has a slim straight relatively short bill, yellow eyes and a long tail", so that others can use their knowledge of attributes "slim straight relatively short bill", "yellow eyes" and "long tail" to recognize a sage thrasher. Inspired by this observation, in this work we tackle zero-shot classification task by logically parsing and reasoning on natural language explanations. To this end, we propose the framework CLORE (Classification by LOgical Reasoning on Explanations). While previous methods usually regard textual information as implicit features, CLORE parses explanations into logical structures and then explicitly reasons along thess structures on the input to produce a classification score. Experimental results on explanation-based zero-shot classification benchmarks demonstrate that CLORE is superior to baselines, which we further show mainly comes from higher scores on tasks requiring more logical reasoning. We also demonstrate that our framework can be extended to zeroshot classification on visual modality. Alongside classification decisions, CLORE can provide the logical parsing and reasoning process as a clear form of rationale. Through empirical analysis we demonstrate that CLORE is also less affected by linguistic biases than baselines.
1
## 1 Introduction
Humans are capable of understanding new categories by reasoning on natural language explanations (Chopra et al., 2019; Tomasello, 2009). For example, in Figure 1, we can describe sage thrashers as "having a slim straight relatively short bill, 1Code and data will be made publicly available upon publication
![0_image_0.png](0_image_0.png)
![0_image_1.png](0_image_1.png)
Figure 1: We propose to conduct zero-shot classification by logical reasoning on natural language explanations, just like humans do. This design encourages our approach to better utilize the compositional property in natural language explanations.
yellow eyes and a long tail". Then when we view a real sage thrasher the first time, we can match its visual appearance with attributes "slim straight relatively short bill", "yellow eyes" and "long tail",
and then logically combine these results to recognize it. This ability has been shown to be applicable to both visual objects and abstract concepts (Tomasello, 2009). Compared to learning only through examples, using language information enables humans to acquire higher accuracy in less learning time (Chopra et al., 2019).
One important advantage of learning with natural language explanations is that explanations are often logical and compositional. That is, we can logically decompose the explanation of a new category into previously seen attributes (or similar ones) such as "yellow eyes" and "long tail". This enables us to reuse the knowledge on how these attributes align with visual appearances, and reduce the need for "trial-and-error". Furthermore, learning with explanations provides better interpretability which makes results more trustworthy.
Recently, there have been research efforts on using language information for zero-shot general-
![1_image_0.png](1_image_0.png)
ization. Types of such language information include human-annotated explanations or task-level instructions (Menon et al., 2022; Sanh et al., 2022; Mishra et al., 2022). However, auxiliary language information is often treated merely as additional text sequences to be fed into pre-trained language models. This approach does not fully leverage the compositional nature of natural language, and does not provide sufficient interpretable rationales for its decisions.
Inspired by these observations, in this work we explore classifying unseen categories by logically reasoning on their language explanations. To this end, we propose the framework of Classification by LOgical Reasoning on Explanations (CLORE).
CLORE works in two stages: it first parses an explanation into a logical structure, and then reasons along this logical structure. Figure 2 illustrates an example of classifying sage thrashers in this way.
We first encode the inputs (Figure 2 (a) → (c)) get the logical structure of explanation (Figure 2 (b) →
(d)). Then we detect if the input matches attributes, and we gather the matching scores along the logical structure to output the overall classification score (Figure 2 (c),(d)→(e)). In this case the logical structure consists of AND operators over three attributes. We test the model's zero-shot capacity by letting it learn on a subset of categories, and make it categorize data from other unseen types.
We conduct a thorough set of analysis on the latest benchmark for zero-shot classifier learning with explanations, CLUES (Menon et al., 2022).
Our analysis shows that CLORE works better than baselines on tasks requiring higher level of compositional reasoning, which validates the importance of logical reasoning in CLORE. CLORE also demonstrates better interpretability and robustness against linguistic biases. Furthermore, as a test on generalizability of the proposed approach on other modalities, we built a new benchmark on visual domain: CUB-Explanations. It is built upon the image dataset CUB-200-2011 (Wah et al., 2011),
while we associate each category with a set of language explanations. CLORE consistently outperforms baseline models in zero-shot classification across modalities.
To sum up, our contributions are as follows:
- We propose a novel zero-shot classification framework by logically parsing and reasoning over explanations.
- We demonstrate our model's superior performance and explainability, and empirically show that CLORE is more robust to linguistic biases and reasoning complexity than blackbox baselines.
- We demonstrate the universality of the proposed approach by building a new benchmarks, CUB-Explanations. It is derived from CUB-200-2011 (Wah et al., 2011) by collecting natural language explanations for each category.
## 2 Related Work
Classification with Auxiliary Information This work studies the problem of classification through explanations, which is related to classification with auxiliary information. For example, in the natural language processing field, Mann and McCallum
(2010); Ganchev et al. (2010) incorporate side information (such as class distribution and linguistic structures) as a regularization for semi-supervised learning. Some other efforts convert crowd-sourced explanations into pseudo-data generators for data augmentation when training data is limited (Wang et al., 2020a; Hancock et al., 2018; Wang et al.,
2020b). However, these explanations are limited to describing linguistic patterns (e.g., "this is class X because word A directly precedes B"), and are only used for generating pseudo labels. A probably more related topic is using explanations for generating a vector of features for classification (Srivastava et al., 2017, 2018). However, they either learn a black-box final classifier on features or rely on observed attributes of data, so their ability of generalization is limited.
The computer vision area widely uses classlevel auxiliary information such as textual metadata, class taxonomy and expert-annotated feature vectors (Yang et al., 2022; Akata et al., 2015b; Xian et al., 2016; Lampert et al., 2009; Akata et al.,
2015a; Samplawski et al., 2020). However, the use of label names and class explanations is mainly limited to a simple text encoder (Akata et al., 2015b; Xian et al., 2016; Liu et al., 2021; Norouzi et al.,
2014). This processing treats every text as one simple vector in similarity space or probability space, whereas our method aims to reason on the explanation and exploit its compositional nature.
Few-shot and Zero-shot Learning with Language Guidance This work deals with the problem of learning with limited data with the help of natural language information, which is closely related to few-shot and zero-shot learning with language guidance in NLP domain (Hancock et al.,
2018; Wang et al., 2020b; Srivastava et al., 2017, 2018; Yu et al., 2022; Huang et al., 2018). Besides the discussions in the previous subsection, recent pre-trained language models (LMs) (Devlin et al.,
2019; Liu et al., 2019; Tam et al., 2021; Gao et al.,
2021; Yu et al., 2022) have made huge progress in few-shot and zero-shot learning. To adapt LMs to downstream tasks, common practices are to formulate them as cloze questions (Tam et al., 2021; Schick and Schütze, 2021; Menon et al., 2022; Li et al., 2022b) or use text prompts (Mishra et al.,
2022; Ye et al., 2021; Sanh et al., 2022; Aghajanyan et al., 2021). These approaches hypothetically utilize the language models' implicit reasoning ability (Menon et al., 2022). However, in this work we demonstrate with empirical evidence that adopting an explicit logical reasoning approach can provide better interpretability and robustness to linguistic biases.
In computer vision, recently there has been impressive progress on vision-language pre-trained models (VLPMs) (Li et al., 2022a; Radford et al.,
2021; Li et al., 2019; Kim et al., 2021). These methods are trained on large-scale high-quality visiontext pairs with contrastive learning (Radford et al.,
2021; Kim et al., 2021; Li et al., 2019) or mask prediction objective (Kim et al., 2021; Li et al.,
2019). However, these model mostly focus on representation learning than understanding the compositionality in language. As we will show through experiments, VLPMs fits data better at the cost of zero-shot generalization performance.
There are also efforts in building benchmarks for cross-task generalization with natural language explanations or instructions (Mishra et al., 2022; Menon et al., 2022). We use the CLUES benchmark (Menon et al., 2022) in our experiment for structured data classification, but leave Mishra et al.
(2022) for future work as its instructions are focused on generally describing the task instead of defining categories/labels.
Neuro-Symbolic Reasoning for Question Answering is also closely related to our approach.
Recent work (Mao et al., 2019; Yi et al., 2018; Han et al., 2019) has demonstrated its efficacy in question answering, concept learning and image retrieval. Different from our work, previous efforts mainly focus on question answering tasks, which contains abundant supervision for parsing natural language questions. In classification tasks, however, the number of available explanations is much more limited (100∼1000), which poses a higher challenge on the generalization of reasoning ability.
## 3 Logical Parsing And Reasoning
Explanation-based classification is, in essence, a bilateral matching problem between inputs and explanations. Instead of simply using similarity or entailment scores, in this work we aim at better utilizing the logical structure of natural language explanations. A detailed illustration of our proposed model, CLORE, is shown in Figure 2. At the core of the approach is a 2-stage logical matching process: logical parsing of the explanation (Figure 2(d)) and logical reasoning on explanation and inputs to obtain the classification scores (Figure 2(e)). Rather than using sentence embeddings, our approach focuses more on the logical structure of language
![3_image_0.png](3_image_0.png)
explanations, setting it apart from logic-agnostic baselines such as ExEnt and RoBERTa-sim (which is based on sentence embedding similarity). To the best of our knowledge, ours is the first attempt to utilize logical structure in zero-shot classification benchmarks, and it also serves as a proof of concept for the importance of language compositionality. In the following part of this section we will describe these two stages. More implementation details including input representation can be found at Section 4 and 5.
## 3.1 Logical Parsing
This stage is responsible for detecting attributes mentioned in an explanation as well as recovering the logical structure on top of these attributes.
(Figure 2(b) to Figure 2(d)). A more detailed illustration is given in Figure 3. We divide this parsing into 2 steps:
Step 1: Selecting attribute Candidates We deploy a attribute detector to mark a list of attribute candidates in the explanations. Each attribute candidate is associated with an attention map as in Figure 3. First we encode the explanation sentence with a pre-trained language encoder, such as RoBERTa (Liu et al., 2019). This outputs a sentence embedding vector and a sequence of token embedding vectors. Then we apply an attention-based Gated Recurrent Unit (GRU) network (Qiang et al., 2017). Besides the output vector at each recurrent step, attention-based GRU also outputs an attention map over the inputs that is used to produce the output vector. In this work, we use the sentence embedding vector as the initialization vector h 0for GRU, and word embeddings as the inputs. We run GRU for a maximum of T (a hyperparameter) steps, and get T attention weight maps.
Finally we adopt these attention maps to acquire weighted sums of token features {wt|t ∈ [1..T]}
as attribute embeddings.
Step 2: Parsing Logical Structure The goal of this step is to generate a logical structure over the attribute candidates in the previous step. As shown in Figure 2(d), the logical structure is a binary directed tree with nodes being logical operators AND
or OR. Each leaf node corresponds to an attribute candidate. In this work, we need to deal with the problem of undetermined number of attributes, and also allow for differentiable optimization. To this end, we define a fixed list of tree structures within maximum number of T leaf nodes, each resembling the example in Figure 3. A complete list is shown in Appendix A.2. We compute a distribution on templates by applying an multi-layer perceptron
(MLP) with soft-max onto the explanation sentence embedding. This provides a non-negative vector p with sum 1, which we interpret as a distribution over the logical structure templates. If the number of attributes involved in the template is fewer than T, we discard the excessive candidates in following logical reasoning steps.
## 3.2 Logical Reasoning
After getting attribute candidates and a distribution over logical structures, we conduct logical reasoning on the input to get the classification score. An illustration is provided in Figure 2(e).
Step 1: Matching attributes with Inputs We assume that the input is represented as a sequence of feature vectors X = (x1, x2, · · · , xK). First we define a matching score between attribute embedding wt and input X as the maximum cosine similarity:
sim(*X, w*t): = max k cos(xk, wt).
## Step 2: Probabilisitc Logical Reasoning This
step tackles the novel problem of reasoning over logical structures of explanations. During reasoning, we iterate over each logical tree template and walk along the tree bottom-up to get the intermediate reasoning scores node by node. First, for leaf nodes in the logical tree (which are associated with attributes), we use the attribute-input matching scores in the previous step as their intermediate
| Top-1 acc/% | CLUES-Real | + pre-training |
|---------------|--------------|------------------|
| ExEnt | 54.8 | 52.7 |
| RoBERTa-sim | 45.1 | 46.3 |
| CLORE-plain | 45.8 | 49.8 |
| CLORE | 57.4 | 55.2 |
scores. Then, for a non-leaf node, if it is associated with an AND operator, we define its intermediate score as min(s1, s2) with s1 and s2 following common practice (Mao et al., 2019). If the non-leaf node is associated with an OR operator instead, we use max(s1, s2) as the intermediate score. The intermediate score of the root node s*root* serves as the output reasoning score. Note that we generated a distribution over logical structures rather than a deterministic structure. Therefore, after acquiring the reasoning scores on each structure, we use the probability distribution weight p to sum up the scores s of all structures. The resulting score is then equivalent to probabilistically logical reasoning over a distribution of logical structures.
$$s_{e x p l}=p^{\top}s$$
We also consider that some explanations might be more or less certain than others. When using words like "maybe", the explanation is less certain than another explanation using word "always". We model this effect by associating each explanation with a certainty value c*certainty*, which is produced by another MLP on the explanation sentence embedding. So we scale the score s*expl* with c*certainty* in logit scale:
$$\cdot(s_{e x p l}))$$
sscaled = σ(c*certainty* · logit(s*expl*))
Intuitively, the training phase will encourage the model to learn to assign each explanation a certainty value that best fits the classification tasks.
Step 3: Reasoning over Multiple Explanations There are usually multiple explanations associated with a category. In this case, we take the maximum s*scaled* over the set of explanations as the classification score for this category.
## 4 Experiments On Zero-Shot Classification
In this section we conduct in-depth analysis of our proposed approach towards zero-shot classification with explanations. We start with a latest benchmark, CLUES (Menon et al., 2022), which evaluates the performance of classifier learning with natural language explanations. CLUES focuses on the modality of structured data, where input data is a table of features describing an item. This data format is flexible enough for computers on a wide range of applications, and also benefits quantitative analysis in the rest part of this section.
## 4.1 Clues Benchmark
CLUES is designed as a cross-task generalization benchmark on structured data classification. It consists of 36 real-world and 144 synthetic multi-class classification tasks, respectively. The model is given a set of tasks for learning, and then evaluated on a set of unseen tasks. The inputs in each task constitute a structured table. Each column represents an attribute type, and each row is one input datum. In each task, for each class, CLUES
provides a set of natural language explanations.
We follow the data processing in Menon et al.
(2022) and convert each input into a text sequence.
The text sequence is in the form of "odor | pungent [SEP] ... [SEP] ring-type | pendant", where "odor" is the attribute type name, and "pungent" is the attribute value for this input, so on and so forth. For CLORE, we encode the sentence with RoBERTa (Liu et al., 2019)
2 and use the word embeddings as input features X. More implementation details can be found in Appendix A.1. We use ExEnt as a baseline, which is an text entailment model introduced in the CLUES paper. ExEnt uses pre-trained RoBERTa as backbone. It works by encoding concatenated explanations and inputs, and then computing an entailment score. We also introduce a similaritybased baseline, RoBERTa-sim, which uses cosine between RoBERTa-encoded inputs and explanations as classification scores. Finally, we compare with CLORE-plain as an ablation study, which ignores the logical structure in CLORE and plainly addes all attribute scores as the overall classification scofre.
| Task | Natural Language Explanation | Interpreted Logical Structure | | | |
|---------------------------------|--------------------------------------------------|-------------------------------------------|---------------------------------------------------|-----|------------------------------------------------------|
| carevaluation | Cars | with higher safety | and capacity | are | Label(X) = with_higher_safety (X) ∧ and_capacity (X) |
| highly acceptable for resale. | | | | | |
| indian-liverpatient | Age group above 40 | ensures liver patient | Label(X) = group_above_40 (X) ∧ ensures_liver (X) | | |
| soccerleague-type | If the league is W -PSL then its type is women's | Label(X) = league_is_W (X) | | | |
| soccer | | | | | |
| awardnominationresult | If the name of association has 'American' in it | Label(X) = association_has_'American' (X) | | | |
| then the result was mostly won. | | | | | |
| Input | Execution Evidence s1 = with_higher_safety(X) = 0.58 s2 = and_capacity(X) = 0.65 s1∧2 = 0.58 | | | | | |
|-----------------------|------------------------------------------------------------------------------------------------|-----------------------------------|------------------|------------------|-------------------------------|---------------------|
| safety | person capacity | buying cost | maintenance cost | … | | |
| high | 4 | med | low | … | s1 = group_above_40(X) = 0.56 | |
| SGPT | SGOT | total bilirubin | age | direct bilirubin | s2 = ensures_liver(X) = 0.57 | |
| 33 | 71 | 4.9 | 65 | 2.7 | s1∧2 = 0.56 | |
| Club | League | Venue | City | … | | |
| Tulsa Spirit | WPSL | Union 8th | Broken Arrow | … | s | league_is_W(X)=0.72 |
| 1 = | | | | | | |
| Association | Category | Nominee | | | | |
| American Comedy award | Funniest Actor in a Motion Picture | s1 = ssociation_has_'American'(X) | | | | |
| Meg Ryan | = 0.69 | | | | | |
## 4.2 Zero-Shot Classification Results
Zero-shot classification results are listed in Table 1.
CLORE outperforms the baseline methods on main evaluation metrics. To understand the effect of backbound model, we need to note that ExEnt also uses RoBERTa as the backbone model, so the CLORE and baselines do not exhibit a significant difference in basic representation abilities. The inferior performance of RoBERTa-sim compared to ExEnt highlights the complexity of the task, indicating that it demands more advanced reasoning skills than mere sentence similarity. Furthermore, as an ablation study, CLORE outperforms CLORE-plain, which serves as initial evidence on the importance of logical structure in reasoning.
## 4.3 Effect Of Explanation Compositionality
What causes the difference in performance between CLORE and baselines? To answer this question, we investigate into how the models' performance varies with the compositionality of each task on CLUES. Table 3 provides a pair of examples. An explanations is called "simple explanation" if it only describes one attribute, e.g., "If safety is high, then the car will not be unacceptable.". Other explanations describe multiple attributes to define a class, e.g., "Cars with higher safety and medium luggage boot size are highly acceptable for resale.".
We define the latter type as "compositional explanation". In Figure 7 we plot the classification accuracy against the proportion of compositional explanations in each subtask's explanation set. Intuitively, with more compositional explanations, the difficulty of the task increases, so generally we should expect a drop in performance. Results show that, on tasks with only simple explanations
(x-value = 0), both models perform similarly. However, with higher ratio of compositional explanations, CLORE's performance generally remains stable, but ExEnt's performance degrades. This validates our hypothesis that CLORE's performance gain mainly benefits from its better compositional reasoning power.
To further explore the effect of logical reasoning on model performance. Figure 5 plots the performance regarding the maximum number of
![6_image_0.png](6_image_0.png)
| Compositional Explanation Cars with higher safety and medium luggage boot size are highly acceptable for resale. Simple Explanation If safety is high, then the car will not be unacceptable. |
|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
![6_image_3.png](6_image_3.png)
attributes T. Generally speaking, when T is larger, CLORE can model more complex logical reasoning process. When T = 1, the model reduces to a simple similarity-based model without logical reasoning. The figure shows that when T is 2∼3, the model generally achieves the highest performance, which also aligns with our intuition in the section 3.
We hypothesize that a maximum logical structure length up to 4 provides insufficient regularization, and CLORE is more likely to overfit the data.
## 4.4 Interpretability
CLORE is interpretable in two senses: 1) it parses logical structures to explain how the explanations are interpreted, and 2) the logical reasoning evidence serves as decision making rationales. To demonstrate the interpretability of CLORE, in Table 2 and Figure 4 we present examples of the parsed logical structure and reasoning process.
The first example of Table 2 shows that CLORE
selects "*with higher safety*" and "*and capacity*" as attributes candidates, and uses an AND operator over the attributes. In Figure 4 correspondingly, two attributes match with columns 1∼3 and 2∼3, respectively. This example is correctly classified by our model, but mis-classified by the ExEnt baseline.
To quantitatively evaluate the learned attributes, we manually annotate keyword spans for 100 out of 344 explanations. These spans describe the key attributes for making the explanation. When
![6_image_1.png](6_image_1.png)
![6_image_2.png](6_image_2.png)
there are multiple attributes detected, we select the one closest to the keyword span. Then we plot the histogram of the relative position between topattention tokens and annotated keyword spans in Figure 6. From the figure we can see that the majority of top-attention tokens (52%) fall within the range of annotated keyword spans. The ratio increases to 81% within distance of 5 tokens from the keyword span, and 95% within distance of 10 tokens.
## 4.5 Robustness To Linguistic Bias
Linguistic biases are prevalent in natural language, which can subtly change the emotions and stances of the text (Field et al., 2018; Ziems and Yang, 2021). Pre-trained language models have also been found to be affected by subtle linguistic perturbations (Kojima et al., 2022) and hints (Patel and Pavlick, 2021).
In this section we investigate how different models are affected by these linguistic biases in inputs.
To this end, we experiment on 3 categories of linguistic biases. *Punctuated*: inspired by discussions about linguistic hints in (Patel and Pavlick, 2021),
![7_image_0.png](7_image_0.png)
we append punctuation such as "?" and "..." to the input in order to change its underlying tone.
Hinted: we change the joining character from "|" to phrases with doubting hints such as "is claimed to be". *Verbose*: Transformer-based models are found to attend on a local window of words (Child et al., 2019), so we append a long verbose sentence
(≈ 30 words) to the input sentence to perturb the attention mechanism. These changes are automatically applied.
Results are presented in Figure 8. Compared with the original scores without linguistic biases (the horizontal lines), CLORE's performance is not significantly affected. But ExEnt appears to be susceptible to these biases with a large drop in performance. This result demonstrates that ExEnt also inherits the sensitivity to these linguistic biases from its PLM backbone. By contrast, CLORE is encouraged to explicitly parse explanations into its logical structure and conduct compositional logical reasoning. This provides better inductive bias for classification, and regulates the model from leveraging subtle linguistic patterns.
## 4.6 Linguistic Quantifier Understanding
Linguistic quantifiers is a topic to understand the degree of certainty in natural language (Srivastava et al., 2018; Yildirim et al., 2013). For example, humans are more certain when saying something usually happens, but less certain when using words like *sometimes*. We observe that the certainty coefficient c*certainty* that CLORE learns can naturally serve the purpose the of modelling quantifiers.
We first detect the existence of linguistic quantifiers like *often* and *usually* by simply word matching. Then we take the average of c*certainty* on the matched explanations. We plot these values against expert-annotated "quantifier probabilities" in (Srivastava et al., 2018) in Figure 9. Results show that c*certainty* correlates positively with "quantifier probabilities" with Pearson correlation coefficient value of 0.271. In cases where they disagree, our quantifier coefficients also make some sense, such as assigning *often* a relatively higher value but giving *likely* a lower value.
## 4.7 Linguistic Quantifier Understanding
Linguistic quantifiers is a topic to understand the degree of certainty in natural language (Srivastava et al., 2018; Yildirim et al., 2013). For example, humans are more certain when saying something usually happens, but less certain when using words like *sometimes*. We observe that the certainty coefficient c*certainty* that CLORE learns can naturally serve the purpose the of modelling quantifiers.
We first detect the existence of linguistic quantifiers like *often* and *usually* by simply word matching. Then we take the average of c*certainty* on the matched explanations. We plot these values against expert-annotated "quantifier probabilities" in (Srivastava et al., 2018) in Figure 9. Results show that c*certainty* correlates positively with "quantifier probabilities" with Pearson correlation coefficient value of 0.271. In cases where they disagree, our quantifier coefficients also make some sense, such as assigning *often* a relatively higher value but giving *likely* a lower value.
## 5 Extending To Visual Inputs
Natural language explanations are prevalent in other applications as well. Taking this observation, in this section we evaluate whether CLORE
| Model | ACCU ACCS ACCH | | | |
|------------------|------------------|------|------|------|
| w/o VLPMs | TF-VAEGANexpl | 4.7 | 39.1 | 8.3 |
| CLORE (ours) | 6.6 | 51.1 | 11.7 | |
| CLIPlinear | 34.3 | 41.2 | 37.4 | |
| w/ VLPMs | CLIPf inetuned | 29.9 | 66.9 | 41.3 |
| CLORECLIP (ours) | 39.1 | 65.8 | 49.1 | |
w/ VLPMs
CLIP*linear* 34.3 41.2 37.4
CLIP*f inetuned* 29.9 **66.9** 41.3
CLORE*CLIP* (ours) **39.1** 65.8 **49.1**
Table 4: Generalized zero-shot classification results (in
percentage) on CUB-Explanations dataset.
## 5.1 Datasets
Due to lack of datasets on evaluating zero-shot classification with compositional natural language explanations, we augment a standard visual classification datasets with manually collected explanations.
Specifically, we select CUB-200-2011 (Wah et al.,
2011), a bird image classification, as the recognition of birds benefits a lot from their compositional features (such as colors, shapes, etc.).
CUB-Explanations We build a CUBExplanations dataset based on CUB-200-2011, which originally includes ∼ 12k images with 200 categories of birds. 150 categories are used for training and other 50 categories are left for zero-shot image classification. In this work, we focus on the setting of zero-shot classification using natural language explanations. Natural language explanations of categories are more efficient to collect than the crowd-sourced feature annotations of individual images. They are also similar to human learning process, and would be more challenging for models to utilize. To this end, we collect natural language explanations of each bird category from Wikipedia. These explanations come from the short description part and the Description, Morphology or *Identification* sections in the Wikipedia pages. We mainly focus on the sentences that describe visual attributes that can be recognized in images (e.g. body parts, visual patterns and colors). Finally we get 1∼8 explanation sentences for each category with a total of 991 explanations.
For evaluation, we adopt the three metrics commonly used for generalized zero-shot learning:
ACCU denotes accuracy on unseen categories, ACCS denotes accuracy on seen categories, and their harmonic average ACCH =
2ACCU ACCS
ACCU +ACCS
.
## 5.2 Experiment Setting And Baselines
On CUB-Explanations dataset, we use a pretrained visual encoder to obtain image patch representation vectors. These vectors are then flattened as a sequence and used as visual input X. We use ResNet (He et al., 2016) as visual backbone for CLORE. For baselines, we make comparisons in two groups. The first group of models does not use parameters from pre-trained vision-language models (VLPMs). We adapt TF-VAEGAN (Narayan et al., 2020)
3, a state-of-the-art model on the CUB200 zero-shot classification task, to use RoBERTaencoded explanations as auxiliary information.
This results in the baseline TF-VAEGAN*expl*. The second group of models are those using pre-trained VLPMs. The main baseline we compare with is CLIP (Radford et al., 2021)
4, which is a wellperformed pretrained VLPM. We build two of its variants: CLIP*linear*, which only fine-tunes the final linear layer and CLIP*f inetuned*, which finetunes all parameters on the task. For fairer compasion, in this group we also replace the visual encoder with CLIP encoder in our model and get CLORE*CLIP* .
## 5.3 Classification Results
Results are listed in Table 4 . On CUBExplanations CLORE achieves the highest ACCU
and ACCH both with and without pre-trained vision-language parameters. Note that fine-tuning all parameters of CLIP makes it fit marginally better on seen classes, but sacrifices its generalization ability. Fine-tuning only the final linear layer
(CLIP*linear*) provides slightly better generalizability on unseen categories, but it is still lower than our approach.
## 6 Conclusions And Future Work
In this work, we propose a multi-modal zero-shot classification framework by logical parsing and reasoning on natural language explanations. Our method consistently outperforms baselines across modalities. We also demonstrate that, besides being interpretable, CLORE also benefits more from tasks that require more compositional reasoning, and is more robust against linguistic biases.
There are several future directions to be explored. The most intriguing one is how to utilize pre-trained generative language models for explicit logical reasoning . Another direction is to incorporate semantic reasoning ability in our approach, such as reasoning on entity relations or event roles.
3https://github.com/akshitac8/tfvaegan 4https://github.com/openai/CLIP
## Limitations
The proposed approach focuses more on logical reasoning on explanations for zero-shot classification. The semantic structures in explanations, such as inter-entity relations and event argument relations, are less touched (although the pre-trained language encoders such as BERT provides semantic matching ability to some extent). Within the range of logical reasoning, our focus are more on first-order logic, while leaving the discussion about higher-order logic for future work.
## Ethics Statement
This work is related to and partially inspired by the real-world task of legal text classification. As legal matters can affect the life of real people, and we are yet to fully understand the behaviors of deeplearning-based models, relying more on human expert opinions is still a more prudent choice. While the proposed approach can be utilized for automating the process of legal text, care must be taken before using or referring to the result produced by any machine in legal domain.
## Acknowledgements
We would like to thank anonymous reviewers for valuable comments and suggestions. This work was supported in part by US DARPA KAIROS
Program No. FA8750-19-2-1004. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation here on.
## References
Armen Aghajanyan, Anchit Gupta, Akshat Shrivastava, Xilun Chen, Luke Zettlemoyer, and Sonal Gupta.
2021. Muppet: Massive multi-task representations with pre-finetuning. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 5799–5811, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Zeynep Akata, Florent Perronnin, Zaid Harchaoui, and Cordelia Schmid. 2015a. Label-embedding for image classification. *IEEE transactions on pattern analysis and machine intelligence*, 38(7):1425–1438.
Zeynep Akata, Scott Reed, Daniel Walter, Honglak Lee, and Bernt Schiele. 2015b. Evaluation of output embeddings for fine-grained image classification. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 2927–2936.
Rewon Child, Scott Gray, Alec Radford, and Ilya Sutskever. 2019. Generating long sequences with sparse transformers. arXiv preprint arXiv:1904.10509.
Sahil Chopra, Michael Henry Tessler, and Noah D Goodman. 2019. The first crank of the cultural ratchet:
Learning and transmitting concepts through language.
In *CogSci*, pages 226–232.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In *Proceedings of the 2019 Conference of* the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics.
Anjalie Field, Doron Kliger, Shuly Wintner, Jennifer Pan, Dan Jurafsky, and Yulia Tsvetkov. 2018. Framing and agenda-setting in russian news: a computational analysis of intricate political strategies. In *Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing*, pages 3570–
3580.
Kuzman Ganchev, João Graça, Jennifer Gillenwater, and Ben Taskar. 2010. Posterior regularization for structured latent variable models. *Journal of Machine* Learning Research.
Tianyu Gao, Adam Fisch, and Danqi Chen. 2021.
Making pre-trained language models better few-shot learners. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers),
pages 3816–3830, Online. Association for Computational Linguistics.
Chi Han, Jiayuan Mao, Chuang Gan, Josh Tenenbaum, and Jiajun Wu. 2019. Visual concept-metaconcept learning. *Advances in Neural Information Processing* Systems, 32.
Braden Hancock, Martin Bringmann, Paroma Varma, Percy Liang, Stephanie Wang, and Christopher Ré.
2018. Training classifiers with natural language explanations. In *Proceedings of the conference. Association for Computational Linguistics. Meeting*,
volume 2018, page 1884. NIH Public Access.
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep residual learning for image recognition. In *Proceedings of the IEEE conference on* computer vision and pattern recognition, pages 770–
778.
Lifu Huang, Heng Ji, Kyunghyun Cho, Ido Dagan, Sebastian Riedel, and Clare Voss. 2018. Zero-shot transfer learning for event extraction. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers),
pages 2160–2170.
Wonjae Kim, Bokyung Son, and Ildoo Kim. 2021. Vilt:
Vision-and-language transformer without convolution or region supervision. In *International Conference on Machine Learning*, pages 5583–5594.
PMLR.
Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yutaka Matsuo, and Yusuke Iwasawa. 2022. Large language models are zero-shot reasoners. *arXiv preprint* arXiv:2205.11916.
Christoph H Lampert, Hannes Nickisch, and Stefan Harmeling. 2009. Learning to detect unseen object classes by between-class attribute transfer. In 2009 IEEE conference on computer vision and pattern recognition, pages 951–958. IEEE.
Liunian Harold Li, Mark Yatskar, Da Yin, Cho-Jui Hsieh, and Kai-Wei Chang. 2019. Visualbert: A simple and performant baseline for vision and language.
arXiv preprint arXiv:1908.03557.
Manling Li, Ruochen Xu, Shuohang Wang, Luowei Zhou, Xudong Lin, Chenguang Zhu, Michael Zeng, Heng Ji, and Shih-Fu Chang. 2022a. Clip-event:
Connecting text and images with event structures. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 16420–
16429.
Sha Li, Liyuan Liu, Yiqing Xie, Heng Ji, and Jiawei Han. 2022b. Piled: An identify-and-localize framework for few-shot event detection. arXiv preprint arXiv:2202.07615.
Yang Liu, Lei Zhou, Xiao Bai, Yifei Huang, Lin Gu, Jun Zhou, and Tatsuya Harada. 2021. Goal-oriented gaze estimation for zero-shot learning. In *Proceedings of* the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 3794–3803.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019.
Roberta: A robustly optimized bert pretraining approach. *arXiv preprint arXiv:1907.11692*.
Ilya Loshchilov and Frank Hutter. 2017. Decoupled weight decay regularization. *arXiv preprint* arXiv:1711.05101.
Gideon S. Mann and Andrew McCallum. 2010. Generalized expectation criteria for semi-supervised learning with weakly labeled data. Journal of Machine Learning Research, 11(32):955–984.
Jiayuan Mao, Chuang Gan, Pushmeet Kohli, Joshua B
Tenenbaum, and Jiajun Wu. 2019. The neurosymbolic concept learner: Interpreting scenes, words,
and sentences from natural supervision. In *International Conference on Learning Representations*. International Conference on Learning Representations, ICLR.
Rakesh R Menon, Sayan Ghosh, and Shashank Srivastava. 2022. Clues: A benchmark for learning classifiers using natural language explanations. arXiv preprint arXiv:2204.07142.
Swaroop Mishra, Daniel Khashabi, Chitta Baral, and Hannaneh Hajishirzi. 2022. Cross-task generalization via natural language crowdsourcing instructions.
In *Proceedings of the 60th Annual Meeting of the* Association for Computational Linguistics (Volume 1: Long Papers), pages 3470–3487.
Sanath Narayan, Akshita Gupta, Fahad Shahbaz Khan, Cees GM Snoek, and Ling Shao. 2020. Latent embedding feedback and discriminative features for zero-shot classification. In *European Conference* on Computer Vision, pages 479–495. Springer.
Mohammad Norouzi, Tomas Mikolov, Samy Bengio, Yoram Singer, Jonathon Shlens, Andrea Frome, Greg S Corrado, and Jeffrey Dean. 2014. Zero-shot learning by convex combination of semantic embeddings. In 2nd International Conference on Learning Representations, ICLR 2014.
Roma Patel and Ellie Pavlick. 2021. "was it "stated" or was it "claimed"?: How linguistic bias affects generative language models. In *Proceedings of the* 2021 Conference on Empirical Methods in Natural Language Processing, pages 10080–10095.
C Qiang, W Shu, H Yan, and W Liang. 2017. A hierarchical contextual attention-based gru network for sequential recommendation. *Neurocomputing*.
Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. 2021. Learning transferable visual models from natural language supervision. In International Conference on Machine Learning, pages 8748–8763.
PMLR.
Colin Samplawski, Erik Learned-Miller, Heesung Kwon, and Benjamin M Marlin. 2020. Zero-shot learning in the presence of hierarchically coarsened labels. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pages 926–927.
Victor Sanh, Albert Webson, Colin Raffel, Stephen Bach, Lintang Sutawika, Zaid Alyafeai, Antoine Chaffin, Arnaud Stiegler, Teven Le Scao, Arun Raja, et al. 2022. Multitask prompted training enables zeroshot task generalization. In *The Tenth International* Conference on Learning Representations.
Timo Schick and Hinrich Schütze. 2021. It's not just size that matters: Small language models are also fewshot learners. In *Proceedings of the 2021 Conference* of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 2339–2352.
Shashank Srivastava, Igor Labutov, and Tom Mitchell.
2017. Joint concept learning and semantic parsing from natural language explanations. In Proceedings of the 2017 conference on empirical methods in natural language processing, pages 1527–1536.
Shashank Srivastava, Igor Labutov, and Tom Mitchell.
2018. Zero-shot learning of classifiers from natural language quantification. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 306–316.
Derek Tam, Rakesh R. Menon, Mohit Bansal, Shashank Srivastava, and Colin Raffel. 2021. Improving and simplifying pattern exploiting training. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 4980–4991, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Michael Tomasello. 2009. *The cultural origins of human cognition*. Harvard university press.
Catherine Wah, Steve Branson, Peter Welinder, Pietro Perona, and Serge Belongie. 2011. The caltech-ucsd birds-200-2011 dataset. *Computation and Neural* Systems Technical Report.
Ziqi Wang, Yujia Qin, Wenxuan Zhou, Jun Yan, Qinyuan Ye, Leonardo Neves, Zhiyuan Liu, and Xiang Ren. 2020a. Learning from explanations with neural execution tree. In *ICLR*.
Ziqi Wang, Yujia Qin, Wenxuan Zhou, Jun Yan, Qinyuan Ye, Leonardo Neves, Zhiyuan Liu, and Xiang Ren. 2020b. Learning from explanations with neural execution tree. In *8th International Conference on Learning Representations, ICLR 2020, Addis* Ababa, Ethiopia, April 26-30, 2020. OpenReview.net.
Yongqin Xian, Zeynep Akata, Gaurav Sharma, Quynh Nguyen, Matthias Hein, and Bernt Schiele. 2016.
Latent embeddings for zero-shot classification. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 69–77.
Guanyu Yang, Zihan Ye, Rui Zhang, and Kaizhu Huang.
2022. A comprehensive survey of zero-shot image classification: methods, implementation, and fair evaluation. *Applied Computing and Intelligence*,
2(1):1–31.
Qinyuan Ye, Bill Yuchen Lin, and Xiang Ren. 2021.
Crossfit: A few-shot learning challenge for crosstask generalization in nlp. In *Proceedings of the* 2021 Conference on Empirical Methods in Natural Language Processing, pages 7163–7189.
Kexin Yi, Jiajun Wu, Chuang Gan, Antonio Torralba, Pushmeet Kohli, and Josh Tenenbaum. 2018. Neuralsymbolic vqa: Disentangling reasoning from vision and language understanding. *Advances in neural* information processing systems, 31.
Ilker Yildirim, Judith Degen, Michael Tanenhaus, and Florian Jaeger. 2013. Linguistic variability and adaptation in quantifier meanings. In *Proceedings of the* Annual Meeting of the Cognitive Science Society, 35.
Pengfei Yu, Zixuan Zhang, Clare Voss, Jonathan May, and Heng Ji. 2022. Building an event extractor with only a few examples. In Proceedings of the Third Workshop on Deep Learning for Low-Resource Natural Language Processing, pages 102–109.
Caleb Ziems and Diyi Yang. 2021. To protect and to serve? analyzing entity-centric framing of police violence. In *Findings of the Association for Computational Linguistics: EMNLP 2021*, pages 957–976.
## A Appendix A.1 Configuration And Experiment Setting
We build CLORE on publicly available packages such as HuggingFace Transformers5, where we used model checkpoints as initialization. We train CLORE for 30 epochs in all experiments. In the image classification task on CUB-Explanations, we adopt a two-phase training paradigm: in the first phase we fix both visual encoders and Explanation encoders in EΦ, and in the second phase we finetune all parameters in CLORE.
Across experiments in this work we use the AdamW (Loshchilov and Hutter, 2017) optimizer widely adopted for optimizing NLP tasks. For hyper-parameters in most experiments we follow the common practice of learning rate= 3e − 5, β1 = 0.9, β2 = 0.999, ϵ = 1e − 8 and weight decay= 0.01. An exception is the first phase in image classification where, as we fix the input encoder, the learnable parameters become much less.
Therefore we use the default learning rate= 1e − 3 in AdamW. For randomness control, we use random seed of 1 across all experiments.
In Figure 7, there are multiple data points at xvalue of 0. Therefore, the data variance on data at x = 0 is intrinsic in data, and is unsolvable theoretical for any function fitting the data series. This causes the problem when calculating R2 value, as R2 measures the extent to which the data variance are "explained" by the fitting function. So R2can be upper bounded by: R2 ≤ 1 −
V ar*intrinsic* V ar*total*. To deal with this problem when measuring R2 metric, we removed the intrinsic variance in data point set D by replacing data points (0, yi) ∼ D with
(0, 1 n P(0,yi)∼D yi) in both series in Figure 7 before calculating R2 value.
## A.2 Logical Structure Templates
As the number of valid logical structure templates grows exponentially with maximal attribute numbers T, we limit T to a small value, typically 3. We list the logical structure templates in Table 5.
## A.3 Resources
We use one Tesla V100 GPU with 16GB memory to carry out all the experiments. The training time is 1 hour for tabular data classification on CLUES, 2 hours for image classification on CUBExplanations.
5https://huggingface.co, https://github.com/
huggingface/transformers label(X) = attribute1(X)
label(X) = attribute1(X)∧ attribute2(X) label(X) = attribute1(X)∨ attribute2(X)
label(X) = attribute1(X)∧ attribute2(X)∧ attribute3(X)
label(X) = attribute1(X)∨ attribute2(X)∨ attribute3(X) label(X) = (attribute1(X)∧ attribute2(X))∨ attribute3(X)
label(X) = (attribute1(X)∨ attribute2(X))∧ attribute3(X)
Table 5: The list of logical structure templates at maximum attribute number T = 3.
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
✓ A1. Did you describe the limitations of your work?
Section "Limitations", page 10
✓ A2. Did you discuss any potential risks of your work?
Section "Ethics Statement", page 10
✓ A3. Do the abstract and introduction summarize the paper's main claims?
Section "Abstract" and Section 1 Introduction, page 1
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✗ **Did You Use Or Create Scientific Artifacts?**
Left blank.
B1. Did you cite the creators of artifacts you used?
No response.
B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
No response.
B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
No response.
B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
No response.
B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
No response.
B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
No response.
## C ✓ **Did You Run Computational Experiments?** Section 4 And 5
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
Section A.3 The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance.
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Section A.1
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
Section 4, Figure 8
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
Section 4, 5, and A.1 D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
No response.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? No response.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
No response.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
No response.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
No response. |
li-etal-2023-dual-gated | Dual-Gated Fusion with Prefix-Tuning for Multi-Modal Relation Extraction | https://aclanthology.org/2023.findings-acl.572 | Multi-Modal Relation Extraction (MMRE) aims at identifying the relation between two entities in texts that contain visual clues. Rich visual content is valuable for the MMRE task, but existing works cannot well model finer associations among different modalities, failing to capture the truly helpful visual information and thus limiting relation extraction performance. In this paper, we propose a novel MMRE framework to better capture the deeper correlations of text, entity pair, and image/objects, so as to mine more helpful information for the task, termed as DGF-PT. We first propose a prompt-based autoregressive encoder, which builds the associations of intra-modal and inter-modal features related to the task, respectively by entity-oriented and object-oriented prefixes. To better integrate helpful visual information, we design a dual-gated fusion module to distinguish the importance of image/objects and further enrich text representations. In addition, a generative decoder is introduced with entity type restriction on relations, better filtering out candidates. Extensive experiments conducted on the benchmark dataset show that our approach achieves excellent performance compared to strong competitors, even in the few-shot situation. |
## Dual-Gated Fusion With Prefix-Tuning For Multi-Modal Relation Extraction
Qian Li1,2, Shu Guo3, Cheng Ji1,2, Xutan Peng4, Shiyao Cui5, Jianxin Li1,2∗**, Lihong Wang**3 1School of Computer Science and Engineering, Beihang University, Beijing, China 2Beijing Advanced Innovation Center for Big Data and Brain Computing, Beijing, China 3NationalComputerNetworkEmergencyResponseTechnicalTeam/CoordinationCenterofChina 4The University of Sheffield, South Yorkshire, UK
5Institute of Information Engineering, Chinese Academy of Sciences, Beijing, China
{liqian, jicheng, lijx}@act.buaa.edu.cn, [email protected], [email protected], [email protected], [email protected]
## Abstract
Multi-Modal Relation Extraction (MMRE)
aims at identifying the relation between two entities in texts that contain visual clues. Rich visual content is valuable for the MMRE task, but existing works cannot well model finer associations among different modalities, failing to capture the truly helpful visual information and thus limiting relation extraction performance.
In this paper, we propose a novel MMRE framework to better capture the deeper correlations of text, entity pair, and image/objects, so as to mine more helpful information for the task, termed as DGF-PT. We first propose a promptbased autoregressive encoder, which builds the associations of intra-modal and inter-modal features related to the task, respectively by entityoriented and object-oriented prefixes. To better integrate helpful visual information, we design a dual-gated fusion module to distinguish the importance of image/objects and further enrich text representations. In addition, a generative decoder is introduced with entity type restriction on relations, better filtering out candidates.
Extensive experiments conducted on the benchmark dataset show that our approach achieves excellent performance compared to strong competitors, even in the few-shot situation.
## 1 Introduction
As a fundamental subtask of information extraction, relation extraction (RE) aims to identify the relation between two entities (Cong et al., 2022; Xue et al., 2022). Recently, there is a growing trend in multi-modal relation extraction (MMRE), aiming to classify textual relations of two entities as well as introduce the visual contents. It provides additional visual knowledge that incorporates multi-media information to support various cross-modal tasks such as the multi-modal knowledge graph construction (Zhu et al., 2022; Wang et al., 2019) and visual question answering systems (Wang et al., 2022; Shih et al., 2016).
∗Corresponding author.
Relation: *Member_of* Entity Pair Text Image LeBron James joined the Los Angeles Lakers as a free **agent.**
(a) Helpful LeBron
![0_image_0.png](0_image_0.png)
LeBron James has returned to the Los Angeles Lakers.
(b) Unhelpful
?
LeBron James talks to Davis of the Lakers during a full timeout.
Figure 1: An example of the MMRE task. The task is to predict the relation of given entity pairs for the specific text and image which contains multiple objects.
Existing methods achieved considerable success by leveraging visual information (Zheng et al.,
2021a; He et al., 2022; Chen et al., 2022) since the visual contents provide valuable pieces of evidence to supplement the missing semantics for MMRE. Previous work (Zheng et al., 2021a) introduced the visual relations of objects in the image to enrich text embedding via an attention-based mechanism. Next, HVPNet (Chen et al., 2022)
used an object-level prefix and a multi-scale visual fusion mechanism to guide the text representation learning. Nevertheless, these methods primarily focus on the relations between objects and text and ignore the finer associations (entity pair, text, and image/objects). Furthermore, they usually suffered from the failure of identifying truly helpful parts/objects of the image to the corresponding entity pair on account of introducing all the objects.
This may cause severe performance degradation of downstream tasks.
For multi-modal relation extraction, not all images or their objects are helpful for prediction. As illustrated in Figure 1, given three different inputs with the same relation *Member_of* and entity pair, each of the inputs contains a text, an image, and an entity pair. There are two situations: (a) The image is helpful for relation extraction. For entity pair *LeBron James* and *Lakers*, in the image *LeBron James* wears the Lakers jersey revealing the implied relationship between the two entities. Therefore, we can improve relation extraction by considering the entity-pair relationships in visual information. (b)
The image is unhelpful for the entity pair *LeBron* James and *Lakers* since it only contains *Lakers* object, rather than the association information for entity pairs. Furthermore, the image can provide an incorrect extraction signal, for example, the third image in Figure 1 shows that the relation between LeBron James and *Lakers* is more likely to be misjudged as Coach_of or *Owner_of*. Unhelpful visual content is prone to providing misleading information when predicting the relation. In general, it is necessary to identify the truly helpful visual information to filter the useless and misleading ones, but it is still under-explored.
To overcome the above challenges, we propose a novel MMRE framework DGF-PT to better incorporate finer granularity in the relations and avoiding unhelpful images misleading the model1.
Specifically, we propose a prompt-based autoregressive encoder containing two types of prefixtuning to integrate deeper associations. It makes the model focus on associations of intra-modal (between entity pair and text) by entity-oriented prefix and inter-modal (between objects and text) by the object-oriented prefix. In order to distinguish the importance of image/objects, we design a dualgated fusion module to address the unhelpful visual data by utilizing interaction information via local and global visual gates. Later, we design a generative decoder to leverage the implicit associations and restrict candidate relations by introducing entity type. We further design joint objective to allow the distribution of representations pre and postfusion to be consistent while enhancing the model to identify each sample in the latent space. Experimental results show that our approach achieves excellent performance in the benchmark dataset.
Our contributions can be summarized as follows.
- We technically design a novel MMRE Framework to build deeper correlations among entity pair, text, and image/objects and distinguish helpful visual information.
- We propose a prompt-based autoregressive encoder with two types of prefixes to enforce the intra-modal and inter-modal association. We 1The source code is available at https://github.com/
xiaoqian19940510/DGF-PT.
design dual-gated fusion with a local objectimportance gate and a global image-relevance gate to integrate helpful visual information.
- Experimental results indicate that the framework achieves state-of-the-art performance on the public multi-modal relation extraction dataset, even in the few-shot situation.
## 2 Related Work
Multi-modal relation extraction (MMRE) task, a subtask of multi-modal information extraction in NLP (Sun et al., 2021; Cong et al., 2020; Sui et al.,
2021; Lu et al., 2022), aims to identify textual relations between two entities in a sentence by introducing visual content (Zheng et al., 2021a,b; Chen et al., 2022), which compensates for insufficient semantics and helps to extract the relations.
Recently, there are several works (Zheng et al.,
2021a,b; Chen et al., 2022) beginning to focus on the multi-modal relation extraction technology.
As the first work, MNRE (Zheng et al., 2021b)
developed a multi-modal relation extraction baseline model. It demonstrates that introducing multimodal information supplements the missing semantics and improves relation extraction performance in social media texts. Later, Zheng et al. (2021a)
proposed a multi-modal neural network containing a scene graph modeling the visual relations of objects and aligning the relations between objects and text via similarity and attention mechanism.
HVPNet (Chen et al., 2022) designs a visual prefixguided fusion for introducing object-level visual information and further utilizes hierarchical multiscaled visual features. Moreover, they introduce the information of all objects and thus cannot distinguish the truly helpful visual information, making it impractical to personalize the use of the image and further damaging their performance.
In the multi-modal relation extraction task, we note that images are naturally helpful information in the problem of multi-modal relation extraction. However, the potential for a differentiated use of image information in this task is under-explored. In this paper, we focus on the finer (intra-modal and inter-modal) association and manage to integrate truly useful visual information, promoting the exploitation of limited images. This enables bridging the gap through the transfer multi-modal relation extraction task into MLM pre-train mechanism (Devlin et al., 2019; Liu et al., 2021).
![2_image_0.png](2_image_0.png)
## 3 Problem Formulation
We provide the definition of MMRE. For a given sentence text T = {w1, w2*, . . . , w*L} with L words as well as the image I related to the sentence, and an entity pair (e1, e2), an MMRE model takes
(e1, e2*, T, I*) as input and calculates a confidence score p(ri|e1, e2*, T, I*) for each relation ri ∈ R to estimate whether T and I can reflect the relation ri between e1 and e2. R = {r1*, . . . , r*C, None} is a pre-defined relation set, where "None" means that no relation is maintained for the mentions.
## 4 Framework
This section introduces our proposed DGF-PT
framework, as shown in Figure 2. We first design the prompt-based autoregressive encoder to acquire the fine-grained representations (entity pair, object, and text), which contains two types of prefixes for integrating helpful information and characterizing intra-modal and inter-modal interactions. To avoid unhelpful visual information misleading the model, we design the dual-gated fusion module to distinguish the importance of image/objects via local and global gates. It also integrates the semantics of image transferred by Oscar (Li et al., 2020). The Fusion module outputs an enhanced representation.
Later, the generative decoder is proposed for relation prediction, leveraging the implicit associations and restricting candidate relations by introducing entity types. Finally, we design the joint objective including distribution-consistency constraint, self-identification enhancement, and relation classification for model optimization.
## 4.1 Prompt-Based Autoregressive Encoder
In order to acquire the finer granularity in the associations (entity pair, objects, and text), we propose a prompt-based autoregressive encoder. After initialization, two specific prefix-tuning strategies are implemented to guide the encoder to attend to task-relevant inter-/intra-modal associations. Subsequently, prefixes, objects, image, and text are progressively fed into an autoregressive encoder stage by stage to obtain fine-grained representations for use in subsequent fusion module (Section 4.2).
## 4.1.1 Initialization
Given text T, the word embeddings w ∈ R
1×N are obtained through the GPT-2 model (Radford et al.,
2019) and then fed into a fully-connection layer, where N is the word dimension. The initial text representation is T = [w1; w2; *. . .* ; wL] ∈ R
L×N .
Given an image I, the global image feature I ∈
RM×N is obtained by VGG16 (Simonyan and Zisserman, 2015) with a fully-connection layer, transferring the feature into M-block N-dimensional vectors. We then extract object features using Faster R-CNN (Ren et al., 2015) and select top-K objects using the ROI classification score. Each object feature is obtained by the average grouping of ROI regions. The initial object representation is O = [o1; o2; *. . .* ; oK] ∈ R
K×N .
## 4.1.2 Object & Entity Oriented Prefixes
Utilizing the advantages of prefix-tuning, the pretrained encoder (e.g., GPT-2 as our encoder) can be guided to learn task-specific features for fast adaptation to the MMRE task (Liu et al., 2021). However, the design of appropriate prefixes for finer associations learning in the MMRE task remains an open research question, and the direct use of prefixes from other tasks is not reasonable. Therefore, we construct two types of prefixes, an object-oriented prefix for inter-modal relevance (objects & text)
and an entity-oriented prefix for intra-modal correlations (entity pair & text), encouraging the encoder to leverage text as a medium to strengthen multigranular associations to acquire enhanced semantic representations.
Object-Oriented Prefix. Given that objects related to entities are indeed useful information for the MMRE task, we propose an object-oriented prefix, termed as Po(·), which provides guidance information of inter-modal relevance to the encoder.
For the input text T, we define the following pattern "Consider ⟨*objects*⟩, predict relation.", where
⟨*objects*⟩ means the objects relevant to the entity pair of T which is different for each input. It emphasizes specific key textual contents and introduces the visual features of relevant objects.
Entity-Oriented Prefix. Due to the visual information may be incomplete or misleading, we argue that only an object-oriented prefix is insufficient to capture classification information. Thus, we propose an entity-oriented prefix, termed as Pe(·), to capture intra-modal association to adapt the task.
We define the following pattern "Consider ⟨e1, e2⟩,
predict relation.", where ⟨e1, e2⟩ is the entity pair to predict the relation.
## 4.1.3 Multi-Stage Autoregressive Encoder
Prompt-based learning keeps the parameters of the whole PLM frozen and prepends the prefixes before the task inputs (Liu et al., 2021). The bidirectional encoder (e.g., BERT) cannot effectively integrate the proposed dual-gated fusion module (Section 4.2) in model testing. Therefore, we deploy a unidirectional encoder (e.g., GPT and GPT-2) and design multiple stages to integrate multi-granular textual and visual knowledge, where the prefixes, objects, image, and text are fed stage by stage.
First stage (S1). The input of the first stage S1 contains the prefixes and objects to learn the relevance from local granularity and obtain the representations of objects. To introduce task-related prefix knowledge, the two types of trainable prefixes Po(·) and Pe(·) are prepended before the input sequence as the prefix tokens, obtained through the GPT-2 vocabulary. In S1, the encoder learns the representations of objects and updates the prefix tokens of each model layer:
$$\begin{array}{l}{{\mathcal{P}_{o}^{*}(\mathbf{o}_{e_{1}},\mathbf{o}_{e_{2}}),\mathcal{P}_{e}^{*}(\mathbf{T}[e_{1}],\mathbf{T}[e_{2}]),\mathbf{h}_{o}=\mathcal{S}_{1}(\cdot)}}\\ {{=\mathrm{Encoder}(\mathcal{P}_{o}(\mathbf{o}_{e_{1}},\mathbf{o}_{e_{2}}),\mathcal{P}_{e}(\mathbf{T}[e_{1}],\mathbf{T}[e_{2}]),\mathbf{O})\,,}}\end{array}\tag{1}$$
where P∗
o(·) and P∗
e(·) are updated prefixes after S1, ho is the representations of objects, and oe1 and oe1 are the initial embeddings of entities e1 and e2. After the S1 stage, the object information is introduced into the prefix embedding.
Second stage (S2). The inputs of the second stage S2 are the outputs of the first stage S1 (including the updated prefixes and the representations of objects) and the image feature I to get the representations of images hi. We hope the model can learn to capture the inter-modal relevance from global granularity, which is useful information for relation extraction that may improve performance. Thus, we introduce the image information in S2. The S2 embedding is therefore updated as:
$$\mathbf{h}_{i}={\mathcal{S}}_{2}(\cdot)=\operatorname{Encoder}\left({\mathcal{S}}_{1}(\cdot),\mathbf{I}\right).$$
$$(2)^{\frac{1}{2}}$$
Third stage (S3). To learn text representation ht, the third stage inputs S2 and text T using interactive objects and images.
$$\mathbf{h}_{t}={\mathcal{S}}_{3}(\cdot)=\operatorname{Encoder}\left({\mathcal{S}}_{2}(\cdot),\mathbf{T}\right).$$
$$\left({\boldsymbol{3}}\right)$$
## 4.2 Dual-Gated Fusion
Unhelpful/Task-irrelevant information in image is often ignored by simply utilizing all objects for aggregation. To solve this, we propose dual-gated fusion to effectively integrate helpful visual information while filtering out misleading information. This module utilizes local and global gates to distinguish the importance and relevance of image/objects, and filters out task-irrelevant parts. By integrating semantic information of the image, a final fused representation containing associations among image, object, and text is obtained.
Specifically, the local object-importance gate vector β by the local object features and the global image-relevance gate vector γ by the global image features are calculated as:
$$\begin{array}{c}{{\alpha_{k}=\frac{\cos\left(\mathbf{h}_{t},\mathbf{h}_{o}[k]\right)}{\sum_{j=1}^{K}\cos\left(\mathbf{h}_{t},\mathbf{h}_{o}[j]\right)},}}\\ {{\beta=\mathrm{FC}_{\beta}(\overline{{{\mathbf{h}}}}_{o})=\mathrm{FC}_{\beta}\left(\sum_{k=1}^{K}\alpha_{k}\mathbf{h}_{o}[k]\right),}}\\ {{\gamma=\operatorname{tanh}\left(\mathrm{FC}_{\gamma}\left(\mathbf{h}_{i}\right)\right),}}\end{array}$$
(4) $$\begin{array}{l}\small\mathbf{(5)^{}}\end{array}$$ = $$\begin{array}{l}\small\mathbf{(6)^{}}\end{array}$$ .
where FC is fully-connected layer and ho[k] is the k-th object in the object set O. ho calculates attention between the selected top-K objects of divergent modalities. Subsequently, the textual characteristic of fusion h˜tis calculated by
$${\tilde{\mathbf{h}}}_{t}=\operatorname{MLP}\left(\mathbf{h}_{t}\odot{\boldsymbol{\gamma}}+{\boldsymbol{\beta}}\right)+\mathbf{h}_{t},$$
where MLP is multilayer perceptron and ⊙ means hadamard product.
In order to further integrate the semantics of visual information, we use Oscar (Li et al., 2020)
to transfer hiinto a text description h˜i2t for each image, using objects as anchor points to align visual and textual features in a common space. It learns multi-modal alignment information of entities from a semantic perspective. The detail is given in Appendix A.
While local representations can capture valuable clues, global features provide condensed contextual and high-level semantic information. Given this insight, we leverage the global information from one modality to regulate the local fragment of another modality, enabling the entity to contain semantic information and filter out irrelevant visual information. The final fused representation is:
$${\hat{\mathbf{h}}}_{t}={\hat{\mathbf{h}}}_{t}+{\hat{\mathbf{h}}}_{i2t}\odot\delta,$$
h˜t = h˜t + h˜i2t ⊙ δ, (8)
where δ is trade-off factor between text embedding h˜t and the inter-modal text representation h˜i2t.
## 4.3 Generative Decoder
To leverage the implicit associations and restrict candidate relations by introducing entity type, we design a generative decoder.
The type of entity pair is helpful for relation classification. For example, the relation of entity type *Person* and *Organization* must not be *born* and *friend*, but maybe CEO and *staff*. Thus, we introduce head type Tte1 and tail type Tte2 one by one to leverage the implicit associations and restrict candidate relations.
To maintain the consistency of the relation extraction task with the MLM pre-trained model, we use the generative decoder to predict the relation.
The prediction of the generative decoder is:
$$\mathbf{h}_{e_{1}}^{t},\mathbf{h}_{e_{2}}^{t},\mathbf{r}=\mathrm{Decoder}\left(\mathcal{S}_{3}(\cdot),\mathbf{T}_{e_{1}}^{t},\mathbf{T}_{e_{2}}^{t}\right),\tag{9}$$
where h te1
, h te2 are the representation of types, and r is the representation of relation.
## 4.4 Joint Objective
$$(7)$$
In order to address distribution consistency within the dual-gated fusion module, we introduce the distribution-consistency constraint loss, which is applied on a single-sample basis. Additionally, to meet the need for inter-sample identification, we propose self-identification enhancement loss. The overall joint objective is then formed by combining the relation classification loss with the aforementioned constraints.
Distribution-Consistency Constraint. In order to ensure the dual-gated fusion module effectively integrates helpful visual features while avoiding the introduction of task-irrelevant information, we introduce distribution-consistency constraint to measure and optimize the change in representation distribution pre and post-fusion. Thus, we propose to use KL divergence to measure the distance between the probability distribution of h˜t and ht, which is equal to calculating the cross-entropy loss over the two distributions:
$$\mathcal{L}_{d}(\theta)=\text{KL}\left(p_{\theta}(\mathbf{r}|\tilde{\mathbf{h}}_{t})\|p_{\theta}(\mathbf{r}|\mathbf{h}_{t})\right)\tag{10}$$ $$=\sum_{\mathbf{r}\in R}p_{\theta}(\mathbf{r}|\tilde{\mathbf{h}}_{t})\log p_{\theta}(\mathbf{r}|\mathbf{h}_{t}).\tag{11}$$
$$({\boldsymbol{\delta}})$$
Self-Identification Enhancement. The MMRE
task requires the model to have the ability to correctly classify relations from individual samples.
However, relation labels are unevenly distributed or lacking in the real world. Therefore, further enhancement is needed. We design a negativesampling-based self-supervised loss function to enhance the model. Moreover, the dual-gated fusion module is treated as the augmentation function leveraging the modality information. Specifically, textual representation ht and fused representation h˜t are the mutually positive samples:
$$\mathcal{L}_{s}\!=\![s(x,\tilde{x})-s(x_{n},\tilde{x})]_{+}\!+\![s(x,\tilde{x})-s(x,\tilde{x}_{n})]_{+}\,,\tag{12}$$ where $\{x,\tilde{x}\}$ are $\{\mathbf{h}_{t},\tilde{\mathbf{h}}_{t}\}$, $[a]_{+}=\max(a,0)$, and $\{\mathbf{h}_{t},\tilde{\mathbf{h}}_{t}\}$ is the $\mathbf{h}_{t}$-norm.
s(·, ·) is the cosine similarity. xn and x˜n are the
hardest negatives of ht and h˜tin a mini-batch
based on a similarity-based measurement.
Relation Classification. The loss for relation classification by the negative log-likelihood function is as follows:
$${\mathcal{L}}_{\mathbf{c}}=-\log p(\mathbf{r}|\mathbf{h}_{t},\mathbf{h}_{i},\mathbf{h}_{t}[e_{1}],\mathbf{h}_{t}[e_{2}]),$$
$$(13)$$
| Model Type | Model Name | Acc. (%) | Prec. (%) | Recall (%) | F1 (%) |
|------------------------------------|------------------------------|-------------------|-------------------|-------------------|------------------|
| Glove+CNN (Zeng et al., 2014) | 70.32 | 57.81 | 46.25 | 51.39 | |
| Text-based RE Models | PCNN (Zeng et al., 2015) | 72.67 | 62.85 | 49.69 | 55.49 |
| MTB (Soares et al., 2019) | 72.73 | 64.46 | 57.81 | 60.96 | |
| BERT+SG (Devlin et al., 2019) | 74.09 | 62.95 | 62.65 | 62.80 | |
| BERT+SG+Att. (Devlin et al., 2019) | 74.59 | 60.97 | 66.56 | 63.64 | |
| MMRE Models | VisualBERT (Li et al., 2019) | - | 57.15 | 59.45 | 58.30 |
| MEGA (Zheng et al., 2021a) | 76.15 | 64.51 | 68.44 | 66.41 | |
| HVPNet (Chen et al., 2022) | - | 83.64 | 80.78 | 81.85 | |
| DGF-PT (BERT Encoder) | 79.82 ( ↑ 3.67 ) | 79.72 ( ↑ -3.92 ) | 78.63 ( ↑ -2.15 ) | 79.24 ( ↑ -2.61 ) | |
| Ours | DGF-PT (GPT Encoder) | 82.03 ( ↑ 5.88 ) | 81.23 ( ↑ -2.41 ) | 82.48 ( ↑ 1.70 ) | 82.09 ( ↑ 0.24 ) |
| DGF-PT (GPT-2 Encoder) | 84.25 ( ↑ 8.10 ) | 84.35 ( ↑ 0.71 ) | 83.83 ( ↑ 3.05 ) | 84.47 ( ↑ 2.62 ) | |
where r is the relation between the head entity e1 and the tail entity e2 for hx, and ht[e1], ht[e2] are the representations of the two entities. Finally, the overall loss function of our model is as follows.
$${\mathcal{L}}=\lambda_{d}{\mathcal{L}}_{d}+\lambda_{s}{\mathcal{L}}_{s}+\lambda_{c}{\mathcal{L}}_{c},\qquad(14)$$
where λd, λs, and λc are trade-off parameters. We optimize all training inputs in a mini-batch strategy.
## 5 Experiment 5.1 Dataset And Evaluation Metric
We conduct experiments in a multi-modal relation extraction dataset MNRE (Zheng et al., 2021b),
crawling data from Twitter2. The MNRE includes 15, 484 samples and 9, 201 images. It contains 23 relation categories. As previous work (Zheng et al., 2021a) recommended, the MNRE dataset is divided into 12, 247 training samples, 1, 624 development samples, and 1, 614 testing samples. We report the official Accuracy (Acc.), Precision (Prec.), Recall, and F1 metrics for relation evaluation.
## 5.2 Comparision Methods
We compare our method with three text-based RE
models and five MMRE models.
Text-based RE Models. We first consider a group of representative text-based RE models, which do not introduce image information, for modeling the connection of words in the sentence: (1)
Glove+CNN (Zeng et al., 2014) is a CNN-based model with additional position embeddings to utilize the position association. (2) PCNN (Zeng et al.,
2015) is a RE method utilizing external knowledge graph with a distant supervision manner to build 2https://archive.org/details/twitterstream connection by the graph. (3) Matching the Blanks
(MTB) (Soares et al., 2019) is a BERT-based RE
model to learn context correlation.
MMRE Models. We further consider another group of previous approaches for MMRE to integrate visual information: (4) **BERT+SG** (Devlin et al., 2019) concatenates BERT representations with visual content, which is obtained by the pre-trained scene graph tool (Tang et al., 2020)
to learn the connection between text and the object of the image. (5) **BERT+SG+Att** adopts an attention mechanism to compute the relevance between the textual and visual features. (6) **VisualBERT** (Li et al., 2019) is a single-stream encoder, learning cross-modal correlation in a model. (7)
MEGA (Zheng et al., 2021a) considers the relevance from the structure of objects in the image and semantics of text perspectives with graph alignment. (8) **HVPNet** (Chen et al., 2022) introduces an object-level prefix with a dynamic gated aggregation strategy to enhance the correlation between all objects and text.
In contrast to these methods, our approach incorporates the correlation between entity pairs, text, and visual information, and effectively identifies useful visual information.
## 5.3 Implementation Details
For all baselines, we adopt the best hyperparameters and copy results reported in the literature (Zheng et al., 2021a,b; Chen et al., 2022).
We used PyTorch3as a deep learning framework to develop the MMRE. The BERT and GPT-24are for text initialization and the dimension is set at 3https://pytorch.org/
4https://github.com/huggingface 768. The VGG version is VGG165. We use Faster R-CNN (Ren et al., 2015) for image initialization and set the dimension of visual objects features at 4096. For hyper-parameters, the best coefficients λd, λs, λc are 2, 2 and 3. The best δ is 0.4. See Appendix B for more details on model training.
## 5.4 Main Results
To verify the effectiveness of our model, we report the overall average results in Table 1.
From the table, we can observe that: 1) Our model outperforms text-based RE models in terms of four evaluation metrics, indicating the beneficial impact of visual information on relation extraction and the necessity of its integration. 2) Compared to MMRE baselines, our model achieves the best results. Specifically, our model improves at least 2.62% in F1 and 8.10% in Acc., respectively. These results indicate that our method for incorporating and utilizing visual information is superior and effective. 3) Compared to different encoders (e.g.,
BERT and GPT), the GPT and GPT-2 achieve better results. It demonstrates that the generative encoder can integrate effective visual features more effectively, which is more suitable for the task. For the generative model, the performance is sensitive to the order of input. Thus, we discuss the effect of the order of text, image, and objects in Appendix C.
## 5.5 Discussion For Model Variants
For a further detailed evaluation of the components of our framework, we performed ablation experiments and reported the results in Table 2. E-P
means entity-oriented prefix and O-P means objectoriented prefix. "↓" means the average decrease of all four metrics compared to our model.
Discussions for core module. To investigate the effectiveness of each module, we performed variant experiments, showcasing the results in Table 2.
From the table, we can observe that: 1) the impact of the prefixes tends to be more significant.
We believe the reason is that the multiple prompts characterize modality interactions, helping for providing more visual clues. 2) By removing each module, respectively, the performance basically decreased. Compared to joint objective modules, the dual-gated fusion is significantly affected. It demonstrates the effectiveness of knowledge fusion introducing useful visual content and addressing
| Variants | Acc. | Prec. | Recall | F1 | △ Avg |
|-----------------------------|--------|---------|----------|-------|---------|
| DGF-PT (Ours) | 85.25 | 84.35 | 83.83 | 84.47 | - |
| w/o All Prefixes | 83.32 | 82.93 | 83.42 | 82.35 | ↓ 1.47 |
| w/o E-P | 84.62 | 83.31 | 82.83 | 82.63 | ↓ 1.13 |
| w/o O-P | 84.05 | 83.20 | 83.29 | 83.39 | ↓ 0.99 |
| w/o dual-gated fusion | 84.24 | 83.56 | 82.74 | 83.26 | ↓ 1.03 |
| w/o joint objective | 84.49 | 83.90 | 83.26 | 84.05 | ↓ 0.55 |
| repl. All Prefixes in S2 | 83.29 | 82.40 | 83.17 | 82.24 | ↓ 1.70 |
| repl. All Prefixes in S3 | 84.10 | 83.24 | 82.02 | 82.29 | ↓ 1.56 |
| repl. E-P in S2 | 84.28 | 83.20 | 82.41 | 83.93 | ↓ 1.02 |
| repl. E-P in S3 | 84.37 | 83.82 | 83.03 | 83.41 | ↓ 0.82 |
| repl. O-P in S2 | 84.12 | 83.71 | 83.15 | 83.84 | ↓ 0.77 |
| repl. O-P in S3 | 84.03 | 83.65 | 82.20 | 83.73 | ↓ 1.07 |
| repl. E-P in S2 & O-P in S3 | 84.09 | 83.43 | 82.81 | 84.15 | ↓ 0.86 |
| repl. E-P in S3 & O-P in S2 | 84.76 | 84.24 | 83.38 | 84.20 | ↓ 0.33 |
noise visual data. All observations demonstrate the effectiveness of each component in our model.
Discussions for the stage of prefix. We explore the effects by introducing the prefixes at a different stage in the encoder, as shown in Table 2. From the table, we can observe that: 1) Compared to feed all Prefixes in the S2, S3 stage, the S1 stage is more effective. It demonstrates that early introducing prefixes may integrate more helpful visual classification information. 2) When the O-P is fixed, and the E-P is fed into the S3 stage, the performance of our model is the best compared to that introduced in S2 stages. It demonstrates that the E-P is set nearly to the text features helpful to introduce intramodal association. 3) When we fix the E-P, and the O-P is introduced in the S2 stage, which is fed nearly to objects, the performance achieves better than S3. It demonstrates that the O-P is nearly to object features, which can capture more useful local information for relation classification. 4) When we change the stage of the two prefixes, the performance achieves better as the E-P in S3 and the O-P
in S2. All observations demonstrate that "E-P in S1 & O-P in S1" is the best schema to introduce intra-modal association and inter-modal relevance.
## 5.6 Discussions For Image Information
To further investigate the impact of images on all the compared methods, we report the results by deleting different proportions of images, as shown in Figure 3. From the figure we can observe that:
1) On all metrics, the more proportion of images
![7_image_0.png](7_image_0.png)
were introduced, the better the model performed.
It demonstrates that more images provide more meaningful information for relation classification and utilize visual information more effectively. 2)
Compared to other methods, our model performs best (except for HVPNet in introducing only 0%-
20% percentage image data). It demonstrates that without visual information, our model is still more capable to capture intra-modal associations for relation classification. 3) Compared to the HVPNet with an object-level prefix, our model performs poorly with fewer visual data. The main reason is that the prefix is outstanding in the few-shot situation and incorporating deeper correlations of our model needs enough visual information compared to the prefix-based method. Our model performs better than HVPNet with the visual data increase.
Observations indicate that our model incorporates visual features more effectively.
## 5.7 Discussions For Sample Number
We investigate the impact of the sample number of different relations. To do so, we divide the dataset into multiple blocks based on the sample number of each relation and evaluate the performance by varying the sample number of relations in [0, 1000]
compared with the outstanding baselines, as shown in Figure 4. From the figure, we can observe that:
1) The increasing of sample number performance improvements to all methods. The main reason is that the smaller the sample number, the more difficult it is to distinguish the relation. 2) Our model could also advance the baseline methods with the decrease in sample number, demonstrating the superiority of our method in tackling the relation with fewer samples. This phenomenon confirms the prefixes are suitable for few-shot situations. All the observations demonstrate that our method reduces the impact of the sample number.
![7_image_1.png](7_image_1.png)
Text Input: *P.Wilson* and *Dagmara* went to the Boundaries *Premiere* to support *Vera*, ...
![7_image_2.png](7_image_2.png)
Truth Relation MEGA BERT+SG+Att. Prediction *Result*
×
× ×
√ √ √
HVPNet
×
√ √
## 5.8 Case Study
To illustrate our model can effectively identify useful visual information, we provide an example involving various entity pairs. As shown in Fig. 5, the helpful information varies depending on the entity pair. From the figure, we can observe that: 1)
Our model achieves superior performance across different entity pairs, demonstrating its ability to effectively extract useful visual information while avoiding the negative influence of unhelpful information on prediction. 2) When presented with the entity pair of *Vera* and *P.Wilson* that contains limited useful visual information, our model remains the best, while other baselines make incorrect predictions. These observations further demonstrate the effectiveness of our model in leveraging visual information while avoiding the negative influence of unhelpful information on predictions.
## 6 Conclusion
We propose DGF-PT, a novel multi-modal relation extraction framework, to capture deeper correlations among entity pair, text, and image/objects and integrate more helpful information for relation extraction. Our framework effectively integrates intra-modal and inter-modal features, distinguishes helpful visual information, and restricts candidate relations. Extensive experiments conducted on the benchmark dataset show that our approach achieves excellent performance.
## Limitations
Our work overcomes visual noise data that limit extraction performance, incorporating multi-modal knowledge of different levels. Empirical experiments demonstrate that our method avoids noise data misleading the MMRE model. However, there are still some limitations of our approach can be summarized as follows:
- Due to the limitation of the existing MMRE
datasets, we only experiment on two modalities to explore the influence of image features. We will study more modalities in future work.
- Our method neglects the multiple relations for an input, which may not consider the multiple semantics of entities. We leave the multiple relation extraction method for future work.
## Ethics Statement
In this work, we propose a new MMRE framework that captures deeper correlations and fuses helpful visual information to benchmark our architecture with baseline architectures on the MNRE dataset.
Data Bias. Our framework is designed for multimodal relation extraction for Twitter data. However, when applied to data with vastly different distributions or in new domains, the model's performance may be biased. The results reported in the experiment section are based on specific benchmark datasets, which may be affected by these biases. Therefore, caution should be taken when evaluating the generalizability and fairness.
Computing Cost/Emission. Our research, which entails the utilization of large language models, necessitates a significant computational burden. We recognize that this computational burden results in a negative environmental impact in terms of carbon emissions. Specifically, our work required a cumulative 425 GPU hours of computation utilizing Tesla V100 GPUs. The total emissions generated by this computational process are estimated to be 47.18 kg of CO2 per run, with a total of two runs being performed.
## Acknowledgment
We thank the anonymous reviewers for their insightful comments and suggestions. Jianxin Li is the corresponding author. The authors of this paper were supported by the NSFC through grant No.U20B2053, 62106059.
## References
Xiang Chen, Ningyu Zhang, Lei Li, Yunzhi Yao, Shumin Deng, Chuanqi Tan, Fei Huang, Luo Si, and Huajun Chen. 2022. Good visual guidance make A
better extractor: Hierarchical visual prefix for multimodal entity and relation extraction. In *NAACL*,
pages 1607–1618.
Xin Cong, Jiawei Sheng, Shiyao Cui, Bowen Yu, Tingwen Liu, and Bin Wang. 2022. Relation-guided few-shot relational triple extraction. In *SIGIR '22:*
The 45th International ACM SIGIR Conference on Research and Development in Information Retrieval, Madrid, Spain, July 11 - 15, 2022, pages 2206–2213.
ACM.
Xin Cong, Bowen Yu, Tingwen Liu, Shiyao Cui, Hengzhu Tang, and Bin Wang. 2020. Inductive unsupervised domain adaptation for few-shot classification via clustering. In *Machine Learning and Knowledge Discovery in Databases - European Conference,*
ECML PKDD 2020, Ghent, Belgium, September 1418, 2020, Proceedings, Part II, volume 12458 of Lecture Notes in Computer Science, pages 624–639.
Springer.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: pre-training of deep bidirectional transformers for language understanding. In *NAACL-HLT*, pages 4171–4186.
Shwai He, Liang Ding, Daize Dong, Boan Liu, Fuqiang Yu, and Dacheng Tao. 2022. Cherry hypothesis: Identifying the cherry on the cake for dynamic networks.
CoRR, abs/2211.05528.
Liunian Harold Li, Mark Yatskar, Da Yin, Cho-Jui Hsieh, and Kai-Wei Chang. 2019. Visualbert: A simple and performant baseline for vision and language.
CoRR, abs/1908.03557.
Xiujun Li, Xi Yin, Chunyuan Li, Pengchuan Zhang, Xiaowei Hu, Lei Zhang, Lijuan Wang, Houdong Hu, Li Dong, Furu Wei, Yejin Choi, and Jianfeng Gao.
2020. Oscar: Object-semantics aligned pre-training for vision-language tasks. In *ECCV*, volume 12375 of *Lecture Notes in Computer Science*, pages 121–
137.
Pengfei Liu, Weizhe Yuan, Jinlan Fu, Zhengbao Jiang, Hiroaki Hayashi, and Graham Neubig. 2021. Pretrain, prompt, and predict: A systematic survey of prompting methods in natural language processing.
CoRR, abs/2107.13586.
Ilya Loshchilov and Frank Hutter. 2019. Decoupled weight decay regularization. In *ICLR*.
Yaojie Lu, Qing Liu, Dai Dai, Xinyan Xiao, Hongyu Lin, Xianpei Han, Le Sun, and Hua Wu. 2022. Unified structure generation for universal information extraction. In ACL, pages 5755–5772.
Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. 2019. Language
models are unsupervised multitask learners. OpenAI
blog, 1(8):9.
Shaoqing Ren, Kaiming He, Ross B. Girshick, and Jian Sun. 2015. Faster R-CNN: towards real-time object detection with region proposal networks. In *NeurIPS*,
pages 91–99.
Kevin J Shih, Saurabh Singh, and Derek Hoiem. 2016.
Where to look: Focus regions for visual question answering. In *CVPR*, pages 4613–4621.
Karen Simonyan and Andrew Zisserman. 2015. Very deep convolutional networks for large-scale image recognition. In *ICLR*.
Livio Baldini Soares, Nicholas FitzGerald, Jeffrey Ling, and Tom Kwiatkowski. 2019. Matching the blanks:
Distributional similarity for relation learning. In ACL, pages 2895–2905.
Dianbo Sui, Zhengkun Tian, Yubo Chen, Kang Liu, and Jun Zhao. 2021. A large-scale chinese multimodal NER dataset with speech clues. In *ACL/IJCNLP*,
pages 2807–2818.
Lin Sun, Jiquan Wang, Kai Zhang, Yindu Su, and Fangsheng Weng. 2021. Rpbert: A text-image relation propagation-based BERT model for multimodal NER.
In *AAAI*, pages 13860–13868.
Kaihua Tang, Yulei Niu, Jianqiang Huang, Jiaxin Shi, and Hanwang Zhang. 2020. Unbiased scene graph generation from biased training. In *CVPR*, pages 3713–3722.
Jiaan Wang, Fandong Meng, Ziyao Lu, Duo Zheng, Zhixu Li, Jianfeng Qu, and Jie Zhou. 2022. Clidsum:
A benchmark dataset for cross-lingual dialogue summarization. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, EMNLP 2022, Abu Dhabi, United Arab Emirates, December 7-11, 2022, pages 7716–7729. Association for Computational Linguistics.
Meng Wang, Guilin Qi, HaoFen Wang, and Qiushuo Zheng. 2019. Richpedia: a comprehensive multimodal knowledge graph. In *Joint International Semantic Technology Conference*, pages 130–145.
Fuzhao Xue, Aixin Sun, Hao Zhang, Jinjie Ni, and EngSiong Chng. 2022. An embarrassingly simple model for dialogue relation extraction. In *ICASSP*, pages 6707–6711.
Daojian Zeng, Kang Liu, Yubo Chen, and Jun Zhao.
2015. Distant supervision for relation extraction via piecewise convolutional neural networks. In *EMNLP*,
pages 1753–1762.
Daojian Zeng, Kang Liu, Siwei Lai, Guangyou Zhou, and Jun Zhao. 2014. Relation classification via convolutional deep neural network. In *COLING*, pages 2335–2344.
Changmeng Zheng, Junhao Feng, Ze Fu, Yi Cai, Qing Li, and Tao Wang. 2021a. Multimodal relation extraction with efficient graph alignment. In *ACM MM*,
pages 5298–5306.
Changmeng Zheng, Zhiwei Wu, Junhao Feng, Ze Fu, and Yi Cai. 2021b. MNRE: A challenge multimodal dataset for neural relation extraction with visual evidence in social media posts. In *ICME*, pages 1–6.
Luowei Zhou, Hamid Palangi, Lei Zhang, Houdong Hu, Jason J. Corso, and Jianfeng Gao. 2020. Unified vision-language pre-training for image captioning and VQA. In *AAAI*, pages 13041–13049.
Xiangru Zhu, Zhixu Li, Xiaodan Wang, Xueyao Jiang, Penglei Sun, Xuwu Wang, Yanghua Xiao, and Nicholas Jing Yuan. 2022. Multi-modal knowledge graph construction and application: A survey. arXiv preprint arXiv:2202.05786.
## A Oscar For Image Caption Generation
To generate the text description of the image for multi-modal knowledge alignment without additional pre-training on multi-modal relation extraction, we directly utilize the image captioning method, generating a natural language description of the content of an image. In this paper, we use Oscar (Object-Semantics Aligned Pre-training) (Li et al., 2020) to transfer the image into a text description for each image, which integrates multi-modal alignment information of entities from a semantic perspective.
Oscar uses object tags detected in images as anchor points to significantly facilitate alignment learning. Input samples are processed into triples involving image region features, captions, and object tags similar to the pre-training. It randomly masks 15% of caption tokens and uses the corresponding output representations to perform a classification to predict tokens. Similarly to VLP (Zhou et al., 2020), the self-attention mask is constrained so that a caption token can only attend to the tokens before its position to simulate a uni-directional generation process. It eases the learning of semantic alignments between images and texts on the public corpus of 6.5 million text-image pairs, creating new state-of-the-art on the image caption task. Thus, we use Oscar to integrate useful images by transferring them into textual descriptions.
## B Hyper-Parameter Settings
Our implementation is based on PyTorch6. All experiments were carried out on a server with one GPU (Tesla V100). For re-implementation, we report our hyper-parameter settings on the dataset in Table 3. Note that the hyper-parameter settings are tuned in the validation data by grid search with 5 trials. The learning rate is 2e − 4, the batch size is 100, and the dropout rate is 0.6. We use AdamW (Loshchilov and Hutter, 2019) to optimize the parameters. The maximum length of the text is 128 and the objects of each image are 10. For the learning rate, we adopt the method of grid search with a step size of 0.0001.
## C Discussions For Input Order
Due to utilizing a generative encoder, where the prefix, object, image, and text are input stage by stage, thus the order affects the performance of 6https://pytorch.org/
| Hyper-parameter | MNRE dataset |
|---------------------------|----------------|
| word embedding dimension | 768 |
| image embedding dimension | 4,096 |
| dropout rate | 0.6 |
| batch size | 100 |
| training epoch | 20 |
| maximum length of text | 128 |
| learning rate | 2e − 4 |
| threshold λd | 2 |
| threshold λs | 2 |
| threshold λc | 3 |
| threshold δ | 0.4 |
Table 3: Hyper-parameter settings of DGF-PT.
| Variants | Acc. (%) | Prec. (%) | Recall (%) | F1 (%) |
|-----------------------|------------|-------------|--------------|----------|
| DGF-PT (Io → Ii → It) | 85.25 | 84.35 | 83.83 | 84.47 |
| Io → It → Ii | 84.64 | 83.32 | 82.09 | 83.53 |
| Ii → Io → It | 84.92 | 83.17 | 82.24 | 84.85 |
| Ii → It → Io | 84.85 | 83.38 | 83.70 | 84.26 |
| It → Io → Ii | 83.01 | 82.96 | 82.61 | 82.73 |
| It → Ii → Io | 82.27 | 82.04 | 81.75 | 82.29 |
Table 4: Impact of the input order of the image Ii, objects Io, and text It.
the model. As shown in Table 4, we exploit the best input order for multi-modal relation extraction. From the figure, we can observe that: 1) Our model is affected by the input order of text, image, and objects. The reason we think that prompt-based autoregressive encoder is a more efficient way to integrate multi-grained information. 2) The best input order is Io → Ii → It. Furthermore, when the text Itis input before others, the performance of our model dramatically decreases. It demonstrates that visual information is fed before textual information usually integrating more helpful extraction knowledge.
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
✓ A1. Did you describe the limitations of your work?
We discuss this in Section Limitations and Ethics Statement.
✓ A2. Did you discuss any potential risks of your work?
We discuss this in Section Limitations and Ethics Statement.
✓ A3. Do the abstract and introduction summarize the paper's main claims?
We summarize at the end of Section 1.
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✓ **Did You Use Or Create Scientific Artifacts?**
We use Oscar, VGG, and Faster R-CNN in Section 4.
✓ B1. Did you cite the creators of artifacts you used?
We provide the reference for all artifacts.
✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
In section 5.1.
B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
Not applicable. Left blank.
B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
Not applicable. Left blank.
B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
Not applicable. Left blank.
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
In section 5.1.
## C ✓ **Did You Run Computational Experiments?** In Section 5.
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
In section 5.3 and Appendix B.
The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance.
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
In Section 5 and Appendix C.
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
In Section 5.
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
In Section 5.
D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
Not applicable. Left blank.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? Not applicable. Left blank.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
Not applicable. Left blank.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
Not applicable. Left blank.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
Not applicable. Left blank. |
ren-zhu-2023-pruning | Pruning Pre-trained Language Models with Principled Importance and Self-regularization | https://aclanthology.org/2023.findings-acl.573 | Iterative pruning is one of the most effective compression methods for pre-trained language models. We discovered that finding the optimal pruning decision is an equality-constrained 0-1 Integer Linear Programming problem. The solution to this optimization problem leads to a principled importance criterion which we use to rank parameters during iterative model pruning. To mitigate the poor generalization at high sparsity levels, we propose a self-regularization scheme where model prediction is regularized by the latest checkpoint with increasing sparsity throughout pruning. Our experiments on natural language understanding, question answering, named entity recognition, and data-to-text generation with various Transformer-based PLMs show the effectiveness of the approach at various sparsity levels. | # Pruning Pre-Trained Language Models With Principled Importance And Self-Regularization
Siyu Ren Kenny Q. Zhu∗
Shanghai Jiao Tong University Shanghai, China [email protected], [email protected]
## Abstract
Iterative pruning is one of the most effective compression methods for pre-trained language models. We discovered that finding the optimal pruning decision is an equalityconstrained 0-1 Integer Linear Programming problem. The solution to this optimization problem leads to a principled importance criterion which we use to rank parameters during iterative model pruning. To mitigate the poor generalization at high sparsity levels, we propose a self-regularization scheme where model prediction is regularized by the latest checkpoint with increasing sparsity throughout pruning. Our experiments on natural language understanding, question answering, named entity recognition, and datato-text generation with various Transformerbased PLMs show the effectiveness of the approach at various sparsity levels.
## 1 Introduction
Pre-trained language models (PLMs) (Devlin et al., 2019; Radford et al., 2018) have significantly advanced the state-of-the-art in various natural language processing tasks (Wang et al., 2018; Zhou and Lampouras, 2020; Dušek et al., 2020; Radev et al., 2020). However, these models often contain a vast amount of parameters, posing nontrivial requirements for storage and computation.
Due to this inefficiency, the applications of PLMs in resource-constrained scenarios are still limited.
To resolve the above challenge, model compression (Sun et al., 2019; Ben Noach and Goldberg, 2020; Lan et al., 2020) has been actively studied to make PLMs meet the practical requirement. Among them, iterative pruning methods are widely adopted at only a tiny expense of model performance when adapting PLMs to downstream tasks. During the course of iterative pruning, model parameters can not only be updated but also
∗ The corresponding author.
be pruned based on the rank of their importance scores in order to satisfy the cardinality constraint.
Prevalent importance criteria are based on the parameter's magnitude (Zhu and Gupta, 2017; Renda et al., 2020) or sensitivity (Louizos et al., 2018; Sanh et al., 2020; Liang et al., 2021; Zhang et al., 2022). Parameters with low importance scores are pruned and are expected to have little impact on model performance.
Despite the empirical success, existing importance criteria for model pruning still face two major limitations: (1) they are heuristically defined and may not accurately quantify a parameter's contribution to the learning process, e.g., absolute weight value in magnitude-based pruning and gradient-weight product in sensitivity-based pruning; (2) they determine the importance of each parameter individually without considering the effect of coinstantaneous parameter updates on model performance, e.g., sensitivity is estimated by the absolute change in training error if only a single parameter is pruned and others remain unchanged.
In this paper, we rethink the design of the importance criterion for model pruning from an optimization perspective. We begin by analyzing the temporal variation of any given learning objective based on a single-step gradient descent update under the iterative pruning setting. We show that finding the optimal pruning decision can be framed as solving an equality-constrained 0-1 Integer Linear Programming (ILP) problem, where the constraint is defined by the specified sparsity.
The resulting problem is a particular case of a general 0-1 Knapsack problem in which the weight for each item is the same. The solution to this problem naturally leads to a principled importance criterion which we use to rank all model parameters and derive the optimal stepwise pruning decision.
When a high sparsity (e.g., 80%∼90%) is pursued, the limited capacity often renders the pruned model fails to retain satisfactory performance with conventional fine-tuning. To further improve the model's generalization ability, we propose a selfregularization scheme, where the model prediction is regularized by the latest best-performing model checkpoint during pruning. We show that such a scheme eases model learning with decreasing capacity and effectively yields a tighter upper bound of expected generalization error than learning from training data alone.
To validate the effectiveness of our approach, dubbed PINS (Pruning with principled Importance aNd Self-regularization), we conducted extensive experiments with various pre-trained language models on a wide variety of tasks, including natural language understanding on GLUE (Wang et al.,
2018)), question answering on SQuAD (Rajpurkar et al., 2016), named entity recognition on CoNLL
2003 (Tjong Kim Sang and De Meulder, 2003), and data-to-text generation on WebNLG (Zhou and Lampouras, 2020), DART (Radev et al.,
2020), and E2E (Dušek et al., 2020). Experimental results show that PINS provides more accurate models at different sparsity levels. Detailed analysis shed further light on some intriguing properties of models pruned by PINS. By exploiting the resulting high sparsity, we show that the storage/inference can be reduced/accelerated by 8.9x and 2.7x using CSR format and a sparsityaware inference runtime (Kurtz et al., 2020) on consumer-level CPUs 1.
In summary, our contributions are:
- We establish the equivalence between the optimal pruning decision and the solution to an equality-constrained 0-1 Integer Linear Programming problem. The solution to this problem leads to a principled importance criterion that can be used to rank parameters during iterative pruning.
- We propose a simple yet effective selfregularization scheme to enhance the model's generalization capability, especially under a high-sparsity regime.
- Comprehensive experiments and analyses confirm the effectiveness of our approach at various sparsity levels.
## 2 Background And Related Work
In this section, we review the necessary background on Transformer-based pre-trained language models and popular importance criteria for iterative pruning.
## 2.1 Transformer-Based Pre-Trained Language Models
Most existing pre-trained neural language models (Radford et al., 2018; Devlin et al., 2019; Wang et al., 2020; Clark et al., 2020) are based on the Transformer (Vaswani et al., 2017) architecture, which consists of several identical blocks of self-attention and feedforward network. After pre-training on a massive amount of unlabeled general-domain corpus in a self-supervised learning manner, these models exhibit superior performance on various downstream tasks via finetuning. However, good generalization performance comes at the cost of a vast amount of parameters. For example, the base version of BERT has 110M parameters and leads to more than 400MB of disk storage. Therefore, how to effectively reduce model size while preserving as much task accuracy as possible remains a challenging research problem.
## 2.2 Iterative Pruning
Pruning methods can be divided into two categories: one-shot pruning (Lee et al., 2018; Frankle and Carbin, 2018) and iterative pruning (Louizos et al., 2018; Sanh et al., 2020; Zhang et al., 2022).
One-shot pruning removes parameters of low importance after training. It is efficient but ignores the complicated training dynamics when applied to modern large neural language models. On the contrary, iterative pruning performs training and pruning simultaneously. Therefore, the resulting sparsity pattern is aware of the complex dynamics of parameters through the course of training and delivers considerable improvement compared to one-shot pruning.
Let θ
(t) = {θ
(t)
1θ
(t)
2
, ..., θ(t)
d} denote the ddimensional model parameters at t-th training iteration, the typical updating rule of iterative pruning can be formulated as:
$$\hat{\mathbf{\theta}}^{(t+1)}=\mathbf{\theta}^{(t)}-\eta^{(t)}\nabla_{\mathbf{\theta}}{\mathcal{L}}(\mathbf{\theta}^{(t)})\qquad\qquad(1)$$ $$\mathbf{\theta}^{(t+1)}=\hat{\mathbf{\theta}}^{(t+1)}\odot\mathbf{M}^{(t)}\qquad\qquad(2)$$
where η
(t)is the learning rate at time step t and L
is the learning objective. The temporarily updated θˆ(t+1) is further pruned by the binary mask M(t)∈
{0, 1}
d, which is computed based on a given importance criterion S
(t):
$$M_{i}^{(t)}=\begin{cases}1,&\text{if}\ \ S_{i}^{(t)}\text{is in the top-}r^{(t)}\text{of}\ S^{(t)}\\ 0,&\text{otherwise}\end{cases}\tag{3}$$
$$({\mathfrak{I}})$$
where r
(t)≤ d indicates the number of remaining parameters at time step t according to a given sparsity scheduler.
## 2.3 Importance Criteria For Model Pruning
Popular importance criteria for model pruning include parameters' magnitude and sensitivity.
Magnitude is a simple yet effective importance criterion that is widely used for model pruning. It estimates the importance of each parameter as its absolute value, i.e., S
(t)
i = |θ
(t)
i|. Despite its simplicity, the magnitude cannot accurately gauge the importance of parameters because even parameters with small magnitude can have a large impact on the model prediction due to the complex compositional structure of PLMs.
Sensitivity is another useful importance criterion. It estimates the importance of each parameter as the absolute change of the learning objective if the parameter is pruned, i.e., set to zero. The mathematical formulation of the sensitivity of i-th parameter is given by:
$$\begin{array}{c}{{{\bf S}_{i}^{(t)}=|{\cal L}(\mathbf{\theta}_{-i}^{(t)})-{\cal L}(\mathbf{\theta}^{(t)})|}}\\ {{\qquad\qquad\approx|\mathbf{g}_{i}^{(t)}\mathbf{\theta}_{i}^{(t)}|}}\end{array}$$
where $\theta^{(t)}_{-i}$ is identical to $\theta^{(t)}$ except that the $i$-th
−i entry is set to zero and g
(t)
iis the gradient of i-th entry. Though taking the training dynamics into account, sensitivity still estimates the importance of each parameter individually without considering the effect of holistic parameter update.
## 3 Methodology
Instead of heuristically defining the importance criterion as in prior pruning methods, we take a step back and rethink the design of the importance criterion for model pruning from an optimization perspective. From our analysis, we draw an equivalence between finding the optimal stepwise pruning decision and solving an equality-constrained 0-1 Integer Linear Programming problem. We further show that the optimal solution to this problem leads to a new importance criterion for model pruning. Moreover, we propose a simple yet effective self-regularization scheme to facilitate the generalization ability of the sparse model. We elucidate our analysis in Section 3.1 and describe our self-regularization scheme in Section 3.2.
## 3.1 Rethinking Importance Criterion From The Optimization Perspective
Without loss of generality, we denote L as the learning objective when adapting a pre-trained language model f with parameter θ to a downstream task. At t-th training iteration, we denote the current model parameters as θ
(t)and the evaluated learning objective as L(θ
(t)).
The temporal variation of the learning objective L(θ
(t)) at time step t is given by the second-order Taylor series expansion:
$$\begin{array}{c}{{\Delta{\mathcal{L}}^{(t)}={\mathcal{L}}(\mathbf{\theta}^{(t)}+\Delta\mathbf{\theta}^{(t)})-{\mathcal{L}}(\mathbf{\theta}^{(t)})}}\\ {{=\nabla_{\mathbf{\theta}}{\mathcal{L}}(\mathbf{\theta}^{(t)})^{\top}\Delta\mathbf{\theta}^{(t)}+}}\\ {{\frac{1}{2}\Delta\mathbf{\theta}^{(t)^{\top}}\mathbf{H}^{(t)}\Delta\mathbf{\theta}^{(t)}+o(|\Delta\mathbf{\theta}^{(t)}|^{2})}}\\ {{\mathrm{~}}}\end{array}$$
(t)) (6)
(6) $\binom{7}{2}$ .
2) (7)
where H(t)is the Hessian matrix at step t. It is known that the largest eigenvalue λmax of Hessian matrices in a PLM is typically small (Shen et al.,
2019), i.e., ∆θ
(t)⊤H(t)∆θ
(t) ≤ λmax|∆θ
(t)| 22 ≈
0. Thus, we ignore the second-order term as well as the infinitesimal of higher order in Eq. (7):
$$\Delta{\cal L}^{(t)}=\nabla_{\mathbf{\theta}}{\cal L}(\mathbf{\theta}^{(t)})^{\top}\Delta\mathbf{\theta}^{(t)}\tag{8}$$ $$=\sum_{i=1}^{d}\mathbf{g}_{i}^{(t)}\cdot\Delta\mathbf{\theta}_{i}^{(t)}$$
(4) $\binom{5}{5}$ .
Under the iterative pruning setting, the actual temporal variation ∆θ
(t)
iof i-th parameter depends on whether it is allowed to be updated or forced to zeroed out. Formally, we use a binary variable x
(t) i to indicate the pruning decision of i-th parameter at time step t, i.e., x
(t)
i = 1 means θ
(t)
iis updated and x
(t)
i = 0 means θ
(t)
iis pruned. The temporal variation in Eq. (8) can now be rewritten as:
$$\Delta{\mathcal{L}}^{(t)}=\sum_{i=1}^{d}g_{i}^{(t)}(x_{i}^{(t)}\Delta\hat{\mathbf{\theta}}_{i}^{(t)}+(1-x_{i}^{(t)})(-\mathbf{\theta}_{i}^{(t)}))$$
where ∆θˆ
(t)
i = −η
(t)g
(t)
iis the gradient descent update. Finding the optimal pruning decision that leads to the smallest ∆L
(t)is now converted to an equality-constrained 0-1 integer linear programming (ILP) problem of variables x
(t):
$$\tilde{\mathbf{x}}^{(t)}=\operatorname*{arg\,min}_{\mathbf{x}^{(t)}}\Delta\mathcal{L}^{(t)}$$ s.t. $$\sum_{i=1}^{d}\mathbf{x}_{i}^{(t)}=r^{(t)},\mathbf{x}_{i}^{(t)}\in\{0,1\}$$ (10)
where r
(t)is the number of remaining parameters at step t according to the pre-defined sparsity scheduler. If we consider each parameter θ
(t)
ias an item and r
(t)as the total capacity, the problem that Eq. (10) defines can be treated as a special case of 0-1 Knapsack problem where the weight for each item is one and the value for each item is given by:
$${\mathbf{S}}_{i}^{(t)}=-{\mathbf{g}}_{i}^{(t)}\Delta{\hat{\mathbf{\theta}}}_{i}^{(t)}-{\mathbf{g}}_{i}^{(t)}{\mathbf{\theta}}_{i}^{(t)}\qquad(11)$$
Contrary to the general 0-1 Knapsack problem which is known to be NP-complete, fortunately, the equal-weight 0-1 Knapsack is a P problem. Its optimal solution can be obtained by sorting items in descending order according to their values and selecting the top-r
(t) ones:
$$\tilde{\mathbf{x}}_{i}^{(t)}=\begin{cases}1,&\text{if}\ \mathbf{S}_{i}^{(t)}\text{is in the top-}r^{(t)}\text{of}\ \mathbf{S}^{(t)}\\ 0,&\text{otherwise}\end{cases}\tag{12}$$
$$(12)$$
Putting it in the context of iterative pruning, our analysis theoretically reveals the validity of: (1)
selecting parameters based on the ranking of certain importance criterion; (2) using Eq. (11) as a principled new importance criterion.
## 3.2 Self-Regularization
In vanilla fine-tuning, the learning objective L is defined as the training error Ler (a.k.a empirical risk in statistical learning) over the empirical data distribution. However, minimizing such training error does not translate to good generalization.
Moreover, as iterative pruning proceeds, the number of non-zero parameters in the model monotonically decreases. The reduced model capacity increases the learning difficulty (Lopez-Paz et al., 2015; Mirzadeh et al., 2019) and usually leads to degenerated generalization performance of the sparsified model (Sanh et al., 2020).
Confronting the above challenges, we propose an effective self-regularization scheme tailored to improving the model's generalization ability during iterative pruning. Concretely, besides learning from the hard label of training data, the output of the current model with parameter θ
(t)is also regularized by the output of the latest best-performing model checkpoint with parameter θ
(tl), where tl ≤
t denotes the time step at which the latest checkpoint was saved. The learning objective of selfregularization is defined as:
$${\mathcal{L}}_{s r}={\mathcal{D}}(y_{\boldsymbol{\theta}^{(t)}},y_{\boldsymbol{\theta}^{(t_{l})}})\qquad\qquad(13)$$
where D can be any divergence metric, e.g., KLdivergence for classification tasks. Lsr is then integrated with the original learning objective, i.e.,
L = Ler + Lsr.
Why does self-regularization work? Our selfregularization is similar to teacher-student knowledge distillation in the sense that the model output is regularized by the output of another model.
However, the most critical difference is that the
"teacher" in self-regularization is instantiated by checkpoint with increasing sparsity, such that the capacity gap between "teacher" and "student" is dynamically adjusted. We theoretically justify the effectiveness of self-regularization as follows:
Theorem 1. Let ti and tj where ti ≥ tj denote the time steps at which two different checkpoints are saved; Let R(fθ
(t←ti
) ) and R(fθ
(t←tj
) ) *denote the* expected generalization error of models learned from fθ
(ti
) and fθ
(tj
) ; Let n denotes the size of training data; | · |C *denotes a capacity measure of* function class Fθ. Based on previous expositions on VC theory (Vapnik, *1998), we have the following asymptotic generalization bounds hold:*
R(fθ (t←ti ) ) ≤ O( |Fθ(t) |C nαi) + inf Fθ (t←ti ) R(fθ(t) ) | {z } bound(fθ (t←ti )) R(fθ (t←tj ) ) ≤ O( |Fθ(t) |C n αj) + inf Fθ (t←tj ) R(fθ(t) ) | {z } bound(fθ (t←tj ))
Because θ
(ti)*is a later checkpoint with higher* sparsity than θ
(tj )*, we have the learning speed* 1 ≥ αi ≥ αj ≥
1 2
, then the following inequality holds with high probability:
$$b o u n d(f_{\pmb{\theta}^{(t\gets t_{i})}})\leq b o u n d(f_{\pmb{\theta}^{(t\gets t_{j})}})$$
In summary, self-regularization works by enabling a tighter generalization bound compared to learning from training data alone or a static dense teacher as in knowledge distillation. Please refer to Appendix B for detailed derivation.
## 3.3 The Algorithm
Here we formally summarize our algorithm PINS (Pruning with principled Importance aNd Self-regularization) in Algorithm 1:
Algorithm 1 PINS
Input: Training set Dtr = {(xi, yi)}
N
i=1; Validation set Dval; pre-trained parameters θ; maximum training steps T; evaluation interval t*eval*.
Initialize: θ
(0) ← θ, tl ← 0, best validation accuracy acctl ← −INF.
1: for t = 0 to T − 1 do 2: Sample a mini-batch (x, y) from Dtr 3: Compute current model's output yθ(t)
4: Compute latest best-performing checkpoint's output yθ
(tl
)
5: Compute L based on yθ(t) , yθ
(tl
) and y 6: Compute S
(t) via Eq. (11)
7: Compute θ
(t+1) via Eq. (2) and Eq. (3)
8: if t%t*eval* = 0 and acct>acctl then 9: acctl ← acct, θ
(tl) ← θ
(t)
Output: the pruned parameters θ
(T).
## 4 Experiments
In this section, We compare PINS with state-ofthe-art pruning algorithms and perform detailed analysis to understand the effectiveness of PINS.
## 4.1 Setup 4.1.1 Tasks
We conduct experiments on a comprehensive spectrum of tasks following standard data splits.
Natural Language Understanding. We opt for tasks from the GLUE (Wang et al., 2018) benchmark, including linguistic acceptability (CoLA),
natural language inference (RTE, QNLI, MNLI),
paraphrase (MRPC, QQP), sentiment analysis (SST-2) and textual similarity (STS-B).
Because the official test set of GLUE is hidden, we randomly split a small portion of training set as validation set and treat the original validation set as test set.
Question Answering. We use SQuAD v1.1 (Rajpurkar et al., 2016) as a representative dataset for extractive question answering following previous work (Zhang et al., 2022).
Named Entity Recognition. We also examine our approach on CoNLL 2003 (Tjong Kim Sang and De Meulder, 2003) for token-level named entity recognition task.
Data-to-Text Generation. Besides language understanding tasks, we also extend our evaluation to data-to-text generation on three datasets:
E2E (Dušek et al., 2020), DART (Radev et al.,
2020), and WebNLG (Zhou and Lampouras, 2020), which involves generating a piece of fluent text from a set of structured relational triples.
## 4.1.2 Baselines
Magnitude-based. Iterative magnitude pruning (IMP) (Zhu and Gupta, 2017) is the state-ofthe-art magnitude-based approach.
Sensitivity-based. l0-regularization (Louizos et al., 2018) trains masking variables via reparametrization trick with l0 penalty; SMvP (Sanh et al., 2020) uses accumulated sensitivity as importance metric; PST (Li et al., 2022) proposed a hybrid importance criterion combining both magnitude and sensitivity; PLATON (Zhang et al., 2022)
uses a modified variant of sensitivity by exponential moving average and uncertainty re-weighting.
## 4.1.3 Implementation Details
We mainly conduct experiments on the pre-trained BERTbase (Devlin et al., 2019) as a pruning target for all tasks except data-to-text generation. We defer the pruning results of MiniLM12L-384H (Wang et al., 2020) and Electrabase (Clark et al., 2020) to Appendix A. For data-to-text generation, we adopt the pre-trained GPT-2 (Radford et al., 2018) following a prior study (Li et al., 2022).
During pruning, we employ the cubic sparsity scheduler (Sanh et al., 2020; Zhang et al., 2022) to gradually increase the sparsity level from 0 to the specified target sparsity. To avoid tremendous computation cost brought by hyper-parameter tuning, we only search the batch size from {16, 32}
and fix the learning rate as 3e-5 for all experiments on GLUE and CoNLL. For SQuAD v1.1, we fix the batch size as 16 and the learning rate as 3e-5 following Zhang et al. (2022). We adopt AdamW (Loshchilov and Hutter, 2017) as the default optimizer. To reduce the variance induced by mini-batch sampling, we adopt a smoothing technique similar to PLATON. We run each experi-
Sparsity Method RTE
Acc
MRPC
F1
STS-B
Pearson
CoLA
Mcc
SST-2
Acc
QNLI
Acc
MNLI
Acc
QQP
Acc Avg.
0% Fine-tune† 69.3 90.3 90.2 58.3 92.4 91.3 84.0 91.5 83.4
IMP† 65.7 86.2 86.8 42.5 84.3 89.2 82.2 86.0 77.9 l0-regularization† 63.2 80.2 82.8 0.0 85.0 85.0 80.8 88.5 70.7
SMvP† 62.8 86.7 87.8 48.5 89.0 88.3 81.9 90.6 79.5
PST 63.0 87.4 88.0 44.6 89.3 88.3 79.3 88.9 78.6
PLATON† 68.6 89.8 89.0 54.5 91.2 90.1 83.3 90.7 82.2
PINS (ours) 72.7 90.9 89.2 57.1 91.9 91.2 83.9 90.9 **83.5**
| Sparsity | Method | RTE | MRPC | STS-B | CoLA |
|------------|----------|---------|--------|---------|--------|
| Acc | F1 | Pearson | Mcc | | |
| 80% 90% | | | | | |
IMP† 57.4 80.3 83.4 18.3 80.7 86.6 78.9 78.8 70.5 l0-regularizatio† 59.9 79.5 82.7 0.0 82.5 82.8 78.4 87.6 69.1 SMvP† 58.8 85.9 86.5 0.0 87.4 86.6 80.9 90.2 72.1 PST‡ 62.8 85.6 81.7 42.5 88.7 86.0 76.7 83.9 76.0
PLATON† 65.3 88.8 87.4 44.3 90.5 88.9 81.8 90.2 79.6
PINS (ours) 68.5 90.1 87.9 49.8 91.0 89.5 82.7 90.6 **81.3**
Sparsity 80% 70% 60% 50%
Fine-tune† 88.1
IMP† 82.9 86.5 86.7 87.0
l0-regularization† 81.9 82.8 83.9 84.6 SMvP† - 84.6 - 85.8
PLATON† 86.1 86.7 86.9 87.2
PINS (ours) **86.4 86.9 87.4 88.0**
ment five times with different random seeds and report the average results (significance tests with p-value < 0.05 are conducted for all performance gains).
## 4.2 Main Results 4.2.1 Comparison With Baselines
Natural language understanding We present the experimental results on GLUE at high sparsity, i.e., 80% and 90% in Table 1. Among all baselines, sensitivity-based methods generally achieve better results than magnitude-based IMP,
which implies the importance of training dynamics when designing pruning criteria. We can see that PINS delivers more accurate sparsified models on all datasets at both sparsity levels. The advantage of PINS is more evident on small datasets.
For example, PINS outperforms the previous bestperforming baseline (PLATON) by 4.1 and 2.6 points on RTE and CoLA at 80% sparsity, where there are only a few thousand training data. Under extremely high sparsity, i.e., 90%, PINS is still able to retain 97.5% overall performance of finetuning, outperforming 95.4% of the previous best method PLATON. Notably, PINS even surpasses fine-tuning on RTE and MRPC at 80% sparsity. This can be attributed to the fact that PLMs are heavily over-parameterized and PINS can effectively identify parameters crucial to the task to realize low bias and low variance simultaneously.
Question answering Table 2 summarizes the pruning results on SQuAD v1.1. Interestingly, IMP outperforms all sensitivity-based methods except for PLATON at all considered sparsity levels, in contrast to the observations on GLUE. Our method, however, consistently yields the best performance at all sparsity settings. Named entity recognition Table 3 demonstrates the pruning results on CoNLL 2003 dataset for named entity recognition. At 70% sparsity, our method almost matches the performance of fine-tuning, outperforming baselines on all evaluation metrics. The gain of PINS is more prominent
| Sparsity | Method | P | R | F1 |
|------------|-----------|------|------|------|
| 0% | Fine-tune | 93.5 | 94.6 | 94.0 |
| 70% 80% | | | | |
| Sparsity | Method | E2E | DART | WebNLG | | | | |
|-------------|-----------|--------|--------|----------|------|--------|------|------|
| BLEU | ROUGE-L | METEOR | BLEU | BLEURT | BLEU | BLEURT | | |
| 0% | Fine-tune | 69.4 | 71.1 | 46.2 | 46.6 | 0.30 | 46.9 | 0.23 |
| IMP | 69.3 | 71.0 | 45.8 | 44.9 | 0.22 | 39.9 | 0.00 | |
| 80% | PST | 69.4 | 70.8 | 45.9 | 44.1 | 0.22 | 44.3 | 0.16 |
| PINS (ours) | 69.6 | 71.8 | 46.6 | 46.2 | 0.29 | 45.5 | 0.18 | |
Table 5: Results with BRETbase on the GLUE development set under medium-to-low sparsity regime. Numbers are the mean of five trials with different random seeds. PINS outperforms fine-tuning at medium-to-low sparsity.
| Sparsity | Method | RTE | MRPC | STS-B | CoLA | SST-2 | QNLI | MNLI | QQP Acc | Avg. |
|------------|-----------|---------|--------|---------|--------|---------|--------|--------|-----------|--------|
| Acc | F1 | Pearson | Mcc | Acc | Acc | Acc | | | | |
| 0% | Fine-tune | 69.3 | 90.3 | 90.2 | 58.3 | 92.4 | 91.3 | 84.0 | 91.5 | 83.4 |
| 50% | PINS | 70.8 | 91.4 | 89.7 | 60.6 | 92.9 | 91.8 | 85.1 | 91.3 | 84.2 |
| 30% | PINS | 71.7 | 91.2 | 89.8 | 60.4 | 93.3 | 92.0 | 85.1 | 91.5 | 84.4 |
## When Further Increasing Sparsity.
Data-to-text generation Table 4 shows the pruning results on E2E, DART and WebNLG at 80% sparsity. PINS achieves the best performance on all three datasets in all evaluation metrics. In particular, PINS delivers performance even better than fine-tuning on the E2E dataset by 0.7 ROUGE-L and 0.4 METEOR scores, respectively. We posit that this is due to the relative easiness of E2E compared to the other two datasets.
## 4.2.2 Results At Medium-To-Low Sparsity
The typical utility of pruning is to produce a sparse yet competitive model that can benefit downstream applications in terms of efficiency without sacrificing much task accuracy. We hypothesize that PINS might also bring a regularization effect compared to vanilla fine-tuning under the medium-to-low sparsity regime.
As shown in Table 5, when specifying a medium-to-low sparsity, e.g., 50%∼30%, our method can effectively play a role of regularization and improve model performance compared to vanilla fine-tuning. With half of the parameters being pruned, the sparse model produced by PINS
outperforms fine-tuning by 1 percentage point on the GLUE score. This observation suggests that appropriate pruning can effectively reduce variance without hurting model expressiveness.
## 4.3 Ablation Study
The self-regularization scheme is proposed and integrated into PINS to improve model generaliza-
Table 6: Ablation Study with BERTbase on the learning objective during iterative pruning at 80% sparsity.
| L | RTE | CoLA | MRPC |
|----------------------------|-------|--------|--------|
| empirical risk | 70.9 | 55.4 | 90.6 |
| w/ knowledge distillatiojn | 70.3 | 56.0 | 90.6 |
| w/ self-regularization | 72.7 | 57.1 | 90.9 |
tion. Here we investigate the effectiveness of selfregularization by comparing it to the conventional knowledge distillation scheme and the classical empirical risk minimization scheme.
The pruning results of using the three different learning objectives on RTE, CoLA, and MRPC are listed in Table 6. Pruning with PINS using classical empirical risk minimization still achieves performance better than existing baselines (Table 1).
Learning from a densely fine-tuned BERTbase as the teacher does not always improve and sometime may even hurt performance. In contrast, our proposed self-regularization consistently boosts model performance, which echoes our theoretical justification in Section 3.2.
## 4.4 Analysis
We provide an in-depth analysis of various importance criteria to uncover more valuable insights.
Sparsity pattern of weight matrices We are interested in the sparsity pattern produced by different pruning criteria. To this end, we plot the remaining parameters' distribution of the same weight matrix in BERTbase pruned via magnitude, sensitivity, and PINS in Figure 1. We observe
![7_image_0.png](7_image_0.png)
![7_image_2.png](7_image_2.png)
that magnitude-based pruning generates a sparsity pattern close to randomness. Sensitivity-based pruning produces a more structured pattern where the remaining parameters tend to occupy complete rows. Interestingly, the sparsity pattern produced by PINS exhibits the highest concentration on specific rows. This implies that the parameters contributing most to the end-task are preferably distributed in a structured way and PINS is more effective at extracting such patterns.
Layerwise rank distribution The highly structured sparsity pattern generated by PINS intrigues our interest to further analyze the intrinsic property of parameter matrices after pruning. Specifically, we inspect the matrix rank as it is usually associated with the complexity of matrix. To this end, we visualize the layerwise rank distribution of BERTbase pruned using different importance
![7_image_1.png](7_image_1.png)
| Sparsity | Time(s) | Storage(MB) | Acc. |
|------------|--------------|---------------|--------|
| 0% | 0.110 (1.0x) | 340 (1.0x) | 69.3 |
| 80% | 0.041 (2.7x) | 38 (8.9x) | 69.0 |
criteria on SST-2 dataset. As shown in Figure 4, magnitude pruning produces sparse matrices that are still near full-rank despite containing 80% zeros. Sensitivity pruning tends to generate sparsity pattern with lower rank compared to magnitude pruning. Notably, model pruned by PINS shows consistently lower matrix rank than the other two criteria. This implies that PINS is more effective at identifying the low-dimensional task representation during adaptation, which is usually correlated with tighter generalization bounds (Arora et al., 2018; Aghajanyan et al., 2021). Empirical validation of importance criterion In Section 3.1 we prove that the pruning decision derived by our importance criterion is theoretically optimal. Here we empirically validate this point by visualizing the change of learning objective as pruning proceeds. Figure 3 illustrates that our importance criterion indeed leads to the most significant decrease in the learning objective compared to heuristical ones like magnitude and sensitivity.
## 4.5 Efficiency Gain
We can exploit the resulting high sparsity to attain practical efficiency gain on storage and inference speed. We first apply quantization upon the pruned model and transform it into INT8 data type before saving it using Compressed Sparse Row (CSR)
format. We then leverage a sparsity-aware runtime (Kurtz et al., 2020) for accelerating inference.
As shown in Table 7, on the RTE dataset, the disk space and inference time of BERTbase pruned at 80% sparsity can be reduced by roughly 8.9x and 2.7x respectively with negligible accuracy loss.
## 5 Conclusion
We present PINS, a new iterative pruning method that hinges on a principled weight importance criterion to deliver the optimal stepwise pruning decision. Integrated with a self-regularization scheme tailored to pruning-during-adaptation, PINS allows for provably better generalization ability. Empirical experiments and analyses confirm the effectiveness of our method and shed further light on the different sparsity patterns produced by PINS and other existing methods.
## Limitations
Compared to the empirical risk minimization scheme, the introduced self-regularization scheme incurs certain overhead because each mini-batch of data will go through two models. For BERTbase scale pre-trained language models, the additional memory overhead is about 27% and the additional training time overhead is about 30%. Nevertheless, once pruned, the sparsified model can enjoy considerable efficiency gains in terms of storage and inference time. Therefore, this is a trade-off that future practitioners might need to consider.
## Acknowledgments
This work was generously supported by the CMB
Credit Card Center & SJTU joint research grant, and Meituan-SJTU joint research grant.
## References
Armen Aghajanyan, Sonal Gupta, and Luke Zettlemoyer. 2021. Intrinsic dimensionality explains the effectiveness of language model fine-tuning. In *Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language* Processing (Volume 1: Long Papers), pages 7319–
7328, Online. Association for Computational Linguistics.
Sanjeev Arora, Rong Ge, Behnam Neyshabur, and Yi Zhang. 2018. Stronger generalization bounds for deep nets via a compression approach. In *International Conference on Machine Learning*, pages 254–
263. PMLR.
Matan Ben Noach and Yoav Goldberg. 2020. Compressing pre-trained language models by matrix de-
composition. In Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing, pages 884–889, Suzhou, China. Association for Computational Linguistics.
Kevin Clark, Minh-Thang Luong, Quoc V Le, and Christopher D Manning. 2020. Electra: Pre-training text encoders as discriminators rather than generators.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers),
pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics.
Ondˇrej Dušek, Jekaterina Novikova, and Verena Rieser.
2020. Evaluating the State-of-the-Art of End-to-End Natural Language Generation: The E2E NLG Challenge. *Computer Speech & Language*, 59:123–156.
Jonathan Frankle and Michael Carbin. 2018. The lottery ticket hypothesis: Training pruned neural networks. *CoRR*, abs/1803.03635.
Mark Kurtz, Justin Kopinsky, Rati Gelashvili, Alexander Matveev, John Carr, Michael Goin, William Leiserson, Sage Moore, Bill Nell, Nir Shavit, and Dan Alistarh. 2020. Inducing and exploiting activation sparsity for fast inference on deep neural networks. In *Proceedings of the 37th International* Conference on Machine Learning, volume 119 of Proceedings of Machine Learning Research, pages 5533–5543, Virtual. PMLR.
Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut.
2020. Albert: A lite bert for self-supervised learning of language representations. In *ICLR*. OpenReview.net.
Namhoon Lee, Thalaiyasingam Ajanthan, and Philip Torr. 2018. Snip: Single-shot network pruning based on connection sensitivity. In International Conference on Learning Representations.
Yuchao Li, Fuli Luo, Chuanqi Tan, Mengdi Wang, Songfang Huang, Shen Li, and Junjie Bai. 2022.
Parameter-efficient sparsity for large language models fine-tuning. In *Proceedings of the Thirty-First* International Joint Conference on Artificial Intelligence, IJCAI-22, pages 4223–4229. International Joint Conferences on Artificial Intelligence Organization. Main Track.
Chen Liang, Simiao Zuo, Minshuo Chen, Haoming Jiang, Xiaodong Liu, Pengcheng He, Tuo Zhao, and
Weizhu Chen. 2021. Super tickets in pre-trained language models: From model compression to improving generalization. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 6524–6538, Online. Association for Computational Linguistics.
David Lopez-Paz, Léon Bottou, Bernhard Schölkopf, and Vladimir Vapnik. 2015. Unifying distillation and privileged information. *arXiv preprint* arXiv:1511.03643.
Ilya Loshchilov and Frank Hutter. 2017. Fixing weight decay regularization in adam. *CoRR*,
abs/1711.05101.
Christos Louizos, Max Welling, and Diederik P
Kingma. 2018. Learning sparse neural networks through l_0 regularization. arXiv preprint arXiv:1712.01312.
Seyed-Iman Mirzadeh, Mehrdad Farajtabar, Ang Li, and Hassan Ghasemzadeh. 2019. Improved knowledge distillation via teacher assistant: Bridging the gap between student and teacher. *CoRR*, abs/1902.03393.
Dragomir Radev, Rui Zhang, Amrit Rau, Abhinand Sivaprasad, Chiachun Hsieh, Nazneen Fatema Rajani, Xiangru Tang, Aadit Vyas, Neha Verma, Pranav Krishna, Yangxiaokang Liu, Nadia Irwanto, Jessica Pan, Faiaz Rahman, Ahmad Zaidi, Murori Mutuma, Yasin Tarabar, Ankit Gupta, Tao Yu, Yi Chern Tan, Xi Victoria Lin, Caiming Xiong, and Richard Socher. 2020. Dart: Open-domain structured data record to text generation. *arXiv preprint* arXiv:2007.02871.
Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2018. Language models are unsupervised multitask learners.
Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. Squad: 100, 000+ questions for machine comprehension of text. *CoRR*,
abs/1606.05250.
Alex Renda, Jonathan Frankle, and Michael Carbin.
2020. Comparing rewinding and fine-tuning in neural network pruning. *CoRR*, abs/2003.02389.
Victor Sanh, Thomas Wolf, and Alexander Rush.
2020. Movement pruning: Adaptive sparsity by fine-tuning. In *Advances in Neural Information Processing Systems*, volume 33, pages 20378–20389.
Curran Associates, Inc.
Sheng Shen, Zhen Dong, Jiayu Ye, Linjian Ma, Zhewei Yao, Amir Gholami, Michael W. Mahoney, and Kurt Keutzer. 2019. Q-BERT: hessian based ultra low precision quantization of BERT. *CoRR*,
abs/1909.05840.
Siqi Sun, Yu Cheng, Zhe Gan, and Jingjing Liu. 2019.
Patient knowledge distillation for BERT model compression. *CoRR*, abs/1908.09355.
Erik F. Tjong Kim Sang and Fien De Meulder.
2003. Introduction to the CoNLL-2003 shared task:
Language-independent named entity recognition. In Proceedings of the Seventh Conference on Natural Language Learning at HLT-NAACL 2003, pages 142–147.
Vladimir Vapnik. 1998. *Statistical learning theory*.
Wiley.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Ł ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In *Advances in Neural Information Processing Systems*, volume 30. Curran Associates, Inc.
Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R. Bowman. 2018.
GLUE: A multi-task benchmark and analysis platform for natural language understanding. *CoRR*,
abs/1804.07461.
Wenhui Wang, Furu Wei, Li Dong, Hangbo Bao, Nan Yang, and Ming Zhou. 2020. Minilm: Deep selfattention distillation for task-agnostic compression of pre-trained transformers. *CoRR*, abs/2002.10957.
Qingru Zhang, Simiao Zuo, Chen Liang, Alexander Bukharin, Pengcheng He, Weizhu Chen, and Tuo Zhao. 2022. Platon: Pruning large transformer models with upper confidence bound of weight importance. In International Conference on Machine Learning, pages 26809–26823. PMLR.
Giulio Zhou and Gerasimos Lampouras. 2020.
WebNLG challenge 2020: Language agnostic delexicalisation for multilingual RDF-to-text generation. In *Proceedings of the 3rd International* Workshop on Natural Language Generation from the Semantic Web (WebNLG+), pages 186–191, Dublin, Ireland (Virtual). Association for Computational Linguistics.
Michael Zhu and Suyog Gupta. 2017. To prune, or not to prune: exploring the efficacy of pruning for model compression. *arXiv preprint arXiv:1710.01878*.
## A Results With More Plms On Subset Of Glue
In addition the widely used BERT and GPT-2 models, we also perform pruning experiments upon other two pre-trained language models:
Electrabase and MiniLM12L-384H to further verify the effectiveness of our method.
Due to computing resource constraint, we restrict our experiments on a subset of GLUE
task, including RTE, CoLA and QNLI at 80% and 90% sparsity. We compare PINS against IMP and PLATON as two representative baselines for magnitude-based and sensitivity-based pruning methods. We fix the batch size as 32 and learning rate as 3e-5 similar to the BERT experiments. We illustrate the pruning results on Table 8 and Table 9. At both sparsity levels, PINS
consistently outperforms IMP and PLATON on all three datasets, verifying the general effectiveness of PINS for language model pruning.
| Sparsity | Method | RTE | CoLA | |
|-------------|-----------|-------|--------|------|
| Acc | Mcc | | | |
| 0% | Fine-tune | 73.0 | 58.5 | 91.5 |
| IMP | 60.5 | 21.6 | 87.5 | |
| 80% | PLATON | 68.2 | 54.1 | 89.8 |
| PINS (ours) | 69.5 | 54.4 | 90.4 | |
| IMP | 57.5 | 14.1 | 83.9 | |
| 90% | PLATON | 63.1 | 38.8 | 88.0 |
| PINS (ours) | 66.2 | 44.8 | 88.6 | |
Table 8: Results with MiniLM12L-384H on the GLUE
development set.
Sparsity Method RTE
Acc
CoLA
Mcc
QNLI
Acc
0% Fine-tune 81.9 69.0 93.1
80%
IMP 59.9 11.2 87.5
PLATON 73.6 60.0 91.0 PINS (ours) 75.5 63.7 92.0
90%
IMP 52.9 0.0 83.0
PLATON 69.9 48.0 89.7
PINS (ours) 72.3 49.2 90.2
Table 9: Results with Electrabase on the GLUE development set.
## B Proof Of Theorem 1
Proof. Let ti and tj where ti ≥ tj denote the time steps at which two different checkpoints are saved; Let R(fθ
(t←ti
) ) and R(fθ
(t←tj
) ) denote the expected generalization error of models learned from fθ
(ti
) and fθ
(tj
) ; Let n denotes the size of training data; *| · |*C denotes a capacity measure like VC-dimension for function class Fθ. Based on previous expositions on VC theory, the following asymptotic generalization bound holds:
R(fθ (t←ti ) ) = R(fθ (t←ti ) ) − R(fθ (ti ) ) + R(fθ (ti ) ) ≤ O( |Fθ(t) |C nαi) + ϵt,ti + R(fθ (ti ) ) = O( |Fθ(t) |C nαi) + inf fθ (t)∈Fθ (t←ti ) R(fθ(t) ) | {z } bound(fθ (t←ti )) R(fθ (t←tj ) ) = R(fθ (t←tj ) ) − R(fθ (tj ) ) + R(fθ (tj ) ) ≤ O( |Fθ(t) |C n αj) + ϵt,tj + R(fθ (tj ) ) = O( |Fθ(t) |C n αj) + inf fθ (t)∈Fθ (t←tj ) R(fθ(t) ) | {z } bound(fθ (t←tj ))
where ϵ*t,ti* is the approximation error of function class Fθ
(t←ti
) with respect to fθ
(ti
) . ϵ*t,tj* is defined in analogy. Because: (1) θ
(ti)is a later checkpoint with higher sparsity than θ
(tj ), we have the learning speed 1 ≥ αi ≥ αj ≥
1 2
; (2) fθ
(ti
) has lower generalization error than fθ
(tj
) , we have the following inequality holds with high probability:
$$b o u n d(f_{\pmb\theta^{(t\gets t_{i})}})\leq b o u n d(f_{\pmb\theta^{(t\gets t_{j})}})$$
$\square$
## C More Post-Pruning Analyses
This section presents more visualized analyses of models sparsified by different pruning methods.
Figure 5 shows the layerwise rank distribution of BERTbase pruned using different importance criteria on the RTE dataset. The observation here is similar to what is discussed in the main body of the paper: PINS exhibits the lowest average matrix rank in the sparsified model compared to the other two criteria.
Figure 4 illustrates the weight distribution of BERTbase pruning using different importance criteria. From the left figure we can see that magnitude-based pruning tends to keep parameters with high absolute values, which is expected
![11_image_0.png](11_image_0.png)
![11_image_1.png](11_image_1.png)
based on its definition. Sensitivity and PINS produce similar weight value distribution mainly because the two methods both contain the gθ term in their importance calculation. Despite the similarity, we can still observe that PINS produces smoother distribution than sensitivity and covers more weights with larger absolute values.
The right figure shows the layerwise distribution of remaining parameters after pruning. A
clear trend is that PINS tends to retain more parameters in the middle layers (4-7), which also coincided with the inter-model sparsity pattern analysis in the main body of our paper. Both sensitivity and PINS remove a large proportion of parameters in the top layers (10-12) while magnitude-based pruning has no preference for model layers.
## D Sparsity Scheduler
The proportion of remaining weights is controlled by the sparsity scheduler, here we adopt the commonly used cubic sparsity schedule to progressively reach target sparsity, i.e., r
(t)at time step t within the maximum time steps T is given by:
$$\begin{cases}r_{i}&t\in[0,t_{i})\\ r_{f}+(r_{i}-r_{f})(\frac{T-t_{f}-t}{T-t_{f}-t_{i}})^{3}&t\in[t_{i},T-t_{f})\\ r_{f}&\text{otherwise}\end{cases}\tag{14}$$
where ri = 1.0, rf is the final percent of remained parameters, ti and tf are the warmup and cooldown steps.
## E Accelerating Inference And Reducing Storage
We attain practical efficiency gain in terms of inference time and disk storage space using different sets of off-the-shelf techniques. Specifically, we use DeepSparse2, a sparsity-aware inference runtime to accelerate inference of sparse model on CPUs. We also utilize the Pytorch builtin quantization function3and Compressed Sparse Row (CSR) format4to achieve a much smaller disk space requirement.
2https://github.com/neuralmagic/
deepsparse 3https://pytorch.org/docs/stable/
quantization.html 4https://github.com/huggingface/block_
movement_pruning/blob/master/Saving_ PruneBERT.ipynb
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
✓ A1. Did you describe the limitations of your work?
6 A2. Did you discuss any potential risks of your work?
Not applicable. Left blank.
✓ A3. Do the abstract and introduction summarize the paper's main claims?
1
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✓ **Did You Use Or Create Scientific Artifacts?** 4.1.1
✓ B1. Did you cite the creators of artifacts you used?
4.1.1 B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
Not applicable. Left blank.
✗ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
Left blank.
✗ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
Left blank.
✗ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
Left blank.
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
4.1.1;4.1.3
## C ✓ **Did You Run Computational Experiments?** 4
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
4.1.2 The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance.
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
4.1.3
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
4.2
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
4.2 D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
Not applicable. Left blank.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? Not applicable. Left blank.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
Not applicable. Left blank.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
Not applicable. Left blank.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
Not applicable. Left blank. |
liu-etal-2023-magic | The Magic of {IF}: Investigating Causal Reasoning Abilities in Large Language Models of Code | https://aclanthology.org/2023.findings-acl.574 | Causal reasoning, the ability to identify cause-and-effect relationship, is crucial in human thinking. Although large language models (LLMs) succeed in many NLP tasks, it is still challenging for them to conduct complex causal reasoning like abductive reasoning and counterfactual reasoning. Given the fact that programming code may express causal relations more often and explicitly with conditional statements like {``}if{``}, we want to explore whether Code-LLMs acquire better causal reasoning abilities. Our experiments show that compared to text-only LLMs, Code-LLMs with code prompts are better causal reasoners. We further intervene on the prompts from different aspects, and discover that the key point is the programming structure. Code and data are available at \url{https://github.com/xxxiaol/magic-if}. |
## The Magic Of If**: Investigating Causal Reasoning Abilities In Large** Language Models Of Code
Xiao Liu1, Da Yin2, Chen Zhang1**, Yansong Feng**1,3∗and **Dongyan Zhao**1,4,5 1Wangxuan Institute of Computer Technology, Peking University 2Computer Science Department, University of California, Los Angeles 3The MOE Key Laboratory of Computational Linguistics, Peking University 4Beijing Institute for General Artificial Intelligence 5State Key Laboratory of Media Convergence Production Technology and Systems
{lxlisa,zhangch,fengyansong,zhaody}@pku.edu.cn [email protected]
## Abstract
Causal reasoning, the ability to identify causeand-effect relationship, is crucial in human thinking. Although large language models
(LLMs) succeed in many NLP tasks, it is still challenging for them to conduct complex causal reasoning like abductive reasoning and counterfactual reasoning. Given the fact that programming code may express causal relations more often and explicitly with conditional statements like if, we want to explore whether CodeLLMs acquire better causal reasoning abilities.
Our experiments show that compared to textonly LLMs, Code-LLMs with code prompts are significantly better in causal reasoning. We further intervene on the prompts from different aspects, and discover that the programming structure is crucial in code prompt design, while Code-LLMs are robust towards format perturbations. Code and data are available at https://github.com/xxxiaol/magic-if.
## 1 Introduction
Human beings rely heavily on the capacity for causal reasoning (Sloman, 2005; Hagmayer et al.,
2007). People understand the observed facts, predict future events, and speculate about what might have happened if things had been different with the help of their causal reasoning skills. For instance, when we go home and find a mess, we probably want to figure out why it happened. If we determine that a bird flew into the house, we might then consider whether the mess could have been avoided if we had closed the window.
Although large language models (LLMs) demonstrate great language understanding and generation abilities, it is still challenging for them to perform complex causal reasoning such as the example above. Powerful LLMs are able to understand single cause-and-effect relations (Brown et al., 2020;
∗Corresponding author.
![0_image_0.png](0_image_0.png)
Wang et al., 2021), like *a man losing his balance* causes him to *fell*. However, when it comes to more complex causal structures involving multiple events and alternative branches (like *close the window* or not), LLMs perform much inferior to humans (Bhagavatula et al., 2019; Qin et al., 2019). In this paper, we consider two challenging causal reasoning tasks:
abductive reasoning and counterfactual reasoning.
Abductive reasoning requires models to generate a plausible reason for the *ending* while being consistent with the *premise*. Counterfactual reasoning asks what will occur in the *counterfactual branch*.
Causal relationships between events in these tasks are shown in Figure 1.
A potential difficulty for LLMs to learn complex
Abductive Reasoning Counterfactual Reasoning
![1_image_1.png](1_image_1.png)
![1_image_0.png](1_image_0.png)
causal structures is that they are rarely expressed explicitly in the text. News articles or narratives may contain multiple events with causal relationships, like an incident and a chain of consequences. However, these events are often written chronologically, and it is hard to extract the causal structure from the text without further annotation. Branches are expressed rarer in text, except for the multi-branching storytelling style (Nisi and Haahr, 2006).
On the other hand, causal relations are exhibited more commonly in code. Conditional statements like if direct the computer to execute certain commands, provided a condition is met. This explicitly demonstrates the causal relationship between the condition block and the *execution block*. Code can also express branching with elif or switch statements, and the nesting feature enables code to describe more complex structures1.
This motivates us to utilize code models in natural language causal reasoning. Recently, large language models of code (Code-LLMs) are receiving increasing attention (Chen et al., 2021; Xu et al.,
2022). They exhibit strong code generation performance, and their structural prediction abilities help complete structural natural language tasks like argument graph generation (Madaan et al., 2022)
and event argument extraction (Wang et al., 2022b).
Being pre-trained on code with abundant causal expressions, Code-LLMs may also have gained better causal reasoning abilities.
We conduct experiments on the unsupervised abductive reasoning and counterfactual reasoning tasks. To generate task outputs, we design code prompts like Figure 2 to clearly represent the causal structures of the tasks. Results show that Code-1Although causal expressions like if are also used in natural languages, representing complex causal structures in text is not as clear and structured as in code.
LLMs with code prompts perform much better than text-only LLMs and previous methods. To better understand why the code prompts are effective, we break down the prompts and analyze the influence of different aspects. We find that Code-LLMs are very sensitive to the programming structure (specifically, the conditional statements),
while being robust towards format perturbations and programming language changes.
Our main contributions are as follows: 1) We design code prompts to tackle causal reasoning tasks, by leveraging conditional statements in code to represent causal structures. 2) We evaluate CodeLLMs with code prompts on the abductive reasoning and counterfactual reasoning tasks, and exhibit that code models with code prompts are better causal reasoners than text models. 3) We break down the code prompt in detail and find that the programming structure is crucial to the performance.
## 2 Modeling Causal Structure With Code
We convert the input of causal reasoning tasks into the form of code prompt for Code-LLMs to understand better. We expect the prompts to meet two requirements: 1) clearly represent the causal relationships between events, and 2) as most CodeLLMs only support generating at the end, the target output should appear at the end of the prompts.
The first requirement is addressed with conditional statements. However, for the second, the target prediction is not always the last part of the conditional statements, e.g., in abductive reasoning we want to predict the hypothesis, which is the condition in the if structure. To address this, we uniformly use functions to represent events. As shown in Figure 2, the causal structure is described in the main function. All the event functions are listed afterwards,
| BLEU4 | ROUGEL | CIDEr | BERTScore | |
|--------------------------|----------|---------|-------------|------|
| DELOREAN | 1.6 | 19.1 | 7.9 | 41.7 |
| COLD | 1.8 | 19.5 | 10.7 | 42.7 |
| DIFFUSION | 7.1 | 28.3 | 30.7 | - |
| DAVINCI002 | 4.9 | 27.0 | 26.6 | 56.8 |
| DAVINCI003 | 4.6 | 25.8 | 10.7 | 57.1 |
| CODEX | 13.7 | 39.6 | 81.8 | 64.9 |
| (a) Abductive reasoning. | | | | |
| BLEU4 ROUGEL BERTScore | | | |
|-------------------------------|------|------|------|
| DELOREAN | 21.4 | 40.7 | 63.4 |
| CGMH | 41.3 | - | 73.8 |
| EDUCAT | 44.1 | - | 74.1 |
| DAVINCI002 | 49.0 | 54.7 | 73.0 |
| DAVINCI003 | 30.6 | 45.2 | 69.4 |
| CODEX | 66.8 | 70.0 | 82.5 |
| (b) Counterfactual reasoning. | | | |
| CODEX | Neutral | DAVINCI002 | |
|--------------------------------------------|-----------|--------------|-------|
| Abductive Reasoning Coherence with Premise | 34% | 48.5% | 17.5% |
| Coherence with Ending | 32% | 42.5% | 25.5% |
| Overall Coherence | 40% | 38% | 22% |
| Counterfactual Reasoning Coherence | 36.5% | 39.5% | 24% |
| Preservation | 47.5% | 39.5% | 13% |
Table 2: Human evaluation of comparing CODEX and DAVINCI002.
## Leaving The Target Event Function At The Last.
Abductive Reasoning. Abductive reasoning requires models to generate a plausible hypothesis H given the observations: premise P and ending E. The chronological order of these three events is P → H → E, and the hypothesis causes the ending to occur.
In Figure 2, we regard the task definition as an instruction and place it as a comment at the beginning of the prompt. The causal structure is represented in the main function like: executing the premise, and if the hypothesis is met, executing the ending2.
The content of each event is presented as a comment of its function. The hypothesis function is placed at the last, leaving for models to complete.
The generation process stops with a line break.
Counterfactual Reasoning. Counterfactual reasoning aims to rewrite a story under a counterfactual condition. As in Figure 1, the input consists of four parts: the premise P, the initial context C1, the original ending E1, and the counterfactual context C2. Models are asked to generate the counterfactual ending E2 that *minimally* modifies the original ending E1 and is coherent with the counterfactual context C2.
The causal relationships are represented with the if-elif structure. The premise P is executed first, and then if the initial context C1 is met, the original ending E1 is executed; otherwise, if the counterfactual context C2 is met, the counterfactual ending E2 will be executed. For ease of exposition, we call the context hypothesis as well, being consistent with the former task. The event contents are also written as comments for event functions. We use \#
end to mark the finish of the ending.
## 3 Evaluation
Datasets. We experiment on the ART dataset (Bhagavatula et al., 2019) for the evaluation of abductive reasoning, and the TimeTravel dataset (Qin et al.,
2019) for counterfactual reasoning, with 3,561 and 1,871 test instances, respectively.
Models. We experiment with CODEX (Chen et al., 2021), trained on a blend of code and text, as the Code-LLM. The specific version is code-davinci-002. We compare with two LLMs:
the latest versions of GPT-3 (Brown et al., 2020)
text-davinci-002 and text-davinci-003 (referred to as DAVINCI002 and DAVINCI003). Both of them originate from CODEX and are tuned with instructions. We follow OpenAI's default settings in CODEX and DAVINCI decoding, and the text prompts for DAVINCI are in Figure A.1.
We also compare with previous unsupervised methods on these tasks, including DELOREAN (Qin et al., 2020), COLD (Qin et al., 2022), DIFFU-SION (Li et al., 2022), CGMH (Miao et al., 2019),
and EDUCAT (Chen et al., 2022a)
3. Appendix A.3 3All these methods except DIFFUSION use GPT-2 (Radford et al., 2019) as the base model, and the model size ranges from medium to XL.
| BLEU4 ROUGEL CIDEr BERTScore | BLEU4 ROUGEL BERTScore | | | |
|--------------------------------|--------------------------|------|------|------|
| CODEXtext | 11.7 | 37.5 | 78.5 | 62.5 |
| CODEXcode | 13.7 | 39.6 | 81.8 | 64.9 |
| CODEX∗ code | 16.5 | 42.0 | 91.6 | 66.3 |
| DAVINCItext | 4.9 | 27.0 | 26.6 | 56.8 |
| DAVINCIcode | 6.7 | 31.1 | 46.2 | 59.9 |
| DAVINCI∗ code | 9.0 | 35.0 | 64.0 | 62.2 |
| (a) Abductive reasoning. | CODEXtext | 55.1 | 61.3 | 77.8 |
| CODEXcode | 66.8 | 70.0 | 82.5 | |
| CODEX∗ code | 73.3 | 74.7 | 85.3 | |
| DAVINCItext | 49.0 | 54.7 | 73.0 | |
| DAVINCIcode | 40.4 | 48.5 | 70.5 | |
| DAVINCI∗ code | 43.7 | 52.0 | 72.8 | |
| (b) Counterfactual reasoning. | | | | |
## Provides A Brief Introduction Of These Methods.
Automatic Evaluation. We use the following automatic evaluation metrics: BLEU4 (Papineni et al., 2002), ROUGEL (Lin, 2004), CIDEr (Vedantam et al., 2015) and BERTScore (Zhang et al.,
2019) based on BERT-base for abductive reasoning; BLEU4, ROUGEL and BERTScore for counterfactual reasoning.
Table 1 reports the automatic evaluation results in the zero-shot setting. CODEX significantly outperforms previous methods and DAVINCI on both tasks (with significance level α = 0.01), exhibiting strong causal reasoning ability. Although the two DAVINCI models are based on CODEX, their causal reasoning abilities may be weakened during instruction tuning, and this phenomenon is called alignment tax (Ouyang et al., 2022). DAVINCI003 underperforms DAVINCI002 on most metrics, probably because it tends to generate longer and more discursive outputs, which do not comply with the tasks.
Human Evaluation. We conduct pairwise comparison between CODEX and DAVINCI002 on 100 test examples. Annotators are asked to choose the better output given the task requirements. For abductive reasoning, the outputs are rated from three aspects: coherence with the premise, coherence with the ending, and the overall coherence. For counterfactual reasoning, the outputs are rated from coherence with the context and the extent of preserving the original ending. Each example is rated by at least two annotators, and the average interrater reliability is 0.64.
The results are shown in Table 2. CODEX outperforms DAVINCI002 in all aspects. It better considers the context in generation, and is able to preserve the original content in counterfactual reasoning.
Contributions of the Model and the Prompt. We exchange the prompts of code and text models, to measure the contributions of the model and the prompt. The results are in Table 3. We find that CODEX performs better with the code prompt, as the code prompt clearly describes the causal relation between events. Code prompts benefit the text model DAVINCI002 on abductive reasoning, but have negative impacts on counterfactual reasoning. A possible reason is that the causal structure in counterfactual reasoning is more complicated, leading to a more complex code which is harder for text models to understand.
## 4 What Are Crucial In Code Prompts?
To paint a better picture of the key points in the code prompts, we intervene on the prompts from four aspects and measure the influences of the interventions. The four aspects we select are information, structure, *format*, and *language*. The former two, the prior information provided and the programming structure of functions, are contentrelated; the latter two, the code format and programming languages, are form-related. An ideal model should rely on the content and be insensitive to form perturbations. The interventions are described below, with examples in Figure A.2.
Information. We study two types of prior information: task instructions and function names. In No Instruction, we remove the task instruction from the prompts. In *Function Name Perturbation*, we replace original function names with anonymous functionX. For example, we replace premise() and hypothesis() in Figure 2 with functionA() and functionB(), respectively. It eliminates the information in function names and only allows models to learn the event relations from programming structures.
Structure. The first way to intervene in the programming structure is to convert the conditional structures into sequential structures, referred to as *Sequential Structure*. The events are executed sequentially, like premise(), hypothesis(),
| BLEU4 | ROUGEL | CIDEr | BERTScore | | |
|----------------------|----------------------------|---------|-------------|------|------|
| CODEX | 13.7 | 39.6 | 81.8 | 64.9 | |
| No Instruction | 12.1 | 37.4 | 73.8 | 62.9 | |
| Information | Function Name Perturbation | 15.1 | 39.1 | 77.8 | 64.6 |
| Sequential Structure | 9.6 | 36.8 | 72.0 | 63.5 | |
| Structure | Disruption | 7.9 | 30.3 | 49.8 | 58.5 |
| Class | 16.0 | 41.0 | 87.4 | 65.8 | |
| Format | Print | 13.8 | 39.4 | 82.0 | 65.0 |
| Return | 13.0 | 40.3 | 83.4 | 65.5 | |
| Java | 16.5 | 42.0 | 91.6 | 66.3 | |
| Language | C | 15.5 | 41.0 | 88.0 | 65.6 |
ending() in abductive reasoning. In the second way called *Disruption*, we randomly disrupt the positions of the functions in the conditional structure.
For instance, if hypothesis(): ending() can be disrupted into if ending(): hypothesis().
We also apply the function name perturbation in disruption to eliminate the impact of function names.
Format. We test three formats besides the original one: Class, *Print* and *Return*. The first one converts the original code into a class. We define the programming structure in the __init__ method, and move the event functions into the class. In *Print*,
we represent the content of events as a string and print it in the function body, like def premise():
print("The Smiths ..."). And in *Return*, the string is the return value of event functions.
Language. We also convert the original Python programs into two other languages, *Java* and C, to evaluate the influence of programming languages.
Intervention Results. We evaluate the influence of interventions on abductive reasoning in Table 4, and the results on counterfactual reasoning are in Table A.2. The absence of prior information causes a small decrease in results. Even if the instruction or function names are not provided, CODEX is able to perform causal reasoning based on conditional statements. Changes in the programming structure have a larger negative impact. Comparing *Function* Name Perturbation and *Disruption*, the alteration of two characters (swap B and C in functionB and functionC) results in a major drop, showing that the conditional structure that reasonably depicts the relations between events is crucial in CODEX
reasoning.
CODEX is quite robust towards format and language changes. Settings like *Class* and *Java* are even better than the original one, revealing that the performance can be further improved with delicate prompt engineering.
## 5 Conclusion
We investigate the causal reasoning ability of CodeLLMs. With code prompts of conditional statements, Code-LLMs achieve great performance in abductive and counterfactual reasoning, outperforming text-only LLMs significantly. Our study on different aspects of code prompts shows that providing a reasonable causal structure in code can help generate plausible outputs, and Code-LLMs are robust towards format perturbations.
## Limitations
Language Our experiments are conducted on English, as all Code-LLMs we know are pre-trained on English programming languages. Fundamentally, most popular programming languages are English-based, but international programming languages (which work in multiple languages) like Scratch, or non-English-based programming languages like Qalb also emerge. We look forward to the appearance of Code-LLMs on these programming languages.
Prompt Engineering We manually design the prompts without prompt engineering techniques such as prompt search. The searched prompts may outperform the ones we used, but our experiments on interventions show that CODEX is fairly robust towards format perturbations.
Model LLMs update quickly. From the time we submitted the paper until now, several new LLMs have been released. We try to compare their performance with ours. We select three new LLMs:
CHATGPT, GPT-4 (OpenAI, 2023), and BARD4, and feed the text prompts to them. Because we do not have access to some of their APIs, we only experiment on a subset of 100 instances and report 4Experiments are done with models updated to May 10, 2023.
the results in Table 5. CODEX outperforms all these models in the automatic evaluation, but part of the reason is that these models provide more detailed outputs than the reference. We provide a case study in Appendix A.5.
Since CODEX is no longer available to the public, we provide CODEX generation results in our GitHub repository. We also looked for alternatives and tried two open source Code-LLMs CODEGEN (Nijkamp et al., 2022) (version CodeGen-16BMono) and STARCODER (Li et al., 2023) with our code prompts. However, as shown in the case study, their performance is not comparable to CODEX, probably because they are more than ten times smaller in size.
| BLEU4 ROUGEL BERTScore | | | |
|-------------------------------|------|------|------|
| CODEX | 68.4 | 70.3 | 84.7 |
| CHATGPT | 15.3 | 34.7 | 70.0 |
| GPT-4 | 38.5 | 55.5 | 78.6 |
| BARD | 12.1 | 22.0 | 62.1 |
| (b) Counterfactual reasoning. | | | |
## Ethics Statement
Our work is based on off-the-shelf LLMs. As the results may inherit the underlying bias of LLMs, they cannot be used individually without human supervision. The Codex API was free when the experiments were conducted, and the Davinci APIs cost $0.02 per thousand tokens. We conduct all the experiments with less than $100. We recruit annotators for human evaluation from friends and colleagues of authors. All annotators are fairly paid with more than $10 per hour.
## Acknowledgments
This work is supported in part by NSFC
(62161160339). We would like to thank the anonymous reviewers for the helpful discussions and suggestions. For any correspondence, please contact Yansong Feng.
## References
Chandra Bhagavatula, Ronan Le Bras, Chaitanya Malaviya, Keisuke Sakaguchi, Ari Holtzman, Hannah Rashkin, Doug Downey, Wen-tau Yih, and Yejin Choi. 2019. Abductive commonsense reasoning. In International Conference on Learning Representations.
Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. *Advances in neural information processing* systems, 33:1877–1901.
Du-Seong Chang and Key-Sun Choi. 2005. Causal relation extraction using cue phrase and lexical pair probabilities. In *International Conference on Natural* Language Processing, pages 61–70. Springer.
Jiangjie Chen, Chun Gan, Sijie Cheng, Hao Zhou, Yanghua Xiao, and Lei Li. 2022a. Unsupervised editing for counterfactual stories. In *Proceedings of* the AAAI Conference on Artificial Intelligence, volume 36, pages 10473–10481.
Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde de Oliveira Pinto, Jared Kaplan, Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, et al. 2021. Evaluating large language models trained on code. *arXiv preprint* arXiv:2107.03374.
Wenhu Chen, Xueguang Ma, Xinyi Wang, and William W Cohen. 2022b. Program of thoughts prompting: Disentangling computation from reasoning for numerical reasoning tasks. arXiv preprint arXiv:2211.12588.
Li Du, Xiao Ding, Kai Xiong, Ting Liu, and Bing Qin.
2021. Excar: Event graph knowledge enhanced explainable causal reasoning. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 2354–2363.
Luyu Gao, Aman Madaan, Shuyan Zhou, Uri Alon, Pengfei Liu, Yiming Yang, Jamie Callan, and Graham Neubig. 2022. Pal: Program-aided language models. *arXiv preprint arXiv:2211.10435*.
Andrew S Gordon, Cosmin A Bejan, and Kenji Sagae.
2011. Commonsense causal reasoning using millions of personal stories. In *Twenty-Fifth AAAI Conference* on Artificial Intelligence.
York Hagmayer, Steven A Sloman, David A Lagnado, and Michael R Waldmann. 2007. Causal reasoning through intervention. Causal learning: Psychology, philosophy, and computation, pages 86–100.
| BLEU4 | ROUGEL | CIDEr | BERTScore | |
|--------------------------|----------|---------|-------------|------|
| CODEX | 15.0 | 39.8 | 82.2 | 67.8 |
| CHATGPT | 5.1 | 26.9 | 17.5 | 62.6 |
| GPT-4 | 6.3 | 29.2 | 27.8 | 65.1 |
| BARD | 5.7 | 31.5 | 14.8 | 66.0 |
| (a) Abductive reasoning. | | | | |
Yushi Hu, Chia-Hsuan Lee, Tianbao Xie, Tao Yu, Noah A. Smith, and Mari Ostendorf. 2022. Incontext learning for few-shot dialogue state tracking.
ArXiv, abs/2203.08568.
Pengfei Li and Kezhi Mao. 2019. Knowledge-oriented convolutional neural network for causal relation extraction from natural language texts. *Expert Systems* with Applications, 115:512–523.
Raymond Li, Loubna Ben Allal, Yangtian Zi, Niklas Muennighoff, Denis Kocetkov, Chenghao Mou, Marc Marone, Christopher Akiki, Jia Li, Jenny Chim, et al.
2023. Starcoder: may the source be with you! *arXiv* preprint arXiv:2305.06161.
Xiang Lisa Li, John Thickstun, Ishaan Gulrajani, Percy Liang, and Tatsunori B Hashimoto. 2022. Diffusionlm improves controllable text generation. *arXiv* preprint arXiv:2205.14217.
Zhongyang Li, Tongfei Chen, and Benjamin Van Durme.
2019. Learning to rank for plausible plausibility. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4818–
4823.
Chin-Yew Lin. 2004. Rouge: A package for automatic evaluation of summaries. In Text summarization branches out, pages 74–81.
Xiao Liu, Da Yin, Yansong Feng, Yuting Wu, and Dongyan Zhao. 2021. Everything has a cause: Leveraging causal inference in legal text analysis. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1928–1941.
Aman Madaan, Shuyan Zhou, Uri Alon, Yiming Yang, and Graham Neubig. 2022. Language models of code are few-shot commonsense learners. arXiv preprint arXiv:2210.07128.
Ning Miao, Hao Zhou, Lili Mou, Rui Yan, and Lei Li. 2019. Cgmh: Constrained sentence generation by metropolis-hastings sampling. In *Proceedings* of the AAAI Conference on Artificial Intelligence, volume 33, pages 6834–6842.
Erik Nijkamp, Bo Pang, Hiroaki Hayashi, Lifu Tu, Huan Wang, Yingbo Zhou, Silvio Savarese, and Caiming Xiong. 2022. Codegen: An open large language model for code with multi-turn program synthesis.
arXiv preprint arXiv:2203.13474.
Valentina Nisi and Mads Haahr. 2006. Weird view:
interactive multilinear narratives and real-life community stories. *Crossings*, 2:27.
OpenAI. 2023. Gpt-4 technical report. *arXiv*.
Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al.
2022. Training language models to follow instructions with human feedback. *arXiv preprint* arXiv:2203.02155.
Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th annual meeting of the Association for Computational Linguistics, pages 311–318.
Lianhui Qin, Antoine Bosselut, Ari Holtzman, Chandra Bhagavatula, Elizabeth Clark, and Yejin Choi. 2019.
Counterfactual story reasoning and generation. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5043–5053.
Lianhui Qin, Vered Shwartz, Peter West, Chandra Bhagavatula, Jena D Hwang, Ronan Le Bras, Antoine Bosselut, and Yejin Choi. 2020. Back to the future:
Unsupervised backprop-based decoding for counterfactual and abductive commonsense reasoning. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP),
pages 794–805.
Lianhui Qin, Sean Welleck, Daniel Khashabi, and Yejin Choi. 2022. Cold decoding: Energy-based constrained text generation with langevin dynamics.
arXiv preprint arXiv:2202.11705.
Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. 2019. Language models are unsupervised multitask learners. OpenAI
blog, 1(8):9.
Bryan Rink, Cosmin Adrian Bejan, and Sanda Harabagiu. 2010. Learning textual graph patterns to detect causal event relations. In *Twenty-Third International FLAIRS Conference*.
Steven Sloman. 2005. *Causal models: How people* think about the world and its alternatives. Oxford University Press.
Ramakrishna Vedantam, C Lawrence Zitnick, and Devi Parikh. 2015. Cider: Consensus-based image description evaluation. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 4566–4575.
Jiashuo Wang, Yi Cheng, and Wenjie Li. 2022a.
Care: Causality reasoning for empathetic responses by conditional graph generation. *arXiv preprint* arXiv:2211.00255.
Xingyao Wang, Sha Li, and Heng Ji. 2022b.
Code4struct: Code generation for few-shot structured prediction from natural language. arXiv preprint arXiv:2210.12810.
Zirui Wang, Adams Wei Yu, Orhan Firat, and Yuan Cao.
2021. Towards zero-label language learning. arXiv preprint arXiv:2109.09193.
Yuhuai Wu, Albert Q Jiang, Wenda Li, Markus N
Rabe, Charles Staats, Mateja Jamnik, and Christian Szegedy. 2022. Autoformalization with large language models. *arXiv preprint arXiv:2205.12615*.
Frank F Xu, Uri Alon, Graham Neubig, and Vincent Josua Hellendoorn. 2022. A systematic evaluation of large language models of code. In Proceedings of the 6th ACM SIGPLAN International Symposium on Machine Programming, pages 1–10.
Jiayao Zhang, Hongming Zhang, Weijie Su, and Dan Roth. 2022. Rock: Causal inference principles for reasoning about commonsense causality. In *International Conference on Machine Learning*, pages 26750–26771. PMLR.
Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q Weinberger, and Yoav Artzi. 2019. Bertscore: Evaluating text generation with bert. In *International Conference on Learning Representations*.
## A Appendix A.1 Related Work
Causal Reasoning There is a growing interest in the NLP community to equip models with causal reasoning abilities. Chang and Choi (2005); Gordon et al. (2011) measure causality between words and phrases with statistical methods, Rink et al.
(2010); Li and Mao (2019) use explicit semantic cues, and Liu et al. (2021); Zhang et al. (2022) discover causal relations with causal inference methods like propensity score matching. Li et al. (2019)
finetune LLMs on causal event corpus, and Du et al.
(2021); Wang et al. (2022a) augment LLMs with causal knowledge graphs. Contrast to them, we explore the causal reasoning abilities acquired by Code-LLMs in pre-training.
Applying Code-LLMs to Natural Language Tasks With the recent development of CodeLLMs, several works attempt to solve natural language tasks with code models. They mainly focus on two areas: numerical reasoning and structural prediction. Gao et al. (2022); Chen et al. (2022b);
Wu et al. (2022) apply Code-LLMs to numerical reasoning. They generate programs with CodeLLMs and feed the programs into an external interpreter to derive the answer. Madaan et al. (2022);
Wang et al. (2022b) leverage the text-to-structure translation ability of Code-LLMs to perform structural prediction tasks. They ask models to generate structures in the form of code, and convert the generated code into the task output format. In addition, Hu et al. (2022) takes advantages of CodeLLMs on text-to-SQL generation. Different from them, we leverage the causal reasoning ability of Code-LLMs, and ask them to generate natural language events given the causal structure.
## A.2 Prompts
Figure A.1 demonstrates the prompts of probing DAVINCI. Specifically, the language conversion is made automatically by CODEX with the instruction
\# python to java/c. Figure A.2 shows the interventions on code prompts for abductive reasoning.
## A.3 Models For Comparison
We compare with previous unsupervised methods on the two tasks, including DELOREAN (Qin et al., 2020), COLD (Qin et al., 2022), and DIFFU-SION (Li et al., 2022) on abductive reasoning; and CGMH (Miao et al., 2019), EDUCAT (Chen et al.,
2022a), DELOREAN, and COLD on counterfactual reasoning. Among them, DELOREAN and COLD
are constraint-based models. They regard the task requirements as constraints (for example, the generated text should be consistent with the premise, and coherent with the ending in the abductive reasoning task), and iteratively update text representation to meet the constraints. CGMH and EDUCAT are editing-based models targeted for counterfactual reasoning. They start from the original ending and edit it to meet the counterfactual context. DIFFU-SION builds a controllable LM based on continuous diffusions to perform control tasks including abductive reasoning.
## A.4 Additional Results
| Min-Edit | BERTScore | |
|------------|-------------|------|
| DELOREAN | 52.9 | 73.7 |
| COLD | 56.8 | 73.5 |
| CODEX | 58.0 | 79.5 |
![8_image_0.png](8_image_0.png)
First-Sentence Setting of Counterfactual Reasoning Endings in the original counterfactual reasoning data TimeTravel are of three sentences.
Due to the computation constraint of COLD (Qin et al., 2022), it is evaluated in a first-sentence setting: only the first sentence of the original ending is used, and models are asked to generate a one-sentence counterfactual ending. We conduct experiments in the first-sentence setting with the metrics used in Qin et al. (2022). As shown in Table A.1, CODEX outperforms previous methods in this setting.
Intervention on Counterfactual Reasoning Table A.2 demonstrates the intervention results on counterfactual reasoning. The observations are similar to those in the abductive reasoning task: changes in the programming structure affect CODEX's performance largely, changes in the information affect less, and CODEX is robust towards format and language changes.
One-shot Setting We also conduct experiments in the one-shot setting. Models are shown with one demonstration example in the in-context learning manner, and the example is identical among the models. As shown in Table A.3, both DAVINCI002 and CODEX are better than in the
![9_image_1.png](9_image_1.png)
![9_image_0.png](9_image_0.png)
Figure A.1: Example text prompts of abductive reasoning and counterfactual reasoning.
CODEX 66.8 70.0 82.5
| No Instruction | 55.4 | 60.1 | 77.0 | |
|----------------------|----------------------------|--------|--------|------|
| Information | Function Name Perturbation | 65.4 | 69.0 | 82.2 |
| Sequential Structure | 43.4 | 50.2 | 68.2 | |
| Structure | Disruption | 16.0 | 23.5 | 55.2 |
| Format | Print | 73.3 | 74.7 | 85.3 |
| Java | 71.1 | 73.5 | 84.5 | |
| Language | C | 71.9 | 74.2 | 85.0 |
| BLEU4 | ROUGEL | BERTScore |
|---------|----------|-------------|
zero-shot setting, while CODEX still largely outperforms DAVINCI002, showing that the advantage of CODEX is robust across different settings.
## A.5 Case Study
We randomly select some generation examples and demonstrate them in Table A.4. Comparing CODEX and DAVINCI, CODEX generations are more coherent with the context, while DAVINCI sometimes cannot take into account the premise.
CODEX also understands the task instruction well and better preserves the original ending in counterfactual reasoning. Generations of more powerful LLMs like CHATGPT and GPT-4 are coherent with the context, but they add much detail and barely keep the original ending. Although open source Code-LLMs like CODEGEN and STARCODER can follow the code prompts and generate sentences in the required format, most of their outputs are inconsistent with the premise and the ending.
| BLEU4 | ROUGEL | CIDEr | BERTScore | BLEU4 ROUGEL BERTScore |
|-------------------------------|------------|---------|-------------|--------------------------|
| DAVINCI002 | 8.2 | 33.5 | 55.9 | 61.7 |
| CODEX | 17.9 | 42.3 | 91.7 | 67.1 |
| (a) Abductive reasoning. | DAVINCI002 | 53.5 | 58.8 | 76.0 |
| CODEX | 74.3 | 76.2 | 86.1 | |
| (b) Counterfactual reasoning. | | | | |
Table A.3: Evaluation results in the one-shot setting (%).
| Abductive Reasoning Premise: Angie went to a cocktail party hosted by her best friend. Ending: Angie decided to be quiet about what she overheard. Hypothesis [CODEX]: Angie overheard her best friend talking about her behind her back. Hypothesis [DAVINCI002]: Angie decided to be quiet about what she overheard because she didn't want to start any drama |
|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
Premise: Angie went to a cocktail party hosted by her best friend.
Ending: Angie decided to be quiet about what she overheard.
Hypothesis [C**ODEX**]: Angie overheard her best friend talking about her behind her back.
Hypothesis [D**AVINCI**002]: Angie decided to be quiet about what she overheard because she didn't want to start any drama with her best friend.
Hypothesis [D**AVINCI**003]: Angie overheard a conversation between two people at the party that made her uncomfortable, so she decided to stay quiet and not tell anyone what she heard.
Hypothesis [CHAT**GPT]:** Angie overheard something that she felt was better left unsaid.
Hypothesis [GPT-4]: Angie overheard a sensitive conversation that she felt was not her place to share.
Hypothesis [BARD]: Angie overheard something at the party that she did not want to share with her best friend.
Hypothesis [CODEGEN]: Angie's best friend was a bartender. Hypothesis [STARC**ODER**]: If judge is cheating on Annie, then judge was correct in trolling an English major.
Counterfactual Reasoning Premise: My grandpa took my brothers and I fishing in the summer. Initial Event: We went to a small pond close to his house.
Original Ending: He set up our fishing poles and we began to fish. It was so boring when nobody caught any fish. We were disappointed to go home empty handed that day.
Counterfactual Event: We caught four huge trouts within the first hour.
Counterfactual Ending [C**ODEX**]: He set up our fishing poles and we began to fish. It was so exciting when we caught four huge trouts within the first hour. We were happy to go home with four trouts that day.
Counterfactual Ending [D**AVINCI**002]: We were so excited to have caught four trouts within the first hour! We were proud to show our grandpa our catches.
Counterfactual Ending [D**AVINCI**003]: He set up our fishing poles and we began to fish. To our pleasant surprise, we caught four huge trouts within the first hour. We were delighted to take home our catches that day.
Counterfactual Ending [CHAT**GPT]:** As soon as we cast our lines, we felt a tug and reeled in four huge trouts within the first hour. We couldn't believe our luck and spent the rest of the day catching even more fish. Grandpa was proud of us and we were thrilled to bring home a big haul for dinner. It was a memorable day of fishing that we would never forget.
Counterfactual Ending [GPT-4]: He set up our fishing poles and we began to fish. To our surprise, we caught four huge trouts within the first hour. We were thrilled with our success and couldn't wait to show off our catch when we got home.
Counterfactual Ending [BARD]: We caught four huge trouts within the first hour! We were so excited that we forgot all about being bored. We had a great time fishing with grandpa that day.
Counterfactual Ending [CODEGEN]: He set up our fishing poles and we began to fish. It was so boring when nobody caught any fish. We were disappointed to go home empty handed that day. We caught four huge trouts within the first hour.
Counterfactual Ending [STARC**ODER**]: We were finally happy to catch all of the trouts. And while he is a very kind man, you will never see him again. We will always love our old family in China better than ever before.
Table A.4: Examples of model generations.
| 1. Information: No instruction | 6. Format: Print # task: generate a plausible explanatory hypothesis given the premise and the ending def main(): premise() if hypothesis(): ending() def premise(): print("The Smiths were having holidays done of the children.") def ending(): print("Ty's face lit up as he ran to the new toy, happily posing for photos.") def hypothesis(): print(" 7. Format: Return # task: generate a plausible explanatory hypothesis given the premise and the ending def main(): premise() if hypothesis(): ending() def premise(): return("The Smiths were having holidays done of the children.") def ending(): return("Ty's face lit up as he ran to the new toy, happily posing for photos.") def hypothesis(): return(" |
|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| def main(): premise() if hypothesis(): ending() def premise(): # The Smiths were having holidays done of the children. def ending(): # Ty's face lit up as he ran to the new toy, happily posing for photos. def hypothesis(): # 2. Information: Function Name Perturbation # task: generate a plausible explanatory hypothesis given the premise and the ending def main(): functionA() if functionB(): functionC() def functionA(): # The Smiths were having holidays done of the children. def functionC(): # Ty's face lit up as he ran to the new toy, happily posing for photos. def functionB(): # 3. Structure: Sequential Structure # task: generate a plausible explanatory hypothesis given the premise and the ending def main(): premise() hypothesis() ending() def premise(): # The Smiths were having holidays done of the children. def ending(): # Ty's face lit up as he ran to the new toy, happily posing for photos. def hypothesis(): # 8. Language: Java // task: generate a plausible explanatory hypothesis given the premise and the ending public class Story { public static void main(String[] args) { premise(); if (hypothesis()) { ending(); } } public static void premise() { // The Smiths were having holidays done of the children. } public static void ending() { // Ty's face lit up as he ran to the new toy, happily posing for photos. } public static boolean hypothesis() { // 4. Structure: Disruption # task: generate a plausible explanatory hypothesis given the premise and the ending def main(): functionA() if functionB(): functionC() def functionA(): # The Smiths were having holidays done of the children. def functionB(): # Ty's face lit up as he ran to the new toy, happily posing for photos. def functionC(): # 9. Language: C // task: generate a plausible explanatory hypothesis given the premise and the ending int main() { premise(); if (hypothesis()) { ending(); } } void premise() { // The Smiths were having holidays done of the children. } void ending() { // Ty's face lit up as he ran to the new toy, happily posing for photos. } int hypothesis() { // 5. Format: Class # task: generate a plausible explanatory hypothesis given the premise and the ending class Story: def __init__(self): self.premise() if self.hypothesis() self.ending() def premise(self): # The Smiths were having holidays done of the children. def ending(self): # Ty's face lit up as he ran to the new toy, happily posing for photos. def hypothesis(self): # | |
Figure A.2: Examples of code prompt interventions in abductive reasoning.
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
✓ A1. Did you describe the limitations of your work?
Limitation Section
✓ A2. Did you discuss any potential risks of your work?
Ethics Statement
✓ A3. Do the abstract and introduction summarize the paper's main claims?
1. Introduction
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✓ **Did You Use Or Create Scientific Artifacts?** 3. Evaluation
✓ B1. Did you cite the creators of artifacts you used?
3. Evaluation
B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
Not applicable. In the supplementary data
✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
3. Evaluation B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
Not applicable. The data we use is created and checked by previous work.
✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
Limitation Section
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
3. Evaluation
## C ✓ **Did You Run Computational Experiments?** 3 & 4
C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used? Not applicable. The parameters and computational budget are not public available.
The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance.
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Limitation Section
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
3. Evaluation
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
In the supplementary code D ✓ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Appendix A.4
✓ D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
The instructions are briefly introduced in Appendix A.4
✓ D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? Ethics Statement
✓ D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
Appendix A.4 D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
Not applicable. Ethics review is not required.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
Not applicable. Ethics Statement |
yang-etal-2023-learning-leverage | Learning to Leverage High-Order Medical Knowledge Graph for Joint Entity and Relation Extraction | https://aclanthology.org/2023.findings-acl.575 | Automatic medical entity and relation extraction is essential for daily electronic medical record (EMR) analysis, and has attracted a lot of academic attention. Tremendous progress has been made in recent years. However, medical terms are difficult to understand, and their relations are more complicated than general ones. Based on this situation, domain knowledge gives better background and contexts for medical terms. Despite the benefits of medical domain knowledge, the utilization way of it for joint entity and relation extraction is inadequate. To foster this line of research, in this work, we propose to leverage the medical knowledge graph for extracting entities and relations for Chinese Medical Texts in a collective way. Specifically, we propose to construct a high-order heterogeneous graph based on medical knowledge graph, which is linked to the entity mentions in the text. In this way, neighbors from the high-order heterogeneous graph can pass the message to each other for better global context representations. Our experiments on real Chinese Medical Texts show that our method is more effective than state-of-the-art methods. | # Learning To Leverage High-Order Medical Knowledge Graph For Joint Entity And Relation Extraction
Zhe Yang,Yi Huang∗
, and Junlan Feng∗
JIUTIAN Team, China Mobile Research Institute
{yangzhe,huangyi,fengjunlan}@chinamobile.com
## Abstract
Automatic medical entity and relation extraction is essential for daily electronic medical record (EMR) analysis, and has attracted a lot of academic attention. Tremendous progress has been made in recent years. However, medical terms are difficult to understand, and their relations are more complicated than general ones. Based on this situation, domain knowledge gives better background and contexts for medical terms. Despite the benefits of medical domain knowledge, the utilization way of it for joint entity and relation extraction is inadequate. To foster this line of research, in this work, we propose to leverage the medical knowledge graph for extracting entities and relations for Chinese Medical Texts in a collective way. Specifically, we propose to construct a high-order heterogeneous graph based on medical knowledge graph, which is linked to the entity mentions in the text. In this way, neighbors from the high-order heterogeneous graph can pass the message to each other for better global context representations. Our experiments on real Chinese Medical Texts show that our method is more effective than state-ofthe-art methods.
## 1 Introduction
Medical text, e.g., electronic medical record
(EMR), has been produced at a rapid speed and massive volume every day. Without any structured organization, this enormous volume of medical information is difficult to be read through by humans in a short time (Shang et al., 2021). Due to this situation, many researchers have recently paid great attention to the joint entity and relation extraction in the medical domain (Lai et al., 2021; Verlinden et al., 2021).
The challenge of joint entity and relation extraction in medical domain is that medical terms are usually difficult to understand due to the requirement of medical domain knowledge, especially for abbreviations of medical terms in the medical text.
Even worse, relations between medical entities become even more complicated. Therefore, medical domain knowledge that could provide meaningful contexts and backgrounds is essential for the better extraction of medical entities and relations. Despite the advantages of medical domain knowledge, most previous works fail to use medical domain knowledge (Li et al., 2017; Xue et al., 2019; Pang et al., 2021; Luo et al., 2020). They solely rely on the local information in the medical text to extract entities and relations with language model (LM),
which is insufficient for incomprehensible medical terms and complicated relations between entities.
Some recent works utilize medical knowledge for joint entity and relation extraction (Lai et al.,
2021; Verlinden et al., 2021). However, both Lai et al. (2021) and Verlinden et al. (2021) simply align entity representation (node representation)
from knowledge graph to local texts and fail to explicitly introduce the complicated relation contexts (edge representation) in the medical knowledge graph to enhance the deep representations of their involved entities. Huang et al. (2020)
propose graph edge-conditioned attention network
(GEANet) which integrates initial static relation embedding into attention mechanism for entity representation enhancement in medical knowledge graph. Nonetheless, it leaves out relational update during knowledge graph training process. Battaglia et al. (2018) propound graph network (GN) framework to update node and edge features iteratively within a heterogeneous graph. However, the edge representation update is based on the sender and receiver nodes information it links, which will bring about fluctuation since the amount of nodes is far outweighed that of edge types.
Therefore, we propose a method to fix these issues by providing additional relation contexts from
∗Corresponding authors
![1_image_0.png](1_image_0.png)
medical knowledge graph to enrich the deep representations of entity mentions. Specifically, we propose to construct a high-order heterogeneous graph, e.g., Fig. 1, to provide meaningful global contexts for its linked entity mentions. In Fig. 1, we denote Ge as the standard first-order graph with entities as nodes and relations as edges, and represent Gr as Ge's converted second-order graph with relations as nodes and entity types as edges. For every relation pair of an entity in Ge, e.g., r3 and r2, we link an edge, i.e., the entity type te1
, in Gr connecting them. In this way, both message passing of entities via different relations and message propagation of relations via different entity types can be well diffused in the global graph structure. After extracting the high-order heterogeneous graph from the medical knowledge graph, we fuse the entity and relation representations in the global context obtained from the high-order heterogeneous graph with the local information extracted from the medical text.
To summarize, our contributions are:
- We propose a high-order graph modeling method for knowledge fusion, which treats text related sub-graph as the first-order graph with entities as nodes and its converted graph as the secondorder one with relations as nodes. We update the hidden representations of nodes in the two order graphs separately as the entity/relation representations for knowledge graph.
- We present a knowledge-enhancement method for medical text encoding, which boosts the entity representation of the first-order graph with the feedback of the second-order relation representation.
And it further enhances the encoding of the entity mentions from medical text for joint extraction.
- We have performed substantial experiments against existing methods. Our evaluation results on real medical datasets verify that our method is more effective than state-of-the-art methods.
The rest of this paper is organized as follows.
Section 2 discusses related work. In Section 3, we introduce the typical algorithms for text and graph representation and present the proposed method for knowledge-enhanced joint extraction. Section 4 shows the evaluation results on two datasets with compared to some other advanced methods. We conclude in Section 5.
## 2 Related Work
There are two categories of entity and relation extraction methods: pipeline-based methods and joint extraction methods.
## Pipeline-Based Entity And Relation Extraction
methods: These work usually first extract entities as outputs, then extract relations for the returned entities. The drawback of pipeline-based extraction methods is that the errors of entity extraction may be accumulated when extracting relations for the already returned entities. For example, Zhong and Chen (2021) put forward an extra encoder and fuse the entity type information to enhance entity pair representation during the relation extraction task.
Jointly entity and relation extraction methods: Some recent work extracts entity and relation in a collective way to overcome the accumulated error problem in the pipeline-based methods. There are two kinds of standard extraction methods, which are non-knowledge-enhanced and knowledge-enhanced methods.
Most joint entity and relation extraction methods ignore the domain knowledge. Wang et al. (2018)
utilize a novel graph scheme to solve the problem. Luan et al. (2018) apply multi-task method to optimize entities, relations, and coreference simultaneously.Bekoulis et al. (2018) further propose multi-context based adversarial training method. Luan et al. (2019) utilize dynamic span graphs to form a general framework. Fu et al. (2019) model text as relational graphs for joint extraction. Zhao et al. (2021) model dense cross-modal interactions for joint extraction. Wang and Lu (2020) apply table-sequence encoders to extract jointly. Lin et al.
(2020) apply neural model for information extraction with global features. Recently, Eberts and Ulges (2020) propose a span-based method with transformer pre-training. Further, Ji et al. (2020) apply span-based method with attention-based spanspecific and contextual semantic representations. Moreover, Wei et al. (2020) propose a cascade binary tagging framework for entity and relation extraction. Yan et al. (2021) propose a partition filter network for joint entity and relation extraction. Despite that these works extract entity and relation in a joint way, they are not designed for medical text. While some previous works are designed for medical text, they ignore the help of medical domain knowledge when modeling (e.g., Li et al.,
2017; Xue et al., 2019; Pang et al., 2021; Luo et al.,
2020). Only a few works incorporate medical domain knowledge to enhance the contexts of the medical text for joint entity and relation extraction
(Lai et al., 2021; Verlinden et al., 2021). Among them, Lai et al., 2021 extract entity and relation jointly with knowledge-enhanced collective inference. Verlinden et al. (2021) inject knowledge base information into entity and relation extraction and coreference resolution simultaneously. However, both Lai et al. (2021) and Verlinden et al. (2021)
fail to explicitly incorporate complicated relation contexts for their involved entities. Thus the interaction between entity and relation extraction will not be captured. Our method overcomes this drawback by providing relation contexts when modeling deep representations of entities.
Most of the current work does not use domain knowledge, or only uses the entity context in domain knowledge, and does not include the relation context. While we explicitly model the entity context and relation context as the important context information for entity span, which improves the results of joint extraction.
## 3 Proposed Method
Fig. 2 shows the framework of our method. It first extracts the high-order heterogeneous graph from the medical knowledge graph (Section 3.1),
and learns entity and relation representation from the global context (Section 3.2). After that, we learn the representation of entity mentions from the local context in the medical text (Section 3.3). The fusion of entity representations from both global and local contexts and its corresponding relation representation (Section 3.4) is utilized to extract entities and relations in a collective way (Section 3.5).
## 3.1 High-Order Heterogeneous Graph Extraction
The challenge is that there are multiple relations in the heterogeneous graph, and modeling entities together with their contextual relation information is non-trivial. We propose to construct a high-order heterogeneous graph from the knowledge graph for the medical text, such that the text representation contains additional global knowledge contexts of related entities and relations.
The high-order heterogeneous graph comprises of first-order graph (a text related sub-graph of original knowledge graph) and its converted secondorder graph. Given a piece of medical text expressed as words t = (w0, w1, ..., wi*, ..., w*k) and a domain knowledge graph G = (*V, ε, E*), the goal of high-order heterogeneous graph extraction is to obtain Ge = (Ve, Ee) and Gr = (Vr, Er). Specifically,
$$\begin{array}{c}{{V_{e}=V\cap\{w_{i}\},}}\\ {{(e_{i},r_{e_{i},e_{j}},e_{j})\in\varepsilon,}}\\ {{r_{e_{i},e_{j}}=E_{e}(e_{i},e_{j}),}}\end{array}\qquad\qquad(1)$$
$\in V,\;x\quad\hdots\in E$ .
where vei
, vej ∈ Ve, rei,ej ∈ Ee and ε is the set of all triplets {(eh, reh,et, et)} in knowledge graph.
And
$$\begin{array}{c}{{V_{r}\subseteq E_{e},}}\\ {{E_{r}\subseteq T(V_{e}),}}\\ {{\mathbb{I}(E_{r}(E_{e}(e_{i},e_{j}),E_{e}(e_{l},e_{m}))){=}{\mathbb{I}(e_{i}{=}e_{m}\backslash e_{j}{=}e_{l}),}}}\end{array}$$
where T(.) means the type of an entity. I is the indicator function and I(Er(.)) measures whether the edge Er(.) in second-order graph exists.
Hence, it first searches through the medical knowledge graph to extract all triplets as the firstorder graph pertaining to current medical text, and converts it to the second-order graph with nodes and edges switched. To be detailed, we traverse nodes, i.e., entities, in the first-order graph and collect their in and **out edges**, i.e., relations. Concretely, when it is the head entity in a triplet, we call the corresponding relation the **out relation**,
conversely, the **in relation**. For any in-out relation pair of an entity, we consider relations as nodes for the second-order graph, and the type of the corresponding entity as the edge linking them. As it can be seen in the High-Order Heterogeneous Graph Extraction Module part of Fig. 2, a Chinese entity "鹅口疮(thrush) " is revolved around by relations of " 可能疾病(possible disease) ", "并发 症(complication) " and "传染方式(mode of infection) " in the first-order graph, during which the first two are in relations, and the other is the out relation. For any in-out relation pair, e.g., "并发 症-传染方式", we take them as two nodes in the
![3_image_0.png](3_image_0.png)
converted second-order graph and the type of the entity "鹅口疮", i.e., "疾病(disease) " as the linking edge. As a counter example, "可能疾病" and
"并发症" are all **in relations** for entity "鹅口疮",
therefore, no edge exists to link them. We merely link the in-out relation pair on account of the message flowing direction, i.e., from head entity to tail entity.
This high-order heterogeneous graph we propose is distinguished from the node layer and edge layer in Jiang et al. (2020), which lies in two aspects:
(1) We only consider the in-out relation pairs as edges for second-order graph, which better reflects the direction of information flow, while Jiang et al.
(2020) links all relation pairs.
(2) The edge in second-order graph represents the entity type in our method, however, it remains as the entity by authors of Jiang et al. (2020), where the number of entities is very huge and will not be feasible for GNN to learn.
## 3.2 High-Order Heterogeneous Graph Modeling
Our idea is to propagate messages among both the first-order graph Ge and the converted secondorder graph Gr in Fig. 2, in order to capture the complex global information from knowledge graph.
Hence, entity mentions in the medical text can integrate their local information with the related global
## Contexts For The Better Encoding.
We first propagate message among the standard first-order graph Ge with relational graph convolutional network (RGCN) (Schlichtkrull et al., 2018).
We apply TransE (Bordes et al., 2013) as the initialization of the embedding for each node vi. The embedding of each node vi, i.e., entity, can be updated as:
$$v_{i}^{l+1}=\text{ReLU}(\text{U}^{l}v_{i}^{l}+\sum_{k\in R}\sum_{v_{j}\in N_{i}^{k}}\left(\frac{1}{|N_{i}^{k}|}\text{U}_{k}^{l}v_{j}^{l}\right))\,\tag{3}$$
where v l i is the embedding of the node vi at layer l. Nk iis the set of neighbors of vi under relation k. U
lis the trainable parameter at layer l. U
l k is relation specific weighted parameter.
We further model the global contexts in the converted second-order graph Gr. Different from the first-order graph Ge, the nodes and edges in secondorder graph represent relations and entity types respectively. We pass messages among the neighbors of the node, i.e., relation ri, via different entity types in graph Gr. Then the information among multiple entity types of relation ri can be summarized. Here the initialization of the embedding for each relation riis also obtained by the TransE
model. The embedding of each node, i.e., relation ri, is integrated by the deep representations of its neighbors as:
$$r_{i}^{l+1}=\mathrm{ReLU}(\mathbf{O}^{l}r_{i}^{l}+\sum_{t\in T}\sum_{r_{j}\in N_{i}^{t}}\left({\frac{1}{|N_{i}^{t}|}}\mathbf{O}_{t}^{l}r_{j}^{l}\right))\,,\,\,(4)$$
where r l i is the embedding of the relation ri at layer l. Nt i is the set of neighbors of relation ri under the entity type t ∈ T, while |Nt i| is the number of neighbors of ri. O
lis the trainable parameter at layer l. And ReLU is the activation function.
The high-order heterogeneous graph modeling provides deep representations for the entities and relations. Therefore, complicated relation information can be well preserved for the involved entities in the modeling of Section 3.5.
## 3.3 Spans Representation With Transformer Encoder
After modeling the high-order heterogeneous graph extracted from the medical knowledge graph, we then model the local contexts in the medical text.
The medical text is organized here as tokens t = (x0, x1, ..., xi*, ..., x*n−1), and a successive sequence (xsi
, ..., xei
) in text is a span which means an entity mention for medical text. Each span siis modeled as:
$$s_{i}=g_{s}([x_{s_{i}},x_{e_{i}},\hat{x}_{i},\phi(s_{i})])\;,$$
where xsi denotes the token level embedding from the transformer encoder, e.g., BERT, of the start of span si, while xei represents the token level embedding of the end of span si. xˆiis an attentionweighted sum of the token representations in the span. And ϕ(si) is a feature vector modeling the length of si. gs is a feed-forward neural network
(Lee et al., 2017).
Then a span-based GCN is applied on the graph with spans as nodes and relations between spans as edges:
$$h_{i}^{l+1}=h_{i}^{l}+f_{span}^{l}(\mbox{ReLU}(f_{s}^{l}(h_{i}^{l}),f_{s^{\prime}}^{l}(h_{i}^{l})))\,\tag{6}$$ and
$$f_{s}^{l}(h_{i}^{l})=\sum_{s_{j}\in s_{s\in t},j\neq i}\sum_{k\in R}r_{ij}[k](W_{k}h_{j}^{l}+b_{k}),$$ $$f_{s^{\prime}}^{l}(h_{i}^{l})=\sum_{s_{j}\in s_{s\in t},j\neq i}\sum_{k\in R}r_{ji}[k](W_{k}^{\prime}h_{j}^{l}+b_{k}^{\prime}),\tag{7}$$ $$r_{ij}[k]=Softmax(f_{r}([s_{i},s_{j},s_{i}\circ s_{j}]))[k],$$
where h l+1 iis the deep representation of si at the layer l + 1, f*span*and fr are feedforward neural networks. f ls and f l s′ are bidirectional GCN (Marcheggiani and Titov, 2017; Fu et al., 2019) for h l i
, rij [k]
measures the relation score for relation k between spans of si and sj , ◦ denotes the element-wise multiplication. And h 0 i is initialized as si.
## 3.4 Knowledge Graph Enhanced Span Representation
After applying span-based GCN, we obtain the hidden representation hi of span si derived from local context, e.g., the medical text. Then we apply an attention mechanism (Lai et al., 2021) to integrate the deep representation of hidden representation hi of span si from the local medical text with the deep representation of entities vi from the global contexts in the first-order graph as fie
:
$$f_{i e}=W_{i e}f_{c e}(h_{i})+\sum_{v_{j}\in C(s_{i e})}W_{i j e}f_{v}(v_{j})\ ,\tag{8}$$
where C(sie) is the candidate set of entities corresponding to span siin the dual heterogeneous graph. And fce(hi) and fv(vj ) are the transformed representation of hi and vj by two feedforward neural networks. And Wie and Wije are attention scores of the two transformed representations:
$$W_{i e}=\frac{\exp(\alpha_{i_{e}})}{(\exp(\alpha_{i_{e}})+\sum_{v_{j}\in C(s_{i})}\exp(\alpha_{i j_{e}}))}\;,\tag{9}$$
and
Here αije
$$\frac{\exp(\alpha_{i j_{e}})}{(\exp(\alpha_{i_{e}})+\sum_{v_{j}\in C(s_{i})}\exp(\alpha_{i j_{e}}))}\;.\tag{10}$$
$$W_{i j_{e}}=$$
and αie
are importance scores of the
transformed entity representation vj and the transformed span representation hito the span representation hi:
$$\alpha_{i j_{e}}=f_{\alpha_{e}}([h_{i},f_{v}(v_{j})])\ ,\qquad\qquad(11)$$
$$\mathrm{and}$$
$$\alpha_{i_{e}}=f_{\alpha_{e}}([h_{i},f_{c_{e}}(h_{i})])\;,\qquad\qquad(12)$$
where fαe is a feedforward neural network.
Next, we fuse the deep representations of relations ri from the global contexts in the secondorder graph Gr with the deep representation hi of span si from the local medical text. Distinguished from first-order graph fusion in equation 8, we argue that the corresponding relations in secondorder graph may be irrelevant to the span depending on current medical text, and arouse disproportionate or even noisy aggregating representations for local context. Therefore, a selective gate mechanism (Li et al., 2020) is utilized to perform this fusion as fir
:
$$f_{i_{r}}=g_{i_{r}}f_{c_{r}}(h_{i})+\sum_{r_{j}\in C(s_{i_{r}})}g_{ij_{r}}f_{r}(r_{j}),\tag{13}$$
where C(sir) is the candidate set of relations in the second-order heterogeneous graph for the span si. To be detailed, C(sir) is a subset of relations from knowledge graph triplets where si holds as head or tail entity. And fcr(hi) and fr(rj ) are the transformed representation of hi and rj by two feedforward neural networks. And gir and gijr are gate scores of the two transformed representations:
$$g_{i_{r}}{=}\sigma(W_{2}(\mbox{ReLU}(W_{1}[f_{c_{r}}(h_{i}),h_{i}]{+}b_{1}){+}b_{2}),\tag{14}$$
## And
$$g_{ij_{r}}{=}\sigma(W_{2}(\mathrm{ReLU}(W_{1}[f_{r}(r_{j}),h_{i}]{+}b_{1}){+}b_{2}),\tag{15}$$
where σ is the *Sigmoid* activate function which maps the results into the interval of (0, 1).
Then, the fused representation fi can be modeled as:
$$f_{i}=(W_{sum_{e}}f_{ie}+W_{sum_{r}}f_{ir})||h_{i}\;,\tag{16}$$
which means an integration of deep representation among entities, relations and the span. Wsume and Wsumr serve as feedforward neural networks. || means concatenation operation.
## 3.5 Collective Entity And Relation Extraction
Finally, we map the integrated representation fito the entity type space as:
$$e_{i}=\mathrm{Softmax}(g_{e}(f_{i}))\;,$$
$$(17)$$
ei = Softmax(ge(fi)) , (17)
where ge is a feed-forward neural network to map fito the entity space.
Similarly, the relation between span i and j can be mapped to the relation type space as:
$$r_{i j}=\mathrm{Softmax}(g_{r}(f_{i},f_{j}))\;,$$
$${\mathcal{L}}={\mathcal{L}}_{e}+{\mathcal{L}}_{r}\;,$$
where gr is a feed-forward neural network to map fi and fj to the relation space.
The training of collectively extracting entity and relation can be optimized by minimizing the loss function as:
L = Le + Lr , (19)
where Le represents the cross-entropy loss of entities, and Lr denotes the cross-entropy loss of relations.
Different from the pipeline way that optimizes entities and relations in separate steps, our method conducts the optimization in an end-to-end mode. In this way, the errors of extracting entities and relations can be reduced collectively.
## 4 Experiments
In this section, we evaluate our model with extensive experiments. We first show the experimental setup, which contains datasets, baselines for comparison, and evaluation metrics. Then we evaluate the effectiveness of our model and baselines.
## 4.1 Experimental Setup 4.1.1 Datasets
We evaluate our method on several real medical datasets with one medical knowledge graph dataset.
Chinese Medical Text datasets: We evaluate our model on three Chinese medical text datasets.
- The first medical text dataset is the **CHIP2020** dataset 1, which contains 17,924 sentences from biomedical Chinese text that captures relations between medical entities (Guan et al., 2020).
The dataset includes a pediatric labeled corpus for hundreds of common diseases. The training and testing splits contain 14,339 and 3,585 sentences, respectively.
- The next medical dataset is the **CHIP-2022**
dataset 2, which contains 1000 samples and is split into 850/150 for training and testing set, respectively. CHIP2022 aims to extract casual, entailment and conditional relations between entities whose type are not distinguished. In the experiments, we merely trace out the previous two types of relations for joint extraction.
- The third medical text dataset is the **DiaKG**
dataset 3, which contains sentences from diabetes text that reflects relations between medical entities
(Chang et al., 2021). The dataset comes from 41 diabetes guidelines and consensuses from authoritative journals in China. The diabetes text contains 22,050 entities and 6,890 relationships. Finally, the prepossessed training and testing splits contain 1,170 and 239 sentences, respectively.
1http://cips-chip.org.cn/2020/eval2 2http://cips-chip.org.cn/2022/eval2 3https://tianchi.aliyun.com/dataset/
dataDetail?dataId=88836
$$(18)$$
| Model | CHIP-2020 | CHIP-2022 | DIAKG | | | | | | |
|-----------------------------------|-------------|-------------|---------|----------|---------|--------|----------|---------|------|
| Entity | Relation | Overall | Entity | Relation | Overall | Entity | Relation | Overall | |
| PFN (Yan et al., 2021) | 72.2 | 52.7 | 62.5 | 59.3 | 42.0 | 50.7 | 63.0 | 44.5 | 53.8 |
| CASREL (Wei et al., 2020) | 66.1 | 49.5 | 57.8 | 56.5 | 39.1 | 47.8 | 50.4 | 31.0 | 40.7 |
| SPERT (Eberts and Ulges, 2020) | 73.5 | 50.1 | 61.8 | 57.8 | 38.7 | 48.3 | 65.5 | 29.1 | 47.3 |
| KB-graph (Verlinden et al., 2021) | 73.9 | 61.8 | 67.9 | 65.0 | 32.5 | 48.8 | 69.3 | 51.1 | 60.2 |
| KECI (Lai et al., 2021) | 74.3 | 61.1 | 67.7 | 63.4 | 32.6 | 48.0 | 72.6 | 55.1 | 63.8 |
| KECI_nnconv (Li et al., 2017) | 74.2 | 61.7 | 68.0 | 63.5 | 32.4 | 48.0 | 73.2 | 54.1 | 63.7 |
| KECI_gea (Huang et al., 2020) | 75.6 | 61.0 | 68.3 | 61.8 | 32.5 | 47.2 | 73.1 | 54.9 | 64.0 |
| KECI_gn (Battaglia et al., 2018) | 74.7 | 60.5 | 67.6 | 64.0 | 32.4 | 48.2 | 73.9 | 54.5 | 64.2 |
| Our Model | 76.5 | 61.8 | 69.2 | 66.7 | 32.5 | 49.6 | 74.2 | 54.4 | 64.3 |
Table 1: Evaluation results (%) on CHIP-2020 & CHIP-2022 & DiaKG datasets.
Chinese Medical knowledge graph dataset:
We extract Chinese medical triplets from the public knowledge graph dataset 4, which contains diseaserelated entities and disease-related triples. After preprocessing, we obtain 215,745 triplets for the medical knowledge graph. These triplets contain 16,735 entities and 13 relations, and the amount of entity types is 9.
## 4.1.2 Baselines For Comparison
We compare our model with state-of-the-art joint extraction methods, variant versions of knowledgeenhanced baseline included.
- PFN (Partition Filter Network) is an advanced joint extraction method that does not utilize a knowledge graph. It segments the encoder into entity extraction and relation extraction parts, and accomplishes NER-specific and relation-specific tasks with shared part separately (Yan et al., 2021).
- CASREL (Cascade Binary Tagging Framework for Relational Triple Extraction) is an advanced joint extraction method without knowledge graph enhancement. It models triplets extraction into head entity extraction as well as relation and tail entity extraction with fused head entity representation. The two parts share the same encoder for a joint extraction task (Wei et al., 2020).
- **SPERT** (Span-based Joint Entity and Relation Extraction with Transformer Pre-training) is also an advanced joint extraction method without knowledge graph context. It attaches entity labels for spans derived from the text, and traversed span pairs for relation judgement (Eberts and Ulges, 2020).
- **KB-graph** is an advanced joint extraction method, which injects medical knowledge into entity and relation extraction and coreference resolution simultaneously (Verlinden et al., 2021).
- **KECI** (Knowledge-Enhanced Collective Inference) is an advanced joint extraction method, which extracts entity and relation collectively with 4http://www.openkg.cn/dataset/medical knowledge-enhanced collective inference. It utilizes RGCN algorithm for knowledge graph representation (Lai et al., 2021). Compared with the proposed method, KECI merely models the firstorder graph for knowledge fusion.
- **KECI_nnconv** is the variant version of KECI
(Lai et al., 2021) in which the NNConv (Gilmer et al., 2017) is utilized instead of RGCN. Different from RGCN, NNConv fuses the initial edge feature for message propagation and further enhances the node representation, which is similar to GEANet.
- **KECI_gea** is one variant version of KECI
(Lai et al., 2021) that the RGCN part is replaced by GEANet (Huang et al., 2020) which is similar to NNCONV.
- **KECI_gn** is also one variant version of KECI
(Lai et al., 2021) where the node and edge features are simultaneously derived from the graph net (GN)
framework (Battaglia et al., 2018).
## 4.1.3 Evaluation Metrics
We evaluate methods by Micro-F1 scores for entity and relation extraction. Also, we use the average Micro-F1 scores of entities and their relations in each medical text as the overall scores for each medical text.
## 4.1.4 Implementation Details
We implement our method with PyTorch and MedBERT-kd Transformer 5 which is based on the structure of BERT and trained on a mount of Chinese clinic texts. The number of parameters of our model is 135,557,734. We conduct hyperparameter tuning by using a Bayesian optimizer for all the methods on real datasets. The scopes of hyper-parameters are: {16, 32} for batch size, {2e5, 3e-5, 4e-5, 5e-5} for lower learning rate, {1e-4, 2e-4, 5e-4} for upper learning rate.
A coarse-to-fine training process is applied during training process. Firstly, we leave out the
![7_image_0.png](7_image_0.png)
knowledge graph fusion part and only employ hidden representations of spans from the local context to train a better downstream task-specific BERT
structure. After that, global information from knowledge graph representation is added for a precise training.
We evaluate all models with GPUs on the JIUTIAN Platform of China Mobile Research Institute.
## 4.2 Evaluation Results
We evaluate the effectiveness of all methods on three real datasets in Table 1, where we show the Micro-F1 values of both entities and relations. We also show the averaged overall Micro-F1 values.
From Table 1, we make the following observations:
(1) KECI outperforms other joint extraction methods which do not involve medical knowledge graph. For example, KECI has 9.9%, 0.2% and 23.1% improvements compared with CASREL on CHIP-2020, CHIP-2022 and DiaKG datasets, respectively. It shows that the medical knowledge graph is essential for providing more contexts in joint entity and relation extraction.
(2) Our model works better than KECI in overall values. Specifically, our model achieves 1.5%,
1.6% and 0.5% improvements compared with the KECI model on CHIP-2020, CHIP-2022 and DiaKG datasets, respectively. It verifies that highorder graph context could provide more information for text modeling. Moreover, before adding relation context, as shown in Table 1, the Micro-F1 score of the entity for KECI is 74.3 % on CHIP2020 dataset, while our method uses relation context and Micro-F1 is increased to 76.5 %. On average, there are 2.2%, 3.3%, and 1.6% improvements on the three datasets respectively after adding relation contexts. It can be seen that the relation context does have a significant effect on improving the entity value.
(3) Results in **CHIP-2022** show that our model has a little poorer performance than that of PFN
in overall score, however, we still prevail much in entity result. This is attributed to the fact that the relation types in **CHIP-2022** dataset are all logical types, such as casual and entailment relations, which varies considerably from medical knowledge graph.
We further make detailed evaluation on all the entity types which can be seen in table 2 and 3. Generally, our model achieves a higher F1 score among entity types, especially in the less-train-instance targets, e.g., "部位(part) ", "其他治疗(other therapy) ", "其他(others) " in CHIP-2020 dataset and
"Amount", "Method", "Pathogenesis" in DiaKG
dataset, our model performs almost above 4% better than KECI (Lai et al., 2021). When mentioning CHIP-2022 data, we neglect the entity type results owing to its lack of discrimination between entity types.
We also make comparisons between proposed high-order heterogeneous graph modeling method and some typical graph neural networks (GNN).
We can conclude that the high-order graph modeling for both entity and relation update is more efficient than a single order heterogeneous graph modeling for entity representation, even with the extra relation update. Details are as follows:
(1) The proposed model exceeds KECI_nnconv, KECI_gea and KECI_gn by 1.2%, 0.9%, 1.6%,
respectively, in CHIP-2020 dataset.
(2) The proposed model surpasses KECI_nnconv, KECI_gea and KECI_gn by 1.6%, 2.4%, 1.4%, respectively, in CHIP-2022 dataset.
(3) The proposed model beats KECI_nnconv, KECI_gea and KECI_gn by 0.6%, 0.3%, 0.1%,
respectively, in DiaKG dataset.
![8_image_0.png](8_image_0.png)
![8_image_1.png](8_image_1.png)
![8_image_2.png](8_image_2.png)
TypeKECI(Lai et al., 2021) **Our Model**
P R F1 P R F1 surgery 55.78 54.30 55.03 47.34 64.90 54.75 inspection 65.75 61.03 63.30 58.07 72.31 64.41 epidemiology 60.76 58.22 59.46 52.92 73.26 61.45 disease 70.04 84.52 76.60 79.95 80.24 80.09 symptom 71.70 63.12 67.14 68.00 72.74 70.29 sociology 66.83 46.64 54.94 60.14 57.79 58.94 medicine 61.27 75.68 67.72 66.84 75.78 71.03 part 72.93 46.41 56.72 60.35 66.27 63.17 prognosis - - - - - -
other-therapy 47.70 33.24 39.18 47.67 38.78 42.77 others - - - 100.00 5.83 11.02
Table 2: Entity evaluation details (%) on CHIP-2020 dataset. (- means the result is zero.)
![8_image_4.png](8_image_4.png)
## 4.3 Ablation Study
We conduct an ablation study on our model to evaluate the effectiveness of its main modules in Figures 3. Specifically, we compare our model with the following variants w.r.t. the overall score of each medical text, i.e., average Micro-F1 score of entities and relation in each medical text.
- **MnoKG** (Model without Knowledge Graph)
ignores knowledge graph for triplets extraction.
- **MEnt** (Model with Entity) merely fuses firstorder graph into medical text for joint extraction.
- **MRel** (Model with Relation) merely fuses second-order graph into medical text for extraction.
- **MnoGNN** (Model without GNN update) fuses initial node representations of high-order graph from TransE into text without GNN to update.
- **MRAtt** (Model with Relation Attention fusion) is one variant version of our model, which uses an attention mechanism in Equation 20 to replace Equation 16 for relation fusion:
$$f_{i_{r}}=W_{i_{r}}f_{c_{r}}(h_{i})+\sum_{r_{j}\in C(s_{i_{r}})}W_{ij_{r}}f_{r}(r_{j})\,\tag{20}$$ where $W_{i_{r}}$ and $W_{ij_{r}}$ are similar to 9 and 10, respectively.
are similar to 9 and 10, respectively.
From the Figures, we make the following observations:
![8_image_3.png](8_image_3.png)
(1) Our model beats **MnoKG** on all real datasets.
In particular, the performance improvements of our model are 8.9%, 6.3% and 5.9% better compared with MnoKG on CHIP-2020, CHIP-2022 and DiaKG datasets, respectively. It shows that using medical knowledge graph is important for providing external contexts.
(2) Our model outperforms **MEnt**. For example, the overall improvements are 1.5%, 1.6% and 0.5%
compared with MEnt on CHIP-2020, CHIP-2022 and DiaKG datasets, respectively. It justifies that the second-order graph provides a positive feedback for the first-order representations and further enhance the encoding of entity mentions in medical text.
(3) Our model surpasses **MRel** and the rate is less than that of **MEnt**. It seems the second-order graph plays a major role in dual heterogeneous graph for joint extraction task.
(4) Our model precedes **MnoGNN**. Specifically, the exceeding values are 1.2%, 2.5% and 0.1% for the datasets. It shows that the GNN update part is vital for a better knowledge graph information fusion.
(5) The **MEnt** model only added the first-order graph, and the **MRel** only added the second-order graph. It can be seen from the data that the effect of **MRel** is better than that of **MEnt**. Second-order graph can improve entity extraction even more effectively, because it acts directly on the entity span.
(6) Our model performs better than **MRAtt**. To be specific, the overall improvement of our model is 0.5%/1.5% compared with MRAtt on CHIP2020/CHIP-2022 datasets and the result is comparable to that of MRAtt on DiaKG dataset. It means that the selective gate mechanism can improve the performance for relation fusion.
## 5 Conclusions
In this paper, we study the problem of the joint entity and relation extraction in the medical text. We propose to construct the high-order heterogeneous graph from the medical knowledge graph, and learn the entity span representations in a knowledgeenhanced which integrates the deep representations from both global and local contexts. The extraction of entities and relations is in a collective way. The experimental results show that our model is more effective than state-of-the-art methods.
## Limitations
We inject the medical knowledge graph into local texts for entity span representations enhancement.
However, unlike most joint extraction methods, the proposed model is hard to be trained in a parallel way. Therefore, it is time-consuming to obtain a well-trained model. We would like to optimize the architecture of the model in the future.
Moreover, our model is adapted to Chinese medical texts where a token usually means a character
(not a word). Hence, there will be errors when aligning the entities from the knowledge graph with mentions from local texts. Word segmentation task will be considered in our future work.
## Acknowledgements
This work is supported by China Mobile Holistic Artificial Intelligence Major Project Funding
(R22105ZS, R22105ZSC01) and National Key R&D Program of China (2021ZD0140408).
## References
Peter W. Battaglia, Jessica B. Hamrick, Victor Bapst, Alvaro Sanchez-Gonzalez, Vinícius Flores Zambaldi, Mateusz Malinowski, Andrea Tacchetti, David Raposo, Adam Santoro, Ryan Faulkner, Çaglar Gülçehre, H. Francis Song, Andrew J. Ballard, Justin Gilmer, George E. Dahl, Ashish Vaswani, Kelsey R.
Allen, Charlie Nash, Victoria Langston, Chris Dyer, Nicolas Manfred Otto Heess, Daan Wierstra, Pushmeet Kohli, Matthew M. Botvinick, Oriol Vinyals, Yujia Li, and Razvan Pascanu. 2018. Relational inductive biases, deep learning, and graph networks.
ArXiv, abs/1806.01261.
Giannis Bekoulis, Johannes Deleu, Thomas Demeester, and Chris Develder. 2018. Adversarial training for multi-context joint entity and relation extraction. In EMNLP.
Antoine Bordes, Nicolas Usunier, Alberto GarciaDuran, Jason Weston, and Oksana Yakhnenko.
2013. Translating embeddings for modeling multirelational data. *Advances in neural information processing systems*, 26.
Dejie Chang, Mosha Chen, Chaozhen Liu, Liping Liu, Dongdong Li, Wei Li, Fei Kong, Bangchang Liu, Xiaobin Luo, Ji Qi, et al. 2021. Diakg: An annotated diabetes dataset for medical knowledge graph construction. In *China Conference on Knowledge Graph* and Semantic Computing, pages 308–314. Springer.
Markus Eberts and Adrian Ulges. 2020. Span-based joint entity and relation extraction with transformer pre-training. In *ECAI 2020*, pages 2006–2013. IOS
Press.
Tsu-Jui Fu, Peng-Hsuan Li, and Wei-Yun Ma. 2019.
Graphrel: Modeling text as relational graphs for joint entity and relation extraction. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1409–1418.
Justin Gilmer, Samuel S. Schoenholz, Patrick F. Riley, Oriol Vinyals, and George E. Dahl. 2017. Neural message passing for quantum chemistry. In *Proceedings of the 34th International Conference on Machine* Learning (ICML), pages 1263–1272.
T. Guan, H. Zan, X. Zhou, H. Xu, and K Zhang.
2020. *CMeIE: Construction and Evaluation of Chinese Medical Information Extraction Dataset*. Natural Language Processing and Chinese Computing, 9th CCF International Conference, NLPCC 2020, Zhengzhou, China, October 14–18, 2020, Proceedings, Part I.
Kung-Hsiang Huang, Mu Yang, and Nanyun Peng. 2020.
Biomedical event extraction with hierarchical knowledge graphs. In *Findings of the Association for Computational Linguistics: EMNLP 2020*, pages 1277–
1285, Online. Association for Computational Linguistics.
Bin Ji, Jie Yu, Shasha Li, Jun Ma, Qingbo Wu, Yusong Tan, and Huijun Liu. 2020. Span-based joint entity and relation extraction with attention-based spanspecific and contextual semantic representations. In Proceedings of the 28th International Conference on Computational Linguistics, pages 88–99.
Xiaodong Jiang, Ronghang Zhu, Sheng Li, and Pengsheng Ji. 2020. Co-embedding of nodes and edges with graph neural networks. IEEE Transactions on Pattern Analysis and Machine Intelligence.
Tuan Lai, Heng Ji, ChengXiang Zhai, and Quan Hung Tran. 2021. Joint biomedical entity and relation extraction with knowledge-enhanced collective inference. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers),
pages 6248–6260, Online. Association for Computational Linguistics.
Kenton Lee, Luheng He, Mike Lewis, and Luke Zettlemoyer. 2017. End-to-end neural coreference resolution. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 188–197.
Fei Li, Meishan Zhang, Guohong Fu, and Donghong Ji.
2017. A neural joint model for entity and relation extraction from biomedical text. *BMC bioinformatics*,
18(1):1–11.
Yang Li, Guodong Long, Tao Shen, Tianyi Zhou, Lina Yao, Huan Huo, and Jing Jiang. 2020. Self-attention enhanced selective gate with entity-aware embedding for distantly supervised relation extraction. *Proceedings of the AAAI Conference on Artificial Intelligence*,
34(05):8269–8276.
Ying Lin, Heng Ji, Fei Huang, and Lingfei Wu. 2020.
A joint neural model for information extraction with global features. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7999–8009.
Yi Luan, Luheng He, Mari Ostendorf, and Hannaneh Hajishirzi. 2018. Multi-task identification of entities, relations, and coreference for scientific knowledge graph construction. In *Proceedings of the 2018 Conference on Empirical Methods in Natural Language* Processing, pages 3219–3232.
Yi Luan, Dave Wadden, Luheng He, Amy Shah, Mari Ostendorf, and Hannaneh Hajishirzi. 2019. A general framework for information extraction using dynamic span graphs. In *Proceedings of NAACL-HLT*, pages 3036–3046.
Ling Luo, Zhihao Yang, Mingyu Cao, Lei Wang, Yin Zhang, and Hongfei Lin. 2020. A neural networkbased joint learning approach for biomedical entity and relation extraction from biomedical literature.
Journal of Biomedical Informatics, 103:103384.
Diego Marcheggiani and Ivan Titov. 2017. Encoding sentences with graph convolutional networks for semantic role labeling. *arXiv preprint* arXiv:1703.04826.
Yali Pang, Tong Zhou, and Zhichang Zhang. 2021. A
joint model for chinese medical entity and relation extraction based on graph convolutional networks. In 2021 3rd International Conference on Natural Language Processing (ICNLP), pages 119–124. IEEE.
Michael Schlichtkrull, Thomas N Kipf, Peter Bloem, Rianne van den Berg, Ivan Titov, and Max Welling.
2018. Modeling relational data with graph convolutional networks. In *European semantic web conference*, pages 593–607. Springer.
Yong Shang, Yu Tian, Min Zhou, Tianshu Zhou, Kewei Lyu, Zhixiao Wang, Ran Xin, Tingbo Liang, Shiqiang Zhu, and Jingsong Li. 2021. Ehr-oriented knowledge graph system: Toward efficient utilization of nonused information buried in routine clinical practice.
IEEE Journal of Biomedical and Health Informatics, 25(7):2463–2475.
Severine Verlinden, Klim Zaporojets, Johannes Deleu, Thomas Demeester, and Chris Develder. 2021. Injecting knowledge base information into end-to-end joint entity and relation extraction and coreference resolution. *arXiv preprint arXiv:2107.02286*.
Jue Wang and Wei Lu. 2020. Two are better than one: Joint entity and relation extraction with tablesequence encoders. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1706–1721.
Shaolei Wang, Yue Zhang, Wanxiang Che, and Ting Liu. 2018. Joint extraction of entities and relations based on a novel graph scheme. In *IJCAI*, pages 4461–4467. Yokohama.
Zhepei Wei, Jianlin Su, Yue Wang, Yuan Tian, and Yi Chang. 2020. A novel cascade binary tagging framework for relational triple extraction. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*, pages 1476–
1488.
Kui Xue, Yangming Zhou, Zhiyuan Ma, Tong Ruan, Huanhuan Zhang, and Ping He. 2019. Fine-tuning bert for joint entity and relation extraction in chinese medical text. In *2019 IEEE International Conference* on Bioinformatics and Biomedicine (BIBM), pages 892–897. IEEE.
Zhiheng Yan, Chong Zhang, Jinlan Fu, Qi Zhang, and Zhongyu Wei. 2021. A partition filter network for joint entity and relation extraction. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 185–197, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Shan Zhao, Minghao Hu, Zhiping Cai, and Fang Liu.
2021. Modeling dense cross-modal interactions for joint entity-relation extraction. In Proceedings of the Twenty-Ninth International Conference on International Joint Conferences on Artificial Intelligence, pages 4032–4038.
Zexuan Zhong and Danqi Chen. 2021. A frustratingly easy approach for entity and relation extraction. In North American Association for Computational Linguistics (NAACL).
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
✓ A1. Did you describe the limitations of your work?
the "Limitations" section which is next to the section 5
✓ A2. Did you discuss any potential risks of your work?
the "Limitations" section which is next to the section 5
✓ A3. Do the abstract and introduction summarize the paper's main claims?
the 1st section (introduction) and the "abstract" section which is ahead of section 1
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✗ **Did You Use Or Create Scientific Artifacts?**
Left blank.
B1. Did you cite the creators of artifacts you used?
No response.
B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
No response.
B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
No response.
B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
No response.
B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
No response.
B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
No response.
## C ✓ **Did You Run Computational Experiments?** Section 4 (Experiments)
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
section 4.1.4 (Implementation details)
The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance.
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? section 4.1.4 (Implementation details)
✗ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
the results are stable without large error bars
✗ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)? not used D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
No response.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? No response.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
No response.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
No response.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
No response. |
ivison-etal-2023-data | Data-Efficient Finetuning Using Cross-Task Nearest Neighbors | https://aclanthology.org/2023.findings-acl.576 | Obtaining labeled data to train a model for a task of interest is often expensive. Prior work shows training models on multitask data augmented with task descriptions (prompts) effectively transfers knowledge to new tasks. Towards efficiently building task-specific models, we assume access to a small number (32-1000) of unlabeled target-task examples and use those to retrieve the most similar labeled examples from a large pool of multitask data augmented with prompts. Compared to the current practice of finetuning models on uniformly sampled prompted multitask data (e.g.: FLAN, T0), our approach of finetuning on cross-task nearest neighbors is significantly more data-efficient. Using only 2{\%} of the data from the P3 pool without any labeled target-task data, our models outperform strong baselines trained on all available data by 3-30{\%} on 12 out of 14 datasets representing held-out tasks including legal and scientific document QA. Similarly, models trained on cross-task nearest neighbors from SuperNaturalInstructions, representing about 5{\%} of the pool, obtain comparable performance to state-of-the-art models on 12 held-out tasks from that pool. Moreover, the models produced by our approach also provide a better initialization than single multitask finetuned models for few-shot finetuning on target-task data, as shown by a 2-23{\%} relative improvement over few-shot finetuned T0-3B models on 8 datasets. | # Data-Efficient Finetuning Using Cross-Task Nearest Neighbors
Hamish Ivisonα Noah A. Smithαβ Hannaneh Hajishirziαβ **Pradeep Dasigi**α αAllen Institute for AI
βPaul G. Allen School of Computer Science & Engineering, University of Washington
{hamishi,noah,hannah,pradeepd}@allenai.org
## Abstract
Obtaining labeled data to train a model for a task of interest is often expensive. Prior work shows training models on multitask data augmented with task descriptions (prompts) effectively transfers knowledge to new tasks. Towards efficiently building task-specific models, we assume access to a small number (32–1000)
of unlabeled target-task examples and use those to retrieve the most similar labeled examples from a large pool of multitask data augmented with prompts. Compared to the current practice of finetuning models on uniformly sampled prompted multitask data (e.g., FLAN, T0), our approach of finetuning on cross-task nearest neighbors is significantly more data-efficient.
Using only 2% of the data from the P3 pool without any labeled target-task data, our models outperform strong baselines trained on all available data by 3–30% on 12 out of 14 datasets representing held-out tasks including legal and scientific document QA. Similarly, models trained on cross-task nearest neighbors from SuperNaturalInstructions, representing about 5% of the pool, obtain comparable performance to stateof-the-art models on 12 held-out tasks from that pool. Moreover, the models produced by our approach also provide a better initialization than single multitask finetuned models for fewshot finetuning on target-task data, as shown by a 2–23% relative improvement over fewshot finetuned T0-3B models on 8 datasets. We publicly release our code.1
## 1 Introduction
Finetuning large models with data from a diverse set of tasks, augmented to include brief descriptions of the tasks (i.e., prompts) has been shown to help models generalize to unseen tasks (Wei et al., 2021a; Sanh et al., 2021). This cross-task generalization capability is particularly helpful in cases where it is expensive to collect labeled target task training sets. Prior work trained single
![0_image_0.png](0_image_0.png)
models with as much prompted data as possible —
for example, Sanh et al. (2021) train a model on roughly 11 million instances (counting different prompt variations). The training datasets were selected without using any information about the target tasks, with the goal of allowing models to generalize to new tasks from instructions alone, making the evaluation "zero-shot". However, it is unclear if all the training data is required for good performance on any given single target task. Furthermore, given that neural network models have previously been shown to suffer from negative interference
(wherein training on more datasets results in worse performance on certain downstream tasks) in multitask setups (Aribandi et al., 2022) and benefit from pretraining on domain-relevant data (Gururangan et al., 2020; Phang et al., 2018a), it is possible that training only on relevant prompted data could further improve task generalization while being data-efficient.
Based on this hypothesis, we seek to make use of unlabelled data to find relevant subsets of training data in the massive pool of multitask data, allowing similar-to-better performance than training on the entire pool for a given target task. Manually find1https://github.com/allenai/data-efficient-finetuning ing relevant training data in a massive pool of data is infeasible since it is not obvious which of the source tasks are relevant for a given target task, and which instances are most relevant for target task generalization within a source task dataset (see Section 5.1). Hence we rely on a simple method to automatically select these subsets. Additionally, as only some samples within a given dataset may be relevant to a target task, we select per-instance rather than per-dataset, unlike prior work, which tries to identify useful datasets for transfer learning (Aribandi et al., 2022; Phang et al., 2018a) and train on all data within the chosen datasets. We use a setup similar to work examining retrievalaugmented cross-task generalization (Lin et al.,
2022): we assume access to a small number of unlabeled target task instances and use these to retrieve *cross-task nearest neighbors*—labeled instances from the massive pool of data most similar to our unlabeled target task instances. The similarity is computed as the distance between the representations produced by the encoder of a pretrained seq2seq model. Unlike prior work, we then finetune target task specific models on these neighbors alone, without using any target task specific labeled data or any extra data from the pool of multitask data. We hope that the similarity between the cross-task neighbors and our target task data will enable better generalization to our target task, with dissimilar examples that may cause interference removed from the training mixture. We ultimately aim to produce models that perform at least as well as models trained on the entire multitask pool *despite being trained on a fraction of data*, greatly reducing the cost of training through the use of a few cheap-to-collect unlabelled examples.
We run experiments with T5 (Raffel et al., 2020)
models, and use Public Pool of Prompts (P3; Sanh et al., 2021) as the main pool of prompted multitask data from which to retrieve cross-task nearest neighbors. We evaluate on the 11 datasets originally used to evaluate T0 (a collection of natural language understanding and commonsense tasks),
as well as 3 additional datasets with varied domains
(e.g., legal, NLP domains). We also experiment with the train set of SuperNaturalInstructions (SNI;
Wang et al., 2022) as a pool of multitask data, and evaluate on 12 tasks from SNI's held-out set of test tasks. Our findings are as follows:
of instances retrieved from P3, are much more relevant as training data than the rest of the P3 pool—training T5 models, sometimes even variants smaller than T0-3B, on these subsets yields models with performance 3–30%
better than T0-3B evaluated zero-shot. Similarly, models trained on cross-task neighbors in SuperNaturalInstructions (at most 5% of the pool), perform similarly to state-of-the-art models trained on all available data.
- For some target tasks on which T0-3B performs close to random chance, T5 models of the same size trained using cross-task nearest neighbors perform significantly above chance, supporting our hypothesis that massive multitask prompted training could lead to negative interference between tasks.
- When target task labeled data is available for few-shot finetuning, we find that T5 models trained with cross-task nearest neighbors provide better initialization for parameterefficient finetuning methods than T0-3B, performing 2–23% better than T0-3B with fewshot finetuning across 10 out of 11 datasets.
- An analysis of what relevant data gets retrieved shows that most of the tasks in the massive pool of multitask data are not retrieved for any target tasks, confirming our hypothesis that only a small subset of data within the pool is relevant to any given target task.
- We compare model performance from DEFT
with that from full-finetuning across a variety of labelling budgets and find that DEFT is more effective for smaller labelling budgets.
These findings suggest that instead of training single models on all available data, multi-task data can be used much more efficiently towards improving model performance on specific target tasks by selecting training data relevant to those tasks, even with a simple method for identifying such data.
## 2 Related Work
Multi-task transfer models Training on large multi-task mixtures is a common trend within NLP,
with most existing approaches first training a pretrained language model on a large collection of tasks, and then evaluating these models in either
- For 12 out of 14 target datasets, we find that their cross-task nearest neighbors, at most 2%
zero- or few-shot settings on a collection of heldout datasets (Wei et al., 2021a; Sanh et al., 2021; Khashabi et al., 2020; Mishra et al., 2021; Aribandi et al., 2022). Most approaches do not customise their task selection to downstream tasks and assume no knowledge of the target tasks ahead of time, instead focusing on building a single model most applicable to any arbitrary evaluation task.
In contrast, we show that if we assume access to unlabeled target task instances, we can make much better use of the multitask data, selecting only instances useful to a given task. Relatedly, Vu et al.
(2020) propose a method for using gradients from labelled task data to construct task embeddings for predicting task transferability. Our method instead uses unlabeled data, which is much cheaper and easier to collect, and does not use gradients, making it easier to scale to large models such as T5-XL.
Retrieval-based methods for NLP Adding retrieval components to language models has been shown (Khandelwal et al., 2019; Guu et al., 2020; Lewis et al., 2020) to augment their generalization capabilities by externalizing memorization. In contrast to prior work in this direction that mostly focused on language modeling as the end task, we evaluate on a variety of language understanding tasks. The work from Shi et al. (2022) used retrieval-based methods for classification tasks by heuristically mapping the label space of the endtasks to that of the predicted next words of the nearest neighbors from a language model. We instead finetune the models on the nearest neighbors.
Lin et al. (2022) also use unlabeled examples to retrieve relevant data for improving performance but focus on *further finetuning multi-task models*.
They use representations from the encoder of a multi-task finetuned model (e.g. T0) to retrieve subsets of its training data closest to the instances of a target dataset and further finetune the model to specialize it for the target task. While their results suggest that using a multi-task model is crucial for good retrieval performance, we show gains using a model before multitask finetuning. Our setup allows for data-efficiency via pruning the amount of multi-task data used during training, letting a practitioner who only cares about specific downstream tasks train strong task-specific models using much less data and compute than if they trained on the entire pool of multi-task data.
Parameter-efficient fine-tuning In contrast to work that focused on finetuning fewer parameters in large models to adapt them to new tasks (Houlsby et al., 2019; Hu et al., 2021; Liu et al., 2022), our proposal is a *data-*efficient training method for obtaining task-specific models without using target task labels. Our method is complementary to parameter-efficient methods, and they can be used in conjunction, as shown in section 4.3.
Instance attribution Our approach works by identifying the most relevant training examples for a given data point, which is called *instance attribution*. Prior work (Koh and Liang, 2017; Yeh et al.,
2018; Pruthi et al., 2020; Han and Tsvetkov, 2022)
used instance attribution methods to interpret predictions of neural network models. These methods generally relied on the gradients of the model to identify the effect specific data points, either in the pretraining or the finetuning stage, have on the model's predictions. Our method for identifying cross-task neighbors is simpler because we do not use gradients and we do not even rely on the labels of the data. Results from Pezeshkpour et al. (2021)
show that instance attribution based on similarity between the model's representations is comparable to gradient-based approaches in terms of finding the most important training data points.
Making use of auxiliary data Training on intermediate data has been shown to improve performance on target NLP tasks (Phang et al., 2018b).
Recent work has shown that intermediate datasets can be selected by embedding-based methods (Vu et al., 2020; Poth et al., 2021; Kung et al., 2021).
Most prior work relies on expensive embedding computation methods, either training a model to generate task embeddings, or using methods that are difficult to scale to large models.2In contrast, we use an extremely cheap embedding method (mean-pooling over an encoder), and additionally consider sample-wise selection over a massive pool of tasks, as opposed to selecting entire tasks.
## 3 Data Efficient Finetuning Across Multiple Tasks
Given a large collection of labeled prompted data
(i.e., data converted into a text-to-text form, with task instructions included in the input, e.g., P3), our core hypothesis is that some tasks in this massive 2E.g., the Fisher information matrix used by Vu et al.
(2020).
pool of data are more similar to a given target task than others. Given a target task, we assume we have access to a small amount of *unlabeled* target task data, which is often much easier and cheaper to collect than labeled data (see Section 5.2). Our aim is to find a relevant subset of data from our pool given a single target task, ideally allowing us to train a model using this subset that outperforms a similar model trained on the entire pool of data.
Manually identifying the relevant subsets of these datasets is not feasible since task boundaries are usually not clearly defined in NLP, and it is hard to interpret what skills get transferred when a model is trained on one dataset and evaluated on other. Hence, we use the similarity between the pretrained model's representations to compute relevance. We encode all instances in the large pool of multitask data with a pretrained language model and build a search index over the resulting representations. Given small amounts of unlabeled target task data, we retrieve relevant multitask subsets from the index, which we call **cross-task nearest**
neighbors of the target tasks. We then build taskspecific models by finetuning the pretrained models on the cross-task neighbors. We refer to this approach as Data-Efficient FineTuning (**DEFT**).
We evaluate our approach both in cases where no labeled data is available, and when a few (20–70)
annotated labels are available. In the former case, we simply use the unlabeled data for retrieval and evaluate the resulting DEFT model "zero-shot" on the target task. In the latter case, we first train a DEFT model and then perform parameter-efficient few-shot tuning using IA3 (Liu et al., 2022) to make use of the labeled data.
Retrieving cross-task nearest neighbors To retrieve the most similar instances to a given set of target task instances, we first build an index over the massive pool of multi-task data for efficient retrieval, encoding samples using a pretrained encoder. Then, given a set of query instances Q, we retrieve our subset of similar data by computing a union of the k-nearest neighbors to all q ∈ Q.
Note that there may be an overlap between the sets of nearest neighbors retrieved for different queries, and hence |R| ≤ |Q| · k, where R is the retrieved subset. Empirically, we find |R| tends to be 5–50×
smaller than |Q| · k due to this overlap.
Data-Efficient FineTuning (DEFT) Given a retrieved set of data R, we can then finetune a pretrained language model on the mixture of data using a cross-entropy loss, as all data are in a unified text-to-text prompted format. This training is similar to the multitask prompted training of T0 (Sanh et al., 2021). We refer to models trained on R as DEFT models. In settings where we have no labeled data available, we directly evaluate these models on our target tasks.
Parameter-efficient few-shot finetuning For the case where a few annotated labels are available, we make use of parameter-efficient few-shot finetuning. For this, we take our multi-task trained DEFT
checkpoints and finetune them using IA3 (Liu et al.,
2022) on task-specific few-shot data. Concretely, given a trained transformer model, we introduce three vectors lk, lv, and lff into the attention and feed-forward mechanisms of each layer:
$$\text{Attn}(Q,K,V)=\text{softmax}\left(\frac{Q(l_{\text{k}}\odot K^{T})}{\sqrt{d_{k}}}\right)(l_{\text{v}}\odot V)\tag{1}$$ $$\text{FFN}(x)=(l_{\text{ff}}\odot f(W_{1}x))W_{2}\tag{2}$$
We initialize these vectors with all ones and only update them during the few-shot finetuning. This provides an efficient method of further training our DEFT models on task-specific data and has been shown to outperform full finetuning in the few-shot setting (Liu et al., 2022).
## 4 Experiments 4.1 Setup & Hyperparameters
Indexing P3 We construct an index of P3 data using FAISS (Johnson et al., 2019), a library for efficient similarity search over dense vectors. We use a Hierarchical Navigable Small World index (Malkov and Yashunin, 2020) to approximate the k-nearest neighbor search. To keep the size of the index manageable, we use Product Quantization (Jegou et al.,
2010) and reduce the dimensionality of the encoded representations using an optimized product quantization transform (Ge et al., 2013). We encode our instances using the T5 v1.1 model with extra language model pretraining introduced by Lester et al. (2021). For all experiments, we match the size of the encoder used to index data and the size of downstream models trained on this data (e.g., if we train a T5-XL sized model, we use T5-XL to index and retrieve the data). We use the subset of P3 used to train T0 as our pool of multitask data unless otherwise stated.
DEFT Following T0, we start with the T5 v1.1 model with extra language model pretraining. Unless otherwise stated, we use the 'XL' variant with 3 billion parameters across our experiments. When training on cross-task nearest neighbors, we train for 5 epochs with a batch size of 8 using the Adam optimizer (Kingma and Ba, 2015) and a learning rate of 0.00005. We use a linear warmup schedule for the first 10% of the total training steps and linear decay for the rest of training.
Few-shot training We follow the settings suggested by Liu et al. (2022): training for 1000 steps with a batch size of 8. We use the Adafactor optimizer with a maximum learning rate of 0.003 and a linear decay schedule with 60 warmup steps. We only update the IA3 vectors during training.
Evaluation datasets We evaluate on the set of 11 datasets used to evaluate T0 (RTE, ANLI R1/2/3, CB, HellaSwag, Story Cloze, WinoGrande, WSC,
COPA, WiC), which include natural language inference and commonsense reasoning datasets. In addition to the T0 evaluation datasets, we also evaluate on three additional datasets from diverse domains: CaseHold (Chalkidis et al., 2022; Zheng et al., 2021), a legal QA dataset, DROP (Dua et al.,
2019), a QA dataset that requires discrete operations, and a subtask of Qasper (Dasigi et al., 2021),
a QA dataset over entire NLP papers. Qasper has two subtasks—selecting paragraphs in the paper that provide evidence for answering the questions, and generating the answers. We focus on the former because it was shown to be the more difficult of the two, and convert it into a binary classification task where the inputs are combinations of questions and single paragraphs. We refer to this subtask as *QasperEvidence* henceforth and evaluate model performance in terms of document-level F1 as described by Dasigi et al. (2021). For evaluation and few-shot training, we convert all datasets to a prompted text-to-text format3either using the prompt templates from P3 for the T0 evaluation datasets or an original prompt for the other datasets.
For CaseHold, DROP, and QasperEvidence we randomly split out 1000 examples from the existing validation sets to use for retrieval, and use the remaining data for evaluation. For all other datasets, we retrieve using up to 1000 randomly chosen examples from the training splits (if a dataset has 3For example, ANLI instances were converted to
'{premise} Question: {hypothesis} True, False, or Neither?',
with the answers as 'true', 'false', or 'neither'.
less than 1000 training examples, we use all available training data for retrieval). We provide further details in Appendix B.
Model evaluation Following Sanh et al. (2021)
and Brown et al. (2020), we calculate accuracy on all datasets except DROP using *rank classification*,
where we pick the answer with lowest loss across possible answer choices given the instance input as the model prediction. As DROP is a QA dataset that requires selecting spans or generating numbers, and does not have answer choices, we generate the prediction using greedy decoding.
Baselines For zero-shot evaluation, we primarily compare against 4 baselines: 1) *T0-3B*, trained on about 10% of the P3 data,4 2) *Random*, a model trained on a random selection of P3 data the same size as the subsets selected by DEFT, 3) *T5-XL* not finetuned any further, and 4) *BM25*, using BM255
(Robertson and Zaragoza, 2009) for retrieval instead of dense representations. For few-shot settings, we compare T0-3B with additional few-shot training with DEFT checkpoints trained on subsets chosen using (a) 1000 unlabeled instances and (b)
the instances used in the few-shot training without labels. This means (b) uses no additional data compared to T0-3B with few-shot finetuning.
## 4.2 Data-Efficient Fine-Tuning Vs. Massive Multitask Training
We first assume we have access *only* to unlabeled task-specific data and cannot train on any target task labeled data. We sample 1000 unlabeled instances per dataset and retrieve the 500 nearest neighbors6 of each instance. We then train datasetspecific models on each of the retrieved sets. As seen in Table 1, our DEFT-XL models generally outperform7 T0-3B and other baselines, with a median relative improvement of 13% over T0-3B. We also see that base-sized models also improve over baselines in Table 1—the DEFT-base models have a median relative improvement of 8% over the random baseline. All DEFT models are trained on 4Sanh et al. (2021) report that they train T5-XL on at most 500K instances per prompted dataset in P3, which amounts to about 10% of the pool.
5We use Pyserini (Lin et al., 2021) with default settings for the BM25 index. We retrieve the same amount of data as the subsets retrieved by DEFT.
6We retrieve 2500 nearest neighbours for T5-base as more retrieved neighbors led to better performance.
7The exceptions WSC and RTE have small evaluation sets and large variance (see Appendix C), leading us to believe these differences are not significant.
| Task | DEFT-XL | T0-3B | Rand-XL | Rand-Bal | T5-XL | BM25-XL | DEFT-base | Rand-base | T5-base | Maj. Class |
|------------|-----------|---------|-----------|------------|---------|-----------|-------------|-------------|-----------|--------------|
| CaseHold | 37.2 | 30.9 | 19.0 | 38.7 | 11.4 | 27.9 | 18.9 | 17.5 | 11.4 | 6.6 |
| DROP | 31.0 | 27.4 | 24.3 | 27.6 | 11.3 | 22.6 | 21.3 | 18.0 | 4.0 | - |
| QasperEv. | 28.5 | 19.9 | 17.9 | 23.2 | 8.2 | 20.3 | 15.9 | 11.0 | 8.2 | 19.9 |
| RTE | 74.0 | 70.4 | 78.3 | 78.0 | 53.1 | 74.3 | 61.7 | 61.0 | 52.0 | 53.4 |
| ANLI R1 | 39.8 | 35.0 | 35.3 | 40.0 | 32.9 | 37.5 | 29.6 | 33.3 | 32.9 | 33.4 |
| ANLI R2 | 37.5 | 32.6 | 35.3 | 36.9 | 33.5 | 36.9 | 32.5 | 22.3 | 33.5 | 33.4 |
| ANLI R3 | 41.4 | 35.3 | 38.0 | 41.7 | 33.8 | 41.1 | 31.6 | 33.1 | 32.7 | 33.5 |
| CB | 60.7 | 58.9 | 60.7 | 55.4 | 44.6 | 50.0 | 50.0 | 48.2 | 44.6 | 50.0 |
| HellaSwag | 33.1 | 28.2 | 27.4 | 29.3 | 23.0 | 28.7 | 25.9 | 25.0 | 23.0 | 25.7 |
| StoryCloze | 95.3 | 86.5 | 79.1 | 94.1 | 53.0 | 82.3 | 83.5 | 57.4 | 53.0 | 51.4 |
| WinoGrande | 50.6 | 50.0 | 49.2 | 49.2 | 50.8 | 50.1 | 50.8 | 50.1 | 50.8 | 50.4 |
| WSC | 39.4 | 50.0 | 47.1 | 46.2 | 36.3 | 36.5 | 42.3 | 36.5 | 36.3 | 63.5 |
| COPA | 95.0 | 74.0 | 80.0 | 88.0 | 60.0 | 79.0 | 66.0 | 44.0 | 60.0 | 55.0 |
| WiC | 54.9 | 51.1 | 51.4 | 57.5 | 51.7 | 51.9 | 49.4 | 50.0 | 51.7 | 50.0 |
| Average | 51.3 | 46.5 | 45.9 | 50.4 | 35.9 | 45.7 | 41.4 | 37.0 | 35.3 | - |
subsets of P3 consisting of 0.1–2% of all P3 data.
This confirms our hypothesis that training on a wellchosen subset of P3 is more beneficial for target task performance than training on a uniform sample of all available data. We also note that using dense representations appears crucial, as using BM25 for retrieval underperforms most baselines. Our results suggest that a general language model encoder can still retrieve relevant cross-task neighbors, contrary to the claims made by Lin et al. (2022).
Remarkably, DEFT-XL outperforms the majority baselines on two target datasets (QasperEvidence, ANLI R2) where T0-3B does not, and DEFT-base on one (COPA). This observation further confirms that multitask models trained on uniformly sampled data might be suffering from negative interference between tasks.
We run a similar experiment with SuperNaturalInstructions (SNI; Wang et al., 2022), a recent instruction-tuning dataset, as our pool of multitask data8and evaluate on a set of 12 diverse held-out test tasks. We use the same pool of data used to train Tk-Instruct (Wang et al., 2022), which consists of 100 examples from each English-language task in SNI. Notably, this means that DEFT has a much smaller pool of data to retrieve over compared to P3 (75K vs. 100M examples). We find in Table 2 that DEFT models are able to achieve performance similar to a model trained on all data, 8We use the split of SNI used by Wang et al. (2022) with only 100 train samples per task as our underlying pool for fair comparison with Tk-Instruct.
| Model | Avg. RougeL | Avg. # Training Samples |
|-------------|---------------|---------------------------|
| DEFT-XL | 49.2 | 3523 |
| Rand-XL | 45.7 | 3523 |
| Tk-Instruct | 50.7 | 75317 |
Table 2: Performance of models over 12 held-out tasks from SNI. Models are trained on data retrieved from SNI (DEFT, Rand), or all SNI data (Tk-Instruct).
with each DEFT model only trained on 5% of the total available data. DEFT models also significantly outperform training on randomly-chosen subsets.
See Appendix E for more details.
## 4.3 Few-Shot Finetuning Of Deft Models
Next, we assume we are able to label a small number of task-specific examples, and further train our DEFT models. We reuse the XL-size models trained in Section 4.2 and further train them using the parameter-efficient IA3 on the few-shot data used by Liu et al. (2022). As seen in table 3, DEFT models with few-shot finetuning
('DEFT-Few (1kQ)') perform on average 7% better than T0-3B models with few-shot finetuning ('T03B+IA3'), with statistically significant gains on 5 datasets. This shows that DEFT models serve as better starting points for few-shot finetuning than T0-3B, providing similar or better performance across all datasets despite being exposed to much less training data. Notably, DEFT-Few significantly outperforms T0-3B+IA3 on WinoGrande, for which zero-shot DEFT did not significantly out-
| T0-3B+IA3 | T5+IA3 | Rand+IA3 | Rand-Bal+IA3 | DEFT-Few (1kQ) | DEFT-Few (20-70Q) | |
|-------------|----------|------------|----------------|------------------|---------------------|---------|
| RTE | 77.52.0 | 57.04.3 | 83.31.1 | 82.91.0 | 79.41.3 | 81.31.6 |
| ANLI R1 | 44.93.0 | 39.61.8 | 43.32.3 | 46.50.9 | 47.31.4 | 47.31.5 |
| ANLI R2 | 39.51.7 | 36.51.4 | 40.31.6 | 42.91.8 | 40.82.8 | 42.22.7 |
| ANLI R3 | 40.22.2 | 34.81.1 | 39.32.3 | 44.32.1 | 44.32.1 | 42.91.8 |
| CB | 78.93.9 | 67.92.5 | 81.43.5 | 81.42.0 | 82.52.6 | 84.64.3 |
| HellaSwag | 34.70.6 | 27.51.1 | 38.11.1 | 42.11.6 | 42.52.1 | 45.91.8 |
| StoryCloze | 93.00.6 | 83.03.1 | 92.60.8 | 95.70.3 | 96.20.2 | 96.50.2 |
| WinoGrande | 50.61.3 | 49.80.8 | 51.42.3 | 54.02.6 | 55.93.0 | 55.23.1 |
| WSC | 64.83.5 | 51.01.0 | 55.83.0 | 61.55.3 | 63.35.2 | 59.63.8 |
| COPA | 82.02.7 | 61.64.2 | 86.61.7 | 91.43.0 | 95.41.5 | 92.62.2 |
| WiC | 54.91.9 | 56.63.0 | 54.52.4 | 56.22.2 | 57.72.9 | 57.42.9 |
| Average | 60.1 | 51.4 | 60.6 | 63.6 | 64.1 | 64.1 |
perform zero-shot T0-3B. These results suggest DEFT models are more amenable to few-shot finetuning than T0-3B. We also find that DEFT-few performs statistically significantly better than the strong Rand-Bal baseline with few-shot finetuning, further highlighting that DEFT is preferable for both zero and few-shot settings.
Few-shot retrieval In this experiment, we evaluate DEFT in a setting where we have access only to a small number of target-task labeled examples
(exactly what is available to T0-3B+IA3), and no additional unlabeled examples. We construct 5 fewshot sets for each dataset, for each set retrieve crosstask neighbors using the few-shot data, finetune T5 models on the retrieved data, and then finally finetune using IA3 on the labeled few-shot data itself.
To make up for the smaller query set, we retrieve the closest 2000 neighbors per query instance. As seen in Table 3, this still results in a model that outperforms T0-3B with few-shot tuning ('DEFT-Few
(20-70Q)'), and overall achieves similar performance to DEFT-Few (1kQ). Crucially, this shows that DEFT followed by few-shot finetuning may be a better alternative to few-shot finetuning T0-3B even when both methods have *exactly* the same target-task information available.
## 5 Analysis 5.1 Cross-Task Retrieval
What gets retrieved? We analyse what source datasets get selected during retrieval for each evaluation dataset (see Appendix F, Figure 4). We find that for most target datasets, the majority of source datasets are not selected, further strengthening our hypothesis that much of the massive multitask pool is not relevant to a given target task, and no single mixture of datasets is optimal for all target tasks.
We additionally find that no more than 27% of all instances within any source dataset is retrieved, suggesting that our approach is also effective at finding relevant subsets of data *within* large datasets.
Retrieval hyperparameters When retrieving cross-task data, the amount and quality of data retrieved is highly dependent on the *query size* (i.e.,
the number of task-specific instances used for retrieval) and *number of neighbors* (i.e., the number of cross-task samples retrieved per task-specific instance). In Figure 2, we show the effect of varying both query size (sweeping from 32 to all training data) and the number of neighbors (sweeping from 1 to 5000) on dataset performance on RTE
and CaseHold. We find that increasing the amount of data retrieved, whether through increasing the number of neighbors or query set size, results in improved performance up to a point, and then either plateaus or decreases, providing evidence for our hypothesis that using 'too much' data can result in reduced downstream performance due to negative interference.
What model should you use for retrieval? To determine the effect of of model size on indexing and retrieval, we train models using the cross-task neighbors retrieved by base and XL-size models when the query size and number of neighbors are
![7_image_0.png](7_image_0.png)
![7_image_1.png](7_image_1.png)
held constant. We find that using a larger (XL size)
indexing model generally results in better performance, but this gap is much larger when training a base size model (8%) than when training XL-size models (1%), suggesting that smaller models benefit more from larger retrieval models. We provide detailed results in Appendix D.
Are prompts useful for retrieval? All P3 data is in a prompted format, where the input is made up of (a) the input instance and (b) a prompt that contains information about the task. Training on prompted data greatly aids zero-shot generalisation (Wei et al., 2021b; Sanh et al., 2021), but it is unclear how useful it is for retrieval. To examine this, we run experiments using SuperNaturalInstructions. We index and retrieve the data with and without instructions in the input and compare the performance after training on retrieved subsets.9 We find that retrieving **without** instructions outperforms retrieving with instructions by a small 9We add instructions back into samples without them in order to isolate the effect of instructions on retrieval separate from their effect during finetuning.
margin, suggesting that DEFT relies more on instance information rather than task information for retrieval. We provide details in Appendix E.
## 5.2 Practicality Of Assuming Access To Unlabeled Data
Contrary to prior work, our approach assumes access to unlabeled data. This is a practical assumption given that unlabeled data is often readily available or is far cheaper to acquire than labeled data.
This is especially true for tasks such as Qasper or CaseHold, which require experts to carefully read (sometimes quite long) texts to provide labels.
We argue that DEFT's use of unlabeled data can make it a cost-efficient method to obtain a wellperforming task-specific model when the data labeling budget is limited.
We examine this by studying a scenario where QasperEvidence data was collected and assume we have access to P3 and DEFT to make efficient use of it. Obtaining labeled instances for QasperEvidence cost 3.25 times acquiring unlabeled (question-paragraph) instances.10 We com10Based on an estimate provided by the authors of the pare (Figure 3) performance on the test set of a T5-XL model trained on a varying number of labeled instances with a DEFT-XL model trained on cross-task nearest neighbors of 3.25 as many unlabeled instances. DEFT yields better results for smaller annotation budgets (< 1000 labelled examples), and underperforms models trained on thousands of labelled examples. This confirms our suggestion that DEFT is preferable to regular finetuning for limited data budgets. We also note the DEFT setup makes it easy to use target-task labeled data when available, as shown in Section 4.3.
## 6 Conclusion
In this work, we propose Data-Efficient FineTuning, a novel method for efficiently using multitask data by training task-specific models using only a small amount of unlabeled target task data. We use the unlabeled data to select subsets of the multitask data, and train models on these subsets. Our approach performs strongly even when as few as only 20 unlabeled examples are available, and is more effective than full-finetuning on labelled data when it is expensive to gather labelled data, or few (< 3000)
labelled data points are available. DEFT models can outperform same-sized models trained on all available data (e.g., T0), despite being trained on significantly less data. Overall, our results strongly suggest that training on all available data, even with large models, is not always the optimal choice and that focusing on ways to better curate higherquality, smaller datasets is a better path forward.
## Limitations
Our approach is based on the assumption of a limited data budget, and the observation that general multi-task training may not be the most efficient method when one cares about single target tasks.
As such, DEFT is not applicable to "true" zero-shot settings where one has no information about the target task, since it relies on the existence of at least some unlabelled examples. Furthermore, for some tasks it may be possible to cheaply gather many examples for finetuning beyond the point where DEFT is useful. In some cases, gathering unlabelled examples may not be so much cheaper than gathering labelled examples that it is worth considering whether to gather unlabelled or labelled examples. Additionally, the recent rise of sparse dataset. Questions were written after reading paper abstracts, and evidence selection required reading entire papers.
mixture-of-expert models (Shazeer et al., 2017; Fedus et al., 2022) may reduce the negative interference effect observed throughout our work, where DEFT models often outperform models trained on all multitask data and random subsets of the multitask data. Finally, we note that in pilot experiments we found that task diversity was a key element of strong held-out task performance. However, DEFT
does not explicitly correct for task diversity, and we leave further exploration for extending DEFT
to account for this to future work.
## Ethics Statement
We believe that the impact of our work is largely positive, showing a case where we are able to achieve good results with significant reductions in the amount of data used to train a model. We hope that this encourages future work in *data-efficiency*,
where we attempt to reduce the amount of data required to train an effective NLP model. Such research could aid in making the analysis of the data used to train models easier and cheaper, and reduce the training time and associated carbon cost
(Strubell et al., 2020) of models. However, we note also that our work currently assumes access to a large pool of multitask data, making it data-efficient only when it comes to training models, and relies on large language models already pretrained over massive datasets.
## References
Vamsi Aribandi, Yi Tay, Tal Schuster, Jinfeng Rao, Huaixiu Steven Zheng, Sanket Vaibhav Mehta, Honglei Zhuang, Vinh Q. Tran, Dara Bahri, Jianmo Ni, Jai Gupta, Kai Hui, Sebastian Ruder, and Donald Metzler. 2022. Ext5: Towards extreme multi-task scaling for transfer learning. In *International Conference on Learning Representations*.
Roy Bar Haim, Ido Dagan, Bill Dolan, Lisa Ferro, Danilo Giampiccolo, Bernardo Magnini, and Idan Szpektor. 2006. The second PASCAL recognising textual entailment challenge.
Luisa Bentivogli, Ido Dagan, Hoa Trang Dang, Danilo Giampiccolo, and Bernardo Magnini. 2009. The fifth PASCAL recognizing textual entailment challenge.
Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel Ziegler, Jeffrey Wu, Clemens Winter, Chris Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack
Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020.
Language models are few-shot learners. In *Advances in Neural Information Processing Systems*,
volume 33, pages 1877–1901. Curran Associates, Inc.
Ilias Chalkidis, Abhik Jana, Dirk Hartung, Michael Bommarito, Ion Androutsopoulos, Daniel Martin Katz, and Nikolaos Aletras. 2022. Lexglue: A benchmark dataset for legal language understanding in english. In *Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics*,
Dubln, Ireland.
Ido Dagan, Oren Glickman, and Bernardo Magnini.
2006. The PASCAL recognising textual entailment challenge. In Machine learning challenges. evaluating predictive uncertainty, visual object classification, and recognising tectual entailment, pages 177–190.
Springer.
Pradeep Dasigi, Kyle Lo, Iz Beltagy, Arman Cohan, Noah A. Smith, and Matt Gardner. 2021. A dataset of information-seeking questions and answers anchored in research papers. In *Proceedings of the* 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4599–4610, Online. Association for Computational Linguistics.
Marie-Catherine De Marneffe, Mandy Simons, and Judith Tonhauser. 2019. The CommitmentBank: Investigating projection in naturally occurring discourse. To appear in proceedings of Sinn und Bedeutung 23. Data can be found at https://github.com/mcdm/CommitmentBank/.
Dheeru Dua, Yizhong Wang, Pradeep Dasigi, Gabriel Stanovsky, Sameer Singh, and Matt Gardner. 2019.
DROP: A reading comprehension benchmark requiring discrete reasoning over paragraphs. In *Proceedings of the 2019 Conference of the North American* Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1
(Long and Short Papers), pages 2368–2378, Minneapolis, Minnesota. Association for Computational Linguistics.
William Fedus, Barret Zoph, and Noam Shazeer. 2022.
Switch transformers: Scaling to trillion parameter models with simple and efficient sparsity. Journal of Machine Learning Research, 23(120):1–39.
Tiezheng Ge, Kaiming He, Qifa Ke, and Jian Sun. 2013.
Optimized product quantization for approximate nearest neighbor search. In *Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition*,
pages 2946–2953.
Danilo Giampiccolo, Bernardo Magnini, Ido Dagan, and Bill Dolan. 2007. The third PASCAL recognizing textual entailment challenge. In Proceedings of the ACL-PASCAL workshop on textual entailment and paraphrasing, pages 1–9. Association for Computational Linguistics.
Suchin Gururangan, Ana Marasovic, Swabha ´
Swayamdipta, Kyle Lo, Iz Beltagy, Doug Downey, and Noah A. Smith. 2020. Don't stop pretraining:
Adapt language models to domains and tasks. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8342–8360, Online. Association for Computational Linguistics.
Kelvin Guu, Kenton Lee, Zora Tung, Panupong Pasupat, and Mingwei Chang. 2020. Retrieval augmented language model pre-training. In *International Conference on Machine Learning*, pages 3929–3938.
PMLR.
Xiaochuang Han and Yulia Tsvetkov. 2022. Orca: Interpreting prompted language models via locating supporting data evidence in the ocean of pretraining data. *arXiv preprint arXiv:2205.12600*.
Neil Houlsby, Andrei Giurgiu, Stanislaw Jastrzebski, Bruna Morrone, Quentin De Laroussilhe, Andrea Gesmundo, Mona Attariyan, and Sylvain Gelly. 2019.
Parameter-efficient transfer learning for nlp. In *International Conference on Machine Learning*, pages 2790–2799. PMLR.
Edward J Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. 2021. Lora: Low-rank adaptation of large language models. arXiv preprint arXiv:2106.09685.
Herve Jegou, Matthijs Douze, and Cordelia Schmid.
2010. Product quantization for nearest neighbor search. *IEEE transactions on pattern analysis and* machine intelligence, 33(1):117–128.
Jeff Johnson, Matthijs Douze, and Hervé Jégou. 2019.
Billion-scale similarity search with GPUs. *IEEE*
Transactions on Big Data, 7(3):535–547.
Urvashi Khandelwal, Omer Levy, Dan Jurafsky, Luke Zettlemoyer, and Mike Lewis. 2019. Generalization through memorization: Nearest neighbor language models. *arXiv preprint arXiv:1911.00172*.
Daniel Khashabi, Sewon Min, Tushar Khot, Ashish Sabharwal, Oyvind Tafjord, Peter Clark, and Hannaneh Hajishirzi. 2020. UNIFIEDQA: Crossing format boundaries with a single QA system. In *Findings of the Association for Computational Linguistics:*
EMNLP 2020, pages 1896–1907, Online. Association for Computational Linguistics.
Diederik P. Kingma and Jimmy Ba. 2015. Adam: A
method for stochastic optimization. In *ICLR (Poster)*.
Pang Wei Koh and Percy Liang. 2017. Understanding black-box predictions via influence functions. In International conference on machine learning, pages 1885–1894. PMLR.
Po-Nien Kung, Sheng-Siang Yin, Yi-Cheng Chen, TseHsuan Yang, and Yun-Nung Chen. 2021. Efficient multi-task auxiliary learning: Selecting auxiliary data
by feature similarity. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 416–428, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Brian Lester, Rami Al-Rfou, and Noah Constant. 2021.
The power of scale for parameter-efficient prompt tuning. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 3045–3059, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Hector J Levesque, Ernest Davis, and Leora Morgenstern. 2011. The Winograd schema challenge. In AAAI Spring Symposium: Logical Formalizations of Commonsense Reasoning, volume 46, page 47.
Patrick Lewis, Ethan Perez, Aleksandra Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Küttler, Mike Lewis, Wen-tau Yih, Tim Rocktäschel, et al. 2020. Retrieval-augmented generation for knowledge-intensive nlp tasks. *Advances in Neural Information Processing Systems*, 33:9459–9474.
Bill Yuchen Lin, Kangmin Tan, Chris Miller, Beiwen Tian, and Xiang Ren. 2022. Unsupervised crosstask generalization via retrieval augmentation. *ArXiv*,
abs/2204.07937.
Jimmy Lin, Xueguang Ma, Sheng-Chieh Lin, JhengHong Yang, Ronak Pradeep, and Rodrigo Nogueira.
2021. Pyserini: A Python toolkit for reproducible information retrieval research with sparse and dense representations. In *Proceedings of the 44th Annual* International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR
2021), pages 2356–2362.
Haokun Liu, Derek Tam, Mohammed Muqeeth, Jay Mohta, Tenghao Huang, Mohit Bansal, and Colin Raffel.
2022. Few-shot parameter-efficient fine-tuning is better and cheaper than in-context learning. arXiv preprint arXiv:2205.05638.
Yu A. Malkov and D. A. Yashunin. 2020. Efficient and robust approximate nearest neighbor search using hierarchical navigable small world graphs. IEEE
Transactions on Pattern Analysis and Machine Intelligence, 42(4):824–836.
Swaroop Mishra, Daniel Khashabi, Chitta Baral, and Hannaneh Hajishirzi. 2021. Natural instructions:
Benchmarking generalization to new tasks from natural language instructions. *CoRR*, abs/2104.08773.
Nasrin Mostafazadeh, Michael Roth, Annie Louis, Nathanael Chambers, and James Allen. 2017. Lsdsem 2017 shared task: The story cloze test. In Proceedings of the 2nd Workshop on Linking Models of Lexical, Sentential and Discourse-level Semantics, pages 46–51.
Yixin Nie, Adina Williams, Emily Dinan, Mohit Bansal, Jason Weston, and Douwe Kiela. 2020. Adversarial NLI: A new benchmark for natural language understanding. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*,
pages 4885–4901, Online. Association for Computational Linguistics.
Pouya Pezeshkpour, Sarthak Jain, Byron C Wallace, and Sameer Singh. 2021. An empirical comparison of instance attribution methods for nlp. arXiv preprint arXiv:2104.04128.
Jason Phang, Thibault Févry, and Samuel R. Bowman.
2018a. Sentence encoders on stilts: Supplementary training on intermediate labeled-data tasks. *ArXiv*,
abs/1811.01088.
Jason Phang, Thibault Févry, and Samuel R. Bowman.
2018b. Sentence encoders on stilts: Supplementary training on intermediate labeled-data tasks. *arXiv* preprint arXiv:1811.01088v2.
Mohammad Taher Pilehvar and Jose Camacho-Collados.
2019. WiC: The word-in-context dataset for evaluating context-sensitive meaning representations. In Proceedings of NAACL-HLT.
Clifton Poth, Jonas Pfeiffer, Andreas Rücklé, and Iryna Gurevych. 2021. What to pre-train on? Efficient intermediate task selection. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 10585–10605, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Garima Pruthi, Frederick Liu, Satyen Kale, and Mukund Sundararajan. 2020. Estimating training data influence by tracing gradient descent. *Advances in Neural* Information Processing Systems, 33:19920–19930.
Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J Liu, et al. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. *Journal of Machine Learning Research*,
21(140):1–67.
Stephen Robertson and Hugo Zaragoza. 2009. The probabilistic relevance framework: Bm25 and beyond. 3(4):333–389.
Melissa Roemmele, Cosmin Adrian Bejan, and Andrew S. Gordon. 2011. Choice of plausible alternatives: An evaluation of commonsense causal reasoning. In *2011 AAAI Spring Symposium Series*.
Keisuke Sakaguchi, Ronan Le Bras, Chandra Bhagavatula, and Yejin Choi. 2021. Winogrande: An adversarial winograd schema challenge at scale. *Commun.*
ACM, 64(9):99–106.
Victor Sanh, Albert Webson, Colin Raffel, Stephen H
Bach, Lintang Sutawika, Zaid Alyafeai, Antoine Chaffin, Arnaud Stiegler, Teven Le Scao, Arun
Raja, et al. 2021. Multitask prompted training enables zero-shot task generalization. arXiv preprint arXiv:2110.08207.
Noam Shazeer, *Azalia Mirhoseini, *Krzysztof Maziarz, Andy Davis, Quoc Le, Geoffrey Hinton, and Jeff Dean. 2017. Outrageously large neural networks: The sparsely-gated mixture-of-experts layer.
In *International Conference on Learning Representations*.
Weijia Shi, Julian Michael, Suchin Gururangan, and Luke Zettlemoyer. 2022. Nearest neighbor zero-shot inference. *arXiv preprint arXiv:2205.13792*.
Emma Strubell, Ananya Ganesh, and Andrew McCallum. 2020. Energy and policy considerations for modern deep learning research. Proceedings of the AAAI Conference on Artificial Intelligence, 34(09):13693–13696.
Tu Vu, Tong Wang, Tsendsuren Munkhdalai, Alessandro Sordoni, Adam Trischler, Andrew MattarellaMicke, Subhransu Maji, and Mohit Iyyer. 2020. Exploring and predicting transferability across NLP
tasks. In *Proceedings of the 2020 Conference on* Empirical Methods in Natural Language Processing
(EMNLP), pages 7882–7926, Online. Association for Computational Linguistics.
Alex Wang, Yada Pruksachatkun, Nikita Nangia, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R. Bowman. 2019. SuperGLUE: A stickier benchmark for general-purpose language understanding systems. *arXiv preprint 1905.00537*.
Yizhong Wang, Swaroop Mishra, Pegah Alipoormolabashi, Yeganeh Kordi, Amirreza Mirzaei, Anjana Arunkumar, Arjun Ashok, Arut Selvan Dhanasekaran, Atharva Naik, David Stap, et al. 2022.
Super-naturalinstructions:generalization via declarative instructions on 1600+ tasks. In *EMNLP*.
Jason Wei, Maarten Bosma, Vincent Y Zhao, Kelvin Guu, Adams Wei Yu, Brian Lester, Nan Du, Andrew M Dai, and Quoc V Le. 2021a. Finetuned language models are zero-shot learners. *arXiv preprint* arXiv:2109.01652.
Jason Wei, Chengyu Huang, Soroush Vosoughi, Yu Cheng, and Shiqi Xu. 2021b. Few-shot text classification with triplet networks, data augmentation, and curriculum learning. In *Proceedings of the 2021* Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 5493–5500, Online.
Association for Computational Linguistics.
Chih-Kuan Yeh, Joon Kim, Ian En-Hsu Yen, and Pradeep K Ravikumar. 2018. Representer point selection for explaining deep neural networks. Advances in neural information processing systems, 31.
Rowan Zellers, Ari Holtzman, Yonatan Bisk, Ali Farhadi, and Yejin Choi. 2019. Hellaswag: Can a machine really finish your sentence? In Proceedings
of the 57th Annual Meeting of the Association for Computational Linguistics.
Lucia Zheng, Neel Guha, Brandon R. Anderson, Peter Henderson, and Daniel E. Ho. 2021. When does pretraining help? assessing self-supervised learning for law and the casehold dataset of 53,000+ legal holdings. In Proceedings of the Eighteenth International Conference on Artificial Intelligence and Law, ICAIL
'21, page 159–168, New York, NY, USA. Association for Computing Machinery.
## A Compute Resources
We ran all experiments on a server with 8 80GB
A100 GPUs. Most models took 7-10 hours to train on a single 80GB A100 GPU.
## B Dataset Details
Sizes and Splits For each dataset used, we provide the number of retrieval and validation examples used in Table 4. We also indicate if the retrieval data was split from the validation or training split. Note any data used to retrieve is held out of the validation split to avoid information leakage.
We additionally provide the number of shots used for each dataset. We follow the number of splits used by Liu et al. (2022) and use the data shared by the authors (available at https://github.com/rthree/t-few/tree/master/data/few_shot).
Prompts We list the prompts used for each dataset. {x} indicates a space that is filled in by instance data.
- **CaseHold**: What is the correct holding statement for the following text? Text: {context}
(A): {ending 1} (B): {ending 2} (C): {ending 3} (D): {ending 4} (E): {ending 5}
- **DROP**: Passage: {passage} Question: {question} Answer:
- **QasperEvidence**: Question: {question} Paragraph: {paragraph} Is the answer to the question in the paragraph? Answer Yes or No.
- RTE: {premise} Question: Does this imply that "{hypothesis}"? Yes or no?
- **ANLI**: {premise} Question: {hypothesis}
True, False, or Neither?
- CB: {premise} Question: {hypothesis} True, False, or Neither?
| Dataset | Retrieval | Eval | #Shots | Retrieval from |
|-------------------------------------------|-------------|--------|----------|------------------|
| CaseHold (Zheng et al., 2021) | 1000 | 2900 | - | Validation |
| DROP (Dua et al., 2019) | 1000 | 8535 | - | Validation |
| QasperEvidence (Dasigi et al., 2021) | 1000 | 43673 | - | Validation |
| RTE* | 1000 | 277 | 32 | Train |
| ANLI R1 (Nie et al., 2020) | 1000 | 1000 | 50 | Train |
| ANLI R2 (Nie et al., 2020) | 1000 | 1000 | 50 | Train |
| ANLI R3 (Nie et al., 2020) | 1000 | 1000 | 50 | Train |
| CB (De Marneffe et al., 2019) | 250 | 56 | 32 | Train |
| HellaSwag (Zellers et al., 2019) | 1000 | 10003 | 20 | Train |
| StoryCloze (Mostafazadeh et al., 2017) | 1000 | 1871 | 70 | Train |
| WinoGrande (Sakaguchi et al., 2021) | 1000 | 1767 | 50 | Train |
| WSC (Levesque et al., 2011) | 554 | 104 | 32 | Train |
| COPA (Roemmele et al., 2011) | 400 | 100 | 32 | Train |
| WiC (Pilehvar and Camacho-Collados, 2019) | 1000 | 638 | 32 | Train |
Table 4: Size of splits used for experiments across datasets. '\#Shots' indicates the number of shots used in few-shot experiments, and 'retrieval from' indicates which split we selected retrieval data from. *Following SuperGLUE
(Wang et al., 2019), RTE data is from RTE 1/2/3/5 (Dagan et al., 2006; Bar Haim et al., 2006; Giampiccolo et al.,
2007; Bentivogli et al., 2009).
- **HellaSwag**: Complete the description with an appropriate ending: First, {context a} Then,
{context b} ... (a) {ending 1} (b) {ending 2}
(c) {ending 3} (d) {ending 4}
- **StoryCloze**: {input sentence 1} {input sentence 2} {input sentence 3} {input sentence 4}
What is a possible continuation for the story given the following options ? - {answer 1} -
{answer 2}
- **WinoGrande**: {sentence} What does the _
in the above sentence refer to? {option1} or
{option2}?
- WSC: Passage: {text} Question: In the passage above, does the pronoun '{span 1}' refer to '{span 2}'? Answer:
- **COPA**: {premise} As a consequence... Help me pick the more plausible option: - {choice 1} - {choice 2}
- WiC: {sentence 1} {sentence 2} Question: Is the word '{word}' used in the same sense in the two sentences above? Yes, No?
## C Few-Shot Results Without Ia3
For 'DEFT-Few (20-70Q)' in Table 3, we trained 5 models using DEFT (as we used 5 few-shot sets per dataset). In Table 5 we report the performance of these models *without IA3 training*. Note we did not train few-shot models for CaseHold, QasperEvidence, or DROP, and so do not report results on these datasets. Notably, RTE, CB, and WSC all have quite large standard deviation (> 3.0), which suggests our improvements (or deterioration, for WSC) over T0-3B for these datasets may not be significant.
## D Index Model Size Experiments
We explored mismatching the index model sizes, training XL size models on cross-task neighbor splits indexed and retrieved using T5-base, and vice-versa. We use a query size of 1000 and retrieve 500 neighbors per query instance. We present the results in Table 6.
## E **Supernaturalinstructions Experiments**
We use version 2.7 of the SuperNaturalInstructions dataset and use the official splits provided, with 100 samples per train and evaluation tasks. This results in a pool of 75,317 train examples. For evaluation, we randomly select one task per evaluation category in Table 5 of Wang et al. (2022).
Task names are given in Table 7. We then generate two indices for retrieval: one where each sample is encoded including the task instruction, and one where each sample is encoded without any instruction. We then retrieve using the 100 unlabeled test instances from each chosen evaluation task, matching the format used for the index (i.e., if we retrieve from the index with instructions, we encode our query data with instructions included). In order to isolate the effect of instructions on retrieval, after retrieving examples, we always train on the corresponding examples with instructions included (i.e.,
when we retrieve examples without using instructions, we add the instructions back into the inputs before finetuning). On average, we retrieve 3.5k training examples, roughly 5% of the total training data. Additionally, we finetune a T5-XL model using all available training data ('Tk-instruct'), and a random baseline using random subsets of the training data of the same size as the retrieved subsets
('Rand-XL').
We present our results in Table 8. We find that the instruction-augmented and no-instruction retrieval DEFT models achieve similar performance on average, although the no-instruction variant performs slightly higher. Both DEFT models significantly outperform the Rand-XL baseline, suggesting that the retrieval is still effective even when using a large pool of multitask data without instructions or prompts. However, we find that neither DEFT model significantly outperforms Tk-instruct, which we hypothesise is related to the significantly smaller size of SuperNaturalInstructions compared to P3. However, we note that our DEFT-XL models are trained on significantly less data than Tkinstruct, and training all 12 DEFT models is still cheaper than training the Tk-instruct model, using roughly 42,000 examples overall, roughly 56% of the data used to train Tk-instruct.
## F Retrieved Data
We present a breakdown of the data retrieved for each task using DEFT in Figure 4.
| Task | DEFT-Few (20-70Q) |
|------------|---------------------|
| RTE | 73.24.0 |
| ANLI R1 | 36.13.0 |
| ANLI R2 | 34.10.9 |
| ANLI R3 | 40.62.0 |
| CB | 58.210.5 |
| HellaSwag | 34.10.7 |
| StoryCloze | 95.10.3 |
| WinoGrande | 50.61.2 |
| WSC | 51.05.1 |
| COPA | 87.81.1 |
| WiC | 50.81.7 |
| Average | 55.6 |
Train Model Size Base XL Index Model Size Base XL Base XL
CaseHold 14.8 **15.8** 32.6 **37.2** DROP 20.8 **21.3** 30.4 **31.0**
Qasper 15.7 **18.0** 23.3 **28.5**
RTE 53.4 **61.7 77.3** 74.0 ANLI R1 **33.3 33.3** 39.5 **39.8** ANLI R2 **33.4** 32.8 35.3 **37.5** ANLI R3 33.2 **33.3 42.5** 41.4 CB **50.0 50.0 75.0** 60.7 HellaSwag 26.0 **27.9** 31.7 **33.1**
StoryCloze 74.0 **76.8** 94.4 **95.3** WinoGrande 49.5 **50.4 51.4** 50.6 WSC 41.4 **42.3 43.3** 39.4 COPA **63.0** 60.0 85.0 **95.0**
WiC **48.8** 48.3 49.5 **54.9** Average 39.8 **42.8** 50.8 **51.3**
Table 6: Performance of DEFT models trained on crosstask neighbors retrieved using different-size models.
| Evaluation Category | Task |
|-----------------------------|------------------------------------------------------|
| Answerability | task020_mctaco_answerability_classification |
| Cause Effect Classification | task391_cod3s_cause_effect_classification |
| Coreference | task1391_winogrande_coreference_resolution |
| Data to Text | task957_e2e_data_to_text |
| Dialogue Act Recognition | task879_schema_guided_dstc8_dialogue_act_recognition |
| Entailment | task937_defeasible_nli_atomic_textual_entailment |
| Grammar Error Correction | task1557_jfleg_grammar_error_correction |
| Keyword Tagging | task613_liar_keyword_tagging |
| Overlap | task039_qasc_overlap_extraction |
| Question Rewriting | task670_ambigqa_question_rewriting |
| Title Generation | task1356_xlsum_title_generation |
| Word Analogy | task1155_bard_word_analogy |
Table 7: List of tasks used for each evaluation category given in Table 8.
| DEFT-XL | | | | |
|-----------------------------|--------|-----------|------|-------------|
| Evaluation Category | Instr. | No Instr. | Rand | Tk-Instruct |
| Answerability | 48.0 | 48.0 | 49.0 | 47.0 |
| Cause Effect Classification | 83.3 | 83.3 | 84.7 | 87.7 |
| Coreference | 61.0 | 51.0 | 43.0 | 83.0 |
| Data to Text | 34.0 | 34.4 | 33.4 | 37.9 |
| Dialogue Act Rec. | 65.0 | 61.0 | 59.0 | 68.0 |
| Entailment | 50.0 | 68.0 | 13.0 | 19.0 |
| Grammar Error Correction | 86.3 | 84.8 | 84.7 | 84.8 |
| Keyword Tagging | 17.4 | 17.6 | 19.2 | 13.3 |
| Overlap | 17.7 | 20.2 | 22.3 | 17.8 |
| Question Rewriting | 45.8 | 64.0 | 59.9 | 68.8 |
| Title Generation | 21.4 | 20.9 | 20.3 | 20.4 |
| Word Analogy | 60.0 | 41.0 | 60.0 | 61.3 |
| Average | 49.2 | 49.5 | 45.7 | 50.7 |
Table 8: Performance of XL-size models on 12 tasks from evaluation categories in Wang et al. (2022). All results are in RougeL. 'Instr.' and 'No Instr.' variants of DEFT-XL refer to models trained using subsets of SuperNaturalInstructions that were retrieved using instructions and without using instruction respectively.
| qasper rte | story_cloze wic | winogrande | | | | | | | | | | | | | |
|-------------------------------------|---------------------|--------------|------|------|------|------|------|------|------|------|------|------|------|------|------|
| anli_r1 anli_r2 anli_r3 casehold cb | copa drop hellaswag | | | | | | | | | | | | | | |
| adversarial_qa | 0.28 | 0.28 | 0.11 | 0.15 | 0.04 | 0.00 | 0.47 | 0.07 | 0.24 | 0.09 | 0.00 | 0.01 | 0.00 | 0.03 | 0.02 |
| ag_news | 0.02 | 0.03 | 0.15 | 0.00 | 0.03 | 0.00 | 0.01 | 0.00 | 0.00 | 0.27 | 0.00 | 0.00 | 0.00 | 0.00 | 0.04 |
| amazon_polarity | 0.04 | 0.03 | 0.03 | 0.01 | 0.02 | 0.00 | 0.00 | 0.08 | 0.04 | 0.02 | 0.00 | 0.00 | 0.00 | 0.00 | 0.04 |
| app_reviews | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.04 |
| cnn_dailymail_3.0.0 | 0.01 | 0.00 | 0.08 | 0.27 | 0.02 | 0.00 | 0.05 | 0.05 | 0.01 | 0.08 | 0.01 | 0.00 | 0.00 | 0.00 | 0.04 |
| common_gen | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.03 | 0.05 | 0.01 | 0.04 |
| cos_e_v1.11 | 0.00 | 0.00 | 0.00 | 0.00 | 0.02 | 0.53 | 0.00 | 0.01 | 0.01 | 0.00 | 0.00 | 0.01 | 0.02 | 0.01 | 0.01 |
| cosmos_qa | 0.00 | 0.00 | 0.07 | 0.00 | 0.26 | 0.02 | 0.00 | 0.22 | 0.01 | 0.00 | 0.32 | 0.00 | 0.02 | 0.18 | 0.04 |
| dbpedia_14 | 0.09 | 0.10 | 0.01 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.04 |
| dream | 0.00 | 0.00 | 0.00 | 0.00 | 0.16 | 0.02 | 0.00 | 0.01 | 0.01 | 0.00 | 0.01 | 0.01 | 0.02 | 0.03 | 0.00 |
| duorc_ParaphraseRC 0.01 | 0.01 | 0.00 | 0.01 | 0.00 | 0.00 | 0.00 | 0.01 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.01 | 0.04 | |
| duorc_SelfRC | 0.01 | 0.01 | 0.00 | 0.04 | 0.00 | 0.00 | 0.00 | 0.01 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.04 |
| gigaword | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.01 | 0.00 | 0.00 | 0.00 | 0.00 | 0.04 |
| glue_mrpc | 0.00 | 0.00 | 0.01 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.03 | 0.00 | 0.05 | 0.02 | 0.02 | 0.00 |
| glue_qqp | 0.00 | 0.00 | 0.01 | 0.00 | 0.01 | 0.01 | 0.00 | 0.01 | 0.21 | 0.03 | 0.00 | 0.60 | 0.00 | 0.01 | 0.04 |
| imdb | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.04 |
| kilt_tasks_hotpotqa | 0.18 | 0.19 | 0.06 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.01 | 0.00 | 0.00 | 0.00 | 0.00 | 0.04 |
| multi_news | 0.00 | 0.00 | 0.02 | 0.30 | 0.01 | 0.00 | 0.02 | 0.02 | 0.02 | 0.03 | 0.00 | 0.00 | 0.00 | 0.00 | 0.03 |
| paws_labeled_final | 0.08 | 0.06 | 0.03 | 0.00 | 0.02 | 0.00 | 0.00 | 0.00 | 0.05 | 0.09 | 0.00 | 0.11 | 0.09 | 0.13 | 0.04 |
| qasc | 0.00 | 0.00 | 0.01 | 0.00 | 0.03 | 0.02 | 0.00 | 0.00 | 0.03 | 0.02 | 0.00 | 0.06 | 0.06 | 0.05 | 0.01 |
| quail | 0.00 | 0.00 | 0.00 | 0.08 | 0.03 | 0.00 | 0.00 | 0.12 | 0.03 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.02 |
| quarel | 0.00 | 0.00 | 0.02 | 0.00 | 0.01 | 0.01 | 0.00 | 0.00 | 0.03 | 0.00 | 0.02 | 0.03 | 0.05 | 0.10 | 0.00 |
| quartz | 0.00 | 0.00 | 0.01 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.01 | 0.01 | 0.00 | 0.02 | 0.02 | 0.02 | 0.00 |
| quoref | 0.01 | 0.01 | 0.00 | 0.03 | 0.00 | 0.00 | 0.21 | 0.01 | 0.01 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.02 |
| ropes | 0.02 | 0.02 | 0.04 | 0.01 | 0.02 | 0.00 | 0.04 | 0.27 | 0.08 | 0.01 | 0.07 | 0.00 | 0.01 | 0.03 | 0.01 |
| rotten_tomatoes | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.01 |
| samsum | 0.00 | 0.00 | 0.01 | 0.00 | 0.12 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.05 | 0.00 | 0.00 | 0.00 | 0.01 |
| sciq | 0.01 | 0.00 | 0.01 | 0.00 | 0.00 | 0.01 | 0.00 | 0.02 | 0.02 | 0.00 | 0.00 | 0.01 | 0.00 | 0.01 | 0.01 |
| social_i_qa | 0.00 | 0.00 | 0.05 | 0.00 | 0.17 | 0.39 | 0.00 | 0.05 | 0.01 | 0.01 | 0.52 | 0.03 | 0.62 | 0.32 | 0.02 |
| trec | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.03 | 0.00 | 0.00 | 0.01 | 0.00 | 0.00 | 0.00 |
| wiki_bio | 0.00 | 0.01 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.04 |
| wiki_hop_original | 0.04 | 0.04 | 0.01 | 0.01 | 0.00 | 0.00 | 0.11 | 0.00 | 0.01 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.04 |
| wiki_qa | 0.17 | 0.18 | 0.11 | 0.01 | 0.02 | 0.00 | 0.00 | 0.00 | 0.15 | 0.13 | 0.00 | 0.01 | 0.00 | 0.01 | 0.01 |
| wiqa | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.01 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.03 |
| xsum | 0.01 | 0.01 | 0.13 | 0.07 | 0.01 | 0.00 | 0.07 | 0.01 | 0.01 | 0.15 | 0.00 | 0.00 | 0.00 | 0.00 | 0.04 |
| yelp_review_full | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.01 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.04 |
## G Retrieved Examples
For a single query from each dataset, we present the top two closest datapoints retrieved below. **Content warning: some of these datapoints reference sensitive topics.** Queries are chosen randomly. Answers are in *italics*.
RTE
Query: Thanks to a global ban on the ivory trade that was passed in 1989 by the Convention on International Trade in Endangered Species of Wild Fauna and Flora (CITES), the African elephant population may be reversing its spiral toward extinction\n Question: Does this imply that "The ban on ivory trade has been effective in protecting the elephant from extinction."? Yes or no?
Retrieved \#1: Title: Dissappointed\n Review: The software works OK, but haven't gotten any more than three numbers on a draw six lottery after 8 months of trying. The biggest thing to watch out for is support, or lack of. If you rebuild your computer or buy a new one and have to re-install their software, you have to get another product ID from them. It took me almost two weeks of begging and a phone call (just an answering machine on their end) to get a response from them. I am coming up on a week of trying to get a response from them for a product ID for my new computer. Funny, because they responded the next day when I first baught the program and they had my money in hand!\n Does this product review convey a negative or positive sentiment? *Negative* Retrieved \#2: You are considering whether to buy a product. You look at the reviews. Would the following review decrease or increase the chances of you buying the product?\n Review title:
Amazon Rip Off\n Product review: What a huge waste of money. I paid $$$ on this very site not but a month ago, now it is
$$. Got it home, followed the instructions and the silly thing will not get but about a foot off the ground if that, and then it just falls over and beats itself into the ground. Don't waste your cash on this, give your kid a fifty dollar bill and let them light it on fire, they'll have for fun. *decrease* ANLI R1 Query: Secrets of the Cryptkeeper's Haunted House was a childrens Saturday- ´
morning game show that ran on CBS.
It premiered on September 14, 1996 and lasted until August 23, 1997. It featured the Cryptkeeper of "Tales from the Crypt" (with John Kassir as the voice)
now serving as an announcer. It is the last TV series in the "Tales From the Crypt" franchise.\n Question: The Secrets of the Crypt Keepers House television ´
show aired on CBS until 1997, and then was picked up and aired on NBC for an additional season. True, False, or Neither?
Retrieved \#1: Is there a negative or positive tone to this product review?\n
===\n Title: Not quite as good as some others\n Review: This is a fair book, but it is not near as good as Peter O.
Steiner's "Thursday Night Poker." Andy Nelson's book can't decide whether it is for beginners or advanced, so it tries to fit advanced technique into too short of space.
It barely scratches the surface of any of the topics it brings up. When it doesn't do that, it simply says, "Play so tight that you don't even have to think. Fold 99% of your hands." That does not make for a fun night, in my opinion.\n Answer: *Negative* Retrieved \#2: a delegation from the islamic resistance movement -lrb- hamas
-rrb- left the gaza strip monday morning ,
heading for egypt to hear israel 's response regarding a cairo - mediated ceasefire . In a nutshell, *hamas leaders leave to cairo* for final ceasefire discussions
## Anli R2
Query: The Sea Wall (French: Un barrage contre le Pacifique ) is a 2008 film by Cambodian director Rithy Panh in a French/Cambodian/Belgian co-production.
The film opened on 7 January 2009 in France. It was adapted from the 1950 novel "The Sea Wall" by Marguerite Duras. The novel had previously been adapted as "This Angry Age" by René Clément in 1958.\n Question: Marguerite Duras directed the film. True, False, or Neither?
Retrieved \#1: Title: Exactly what I
had been looking for!\n Review: I've gone through two other iPod FM transmitters that I ended up giving away because the quality was less than desirable. After seeing this one pop up in my Quick Picks last week I decided to give it a try. I used it the very first evening I received it and I'm happy to say my search is over. As others noted, use a low FM frequency for the best results (87.9 in my area works well). I don't receive any interference and the music on my iPod comes through just like I expected. For the price, this is definitely the best deal out there.\n Is this product review negative? No Retrieved \#2: Based on this review, would the user recommend this product?\n===\n Review: My friend tried to commit suicide, and while he was bleeding to death, he was watching mtv, and the video for "Hold On" was playing, and he was like "yeah" and after he was done rocking out he got all inspired and called for an ambulance. And now he's still here, and he takes pills that make him tired, and everyone is careful to be very nice to him and be his best friend, even though we all secretly pity him. Thank you so much.\n Answer: No ANLI R3 Query: Well, I think during the campaign, particularly now during this difficult period, we ought to be speaking with one voice, and I appreciate the way the administration has worked hard to calm the tensions. Like the vice president, I call on Chairman Arafat to have his people pull back to make the peace.\n Question:
Chairman Arafat needs to pull back his people during this difficult time. True, False, or Neither?
Retrieved \#1: Title: clinton pushes for greater diversity on wall street\n\n===\n\n Write an article with the given title: u.s.
president bill clinton urged wall street brokers to pursue business in america 's economically distressed cities , saying it
's an untapped market with more buying power than mexico .
Retrieved \#2: You are considering whether to buy a product. You look at the reviews. Would the following review decrease or increase the chances of you buying the product?\n Review title:
Mistake\n Product review: I didn't want to "purchase" Bars and Tones". It was a mistake to click on it. This review doesn't deserve so many words.\n *decrease* WiC
Query: It may rain in which case the picnic will be canceled.\n A window case.\n Question: Is the word 'case' used in the same sense in the two sentences above? Yes, No?
Retrieved \#1: Title: remains of \#\#
exhumed from mass graves in eastern croatia\n \n===\n \n Write an article with the given title: *thirty bodies believed to* be croats killed by ethnic serbs at the outbreak of the \#\#\#\#-\#\# serbo-croatian war in former yugoslavia have been exhumed from two mass graves in eastern croatia , an official said tuesday .
Retrieved \#2: You are considering whether to buy a product. You look at the reviews. Would the following review decrease or increase the chances of you buying the product?\n Review title: For the 50-cent table\n Product review: My favorite author has run out of steam!
His co-author does not, repete, does not have the Paterson style. After sampling this "tandemly"-wriiten book, it becomes obvious that this is a time-waster. Even the editing is bad. I didn't feel guilty about not finishing it. It's headed for the community library's monthly book sale–fifty cent table.\n *decrease* COPA
Query: The woman filed a restraining order against the man. As a consequence...
\n Help me pick the more plausible option:\n- The man called her.\n- The man stalked her.
Retrieved \#1: First sentence of the article: when christopher darden got a recent early-morning call from his publisher that his book " in contempt "
had become no. \# on the new york times best-seller list , he mumbled something like " ok , " then rolled over and went back to sleep .\n\n Title: contempt does n't fill christopher darden Retrieved \#2: "Extract the answer to the following question from the movie plot.
If the question isn't answerable, please output "Can't answer".\n Question: Who is the toy's leader and Andy's favorite toy?\n Title: Toy Story\n Movie plot:
A boy called Andy Davis (voice: John Morris) uses his toys to act out a bank robbery. The bank is a cardboard box, the robber is Mr. Potato Head (voice: Don Rickles) assisted by Slinky Dog (voice:
Jim Varney), and the bystanders include Bo Peep (voice: Annie Potts) and her sheep. The day is saved by cowboy doll Woody (voice: Tom Hanks) playing the sheriff, with help from Rex the dinosaur
(voice: Wallace Shawn). Woody is the only toy who gets to say his own lines because he has a pull-string that makes him say things like "Reach for the sky!"
and "You're my favorite deputy!"During the opening credits (soundtrack: Randy Newman's "You've Got a Friend in Me"),
Andy takes Woody downstairs to find his mother (voice: Laurie Metcalf) decorating the dining room for his birthday party. He asks if they can leave the decorations up until they move, and his mom agrees. She says the guests will arrive soon and sends him back upstairs to get his baby sister Molly (voice: Hannah Unkrich), whose crib is in his room. Andy tosses Woody onto his bed before he pulls Molly out of her crib and carries her away.Woody and the other toys have seemed limp and inanimate up to this point, but as soon as Andy leaves the room, Woody sits up and expresses surprise that the birthday party is today. <cut for space> ...\n *Woody* WSC
Query: Passage: Dan took the rear seat while Bill claimed the front because his
"Dibs!" was quicker. \n Question: In the passage above, does the pronoun "his" refer to Dan?\n Answer:
Retrieved \#1: Title: I want to READ it on my Kindle\n Review: Why can't I get the readable version of night for my kindle?
I don't want the auidio version...Help! I
downloaded it thinking that I would have the choice to read it or to listen to it but that was not the case at all. I'm extremely disappointed.\n Does this product review convey a negative or positive sentiment?
Negative Retrieved \#2: You are considering whether to buy a product. You look at the reviews. Would the following review decrease or increase the chances of you buying the product?\n Review title: Look weird - feel great!\n Product review:
These look so weird and also feel weird when you first put them on but they are so much fun. I love them for my yoga class, and sometimes wear them at night watching TV because the separation they give your toes is good for your feet overall.
Try them... you'll become a fan too!\n increase WinoGrande Query: The phone of Donald is a lot better than Adam's because _ paid extra for his phone.\n What does the _ in the above sentence refer to? Donald or Adam?
Retrieved \#1: Title: more than you expect\n Product review: The thing about these tillers is that they do things you might not think about. For instance, they're great for dealing with long-rooted weeds. You can hack your way down to the root, then pull up the plant and not leave a huge hole in the ground.\n Would you say this review depicts the product in a flattering or unflattering light?\n flattering Retrieved \#2: Title: purported statement from al-qaida-linked group says ultimatum against italy ends threatens attacks\n\n===\n\n Write an article with the given title: a statement released sunday in the name of an al-qaida-linked group said the italian government has " dug its grave by its own hands " after it ignored a warning to withdraw its troops from iraq by aug. \#\# .
HellaSwag Query: Complete the description with an appropriate ending:\n First, [header] How to make a butterfly out of plastic spoons [title] Gather the materials you will need for this project, listed below. [title] Put a craft cloth or some newspaper down on your working surface. [title] Cut the top portion of the four spoons off (leaving about half an inch of the handle left. Then, ...
Retrieved \#1: Title: hmm...\n Review: I
bought this costume in hopes of wearing for Halloween ( last year). I had even separately purchased the duster ( which I am now using to really dust things). Uhh... I tried it on ( I got a X-Small) and its just big... the net piece ( part of the dress with the dots) go all the way down to almost my knees. Which makes it awkward and not sexy at all- its just weird I tried tucking the net part in to my undies to hold it, but it just becomes supper puffyagain looks weird. I never wore it and its still brand new sitting in my closet somewhere.Maybe its just for my body- I am not sure, but the material isn't as great either compared to the picture. Def. does not look anything close to how the model looks in it.Sorry- this was not a good buy at all. The model sure looks good in it.\n Does this product review convey a negative or positive sentiment? *Negative* Retrieved \#2: What type of details about adolf heeb\n can be gathered from the following bio?\n\n Bio: adolf heeb -lrb- born 11 july 1940 -rrb- is a former cyclist and politician from liechtenstein .\n he competed in the individual road race at the 1960 summer olympics .\n he later served as a member of the landtag of liechtenstein and leader of the patriotic union party.
CB
Query: B: boy, he's a big one. A: he's pretty big. That's why it really surprises me, you know, that he hasn't come back, because, like I said, he's never gone away like this before, and, I would think, you know, I mean, he might could get hurt by a car or something. I don't know that he could really get killed that easily because he is so big.\n Question: he could really get killed that easily True, False, or Neither?
Retrieved \#1: Summarize this document: Glen Water Limited also paid costs of \u00a31,600 to restore fish stocks in the Tall River near Richhill.\n About 250 metres of the river was affected when untreated sewage was discharged into it.\n It caused what was described as a moderate ¨¨fish kill.\n Inspectors found a plume of untreated sewage coming from a discharge pipe at Richhill waste water treatment works in 2014.\n An investigation found that an üninterruptable power sourceät the plant had failed.\n In addition, a power cut to the alarm system meant staff were unaware of the problem.\n Glen Water Limited is based at Dartford in Kent.\n Under a 25-year public private partnership it has the contract for 25% of Northern Ireland's waste water treatment capacity.\n It operates and maintains nine treatment works or pumping stations up to 2032 in return for monthly payments.\n Summary:
A company which treats sewage for NI
Water under a public private partnership contract has been fined \u00a32,500 for polluting a County Armagh river.
Retrieved \#2: Title: Good\n Review:
Well, I'd say all of these songs are well constructed, dope lyrics whatever... but wth? all the basslines sound the same or what? Personally i prefer Violent By Design over this.\n Is this product review negative? No StoryCloze Query: Andy had always wanted a big kids bike. When he turned six Year's old he asked for a bike for his birthday. He did not know how to ride a bike. On Andy's birthday his mother gave him a bike. What is a possible continuation for the story given the following options ?\n -
Andy cried for hours.\n - His dad taught him how to ride it.
Retrieved \#1: Based on this review, would the user recommend this product?\n
===\n Review: I love most Neil Young but every fan knows that about one in three of his albums really sucks. After Greendale and Greatest hits, I'm very disapointed.\n Answer: No Retrieved \#2: hong kong share prices rose a mere \#.\#\# percent on late overseas buying thursday despite early profit-taking
, dealers said .\n \n ===\n \n Given the above sentence, write its title: hong kong shares close \#.\#\# percent firmer CaseHOLD
Query: What is the correct holding statement for the following text?\n Text:
component of the res judicata doctrine.
The Ohio Supreme Court held that the original criminal proceedings in Krahn were insufficient to invoke collateral estoppel in the later malpractice case because the claimed error by Krahn's criminal lawyer in plea negotiations was not " 'actually and necessarily litigated and determined' in the denial of her motion to vacate the criminal judgment against her." Krahn, 43 Ohio St.3d at 108, 538 N.E.2d 1058, quoting Goodson v.
McDonough Power Equip., Inc. (1983),
2 Ohio St.3d 193, 195, 2 OBR 732, 443 N.E.2d 978. The Supreme Court by no means suggested that collateral estoppel was completely inapplicable in the context of a criminal conviction when, as here, matters genuinely were litigated and determined. Id. at 107, 538 N.E.2d 1058 (<HOLDING>). Decisions in Ohio other than Krahn relative \n (A):
recognizing the doctrine of collateral estoppel in agency proceedings\n (B):
holding that the facts prevent the invocation of collateral estoppel as a bar to krahns cause of action in this case\n (C):
holding collateral estoppel elements met considering changed circumstances in the context of an exception to the general rule of collateral estoppel\n (D): recognizing the cause of action\n (E): holding that collateral estoppel applies to 1983 claims Retrieved \#1: Is there a negative or positive tone to this product review?\n
===\n Title: Too steep\n Review: I bought this for my dog who had back problems, it was way too steep and my dog had to jump about 3/4's of the way up to my bed because the measurement of the ramp on the description was incorrect. It totally defeated the purpose of my dog having to not jump. I had to go back to the stairs I
had been using\n Answer: *Negative* Retrieved \#2: Write a title for this sentence: the fate of president barack obama 's top domestic priority - a remake of the u.s. health care system - now rests in the hands of a pivotal but deeply divided senate committee . \n \n Title:
toughest test coming up for health care overhaul DROP
Query: Passage: Coming off their overtime win at San Diego, the Broncos traveled to the Mall of America Field at the Hubert H. Humphrey Metrodome for an interconference duel with the Minnesota Vikings. The game's first points came from the Vikings, when defensive end Jared Allen tackled running back Willis McGahee in the end zone for a safety. The Broncos grabbed the lead when linebacker Mario Haggan returned an interception off Vikings' quarterback Christian Ponder 16 yards for a touchdown ... <cut for space> ... On the Broncos' next possession, McGahee rushed 24 yards for a touchdown and Tebow scrambled for a two-point conversion to tie the game at 29. The Vikings subsequently reclaimed the lead on Longwell's 39-yard field goal with 3:06 left in the game.
The Broncos answered with kicker Matt Prater's 46-yard field goal with 1:33 left to tie the game at 32. On the Vikings' ensuing possession, Broncos' cornerback André Goodman returned an interception off Ponder to the Vikings' 15-yard line. Six plays later, Prater nailed the game-winning 23-yard field goal as time expired to give the Broncos their fifth consecutive win.\n Question: how many yards did longwell make?\n Answer:
Retrieved \#1: Make a title for this article: andy roddick hit a record-breaking \#\#\# mph -lrb- \#\#\#.\# kph -rrb- serve friday in a lopsided win over stefan koubek as the united states took a \#-\# davis cup lead over austria . \n \n roddick ginepri give united states \#-\# lead over austria Retrieved \#2: Orton does not start against Ohio State Purdue quarterback Kyle Orton did not start Saturday \#39;s game against Ohio State, though he was listed as available to play. Orton has been bothered by a right hip injury for the last month. \n
\n Which of the following sections of a newspaper would this article likely appear in? World News, Sports, Business, or Science and Technology? *Sports* Qasper Query: Question: How big is Augmented LibriSpeech dataset? Paragraph: We introduce a multilingual speech-to-text translation corpus, CoVoST, for 11 languages into English, diversified with over 11,000 speakers and over 60 accents. We also provide baseline results, including, to our knowledge, the first end-to-end many-to-one multilingual model for spoken language translation. CoVoST is free to use with a CC0 license, and the additional Tatoeba evaluation samples are also CC-licensed. Is the answer to the question in the paragraph? Answer Yes or No.
Retrieved \#1: Title: make your july \# celebration sizzle\n \n ===\n \n Write an article with the given title: you have less than a week to get your fourth of july cookout menu set and we thought we
'd help .
Retrieved \#2: Title: A good idea...\n Review: that went terribly bad. I cannot comprehend how some of these "artists" were chosen for this. "Atlantic City" and "State Trooper" are embarrasing to say the least, but they sadly showcase what is now Nashville's finest. If Johnny Cash and Dar Williams recordings had not appeared on this CD, one star would have been too many. Thankfully, these mostly pathetic renderings cannot tarnish the greatness of Mr. Springsteen or his amazing album. Go get the original. You won't be sorry.\n Does this product review convey a negative or positive sentiment?
Negative
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
✓ A1. Did you describe the limitations of your work?
Limitations section after conclusion (non-numbered).
✓ A2. Did you discuss any potential risks of your work?
Ethics Statement, non-numbered section.
✓ A3. Do the abstract and introduction summarize the paper's main claims?
Abstract, Section 1 (introduction).
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✓ **Did You Use Or Create Scientific Artifacts?**
Section 4, in which we discuss the models and indexes we create.
✓ B1. Did you cite the creators of artifacts you used?
Primarily section 4.1, where we discuss experimental details.
✗ B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
All artefacts used were created and shared for research purposes, which we use them for. We refer readers to the papers introducing these artefacts for more details.
✗ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
We use these artefacts only for research purposes in this work.
✗ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
We do not create any new datasets or significantly alter the datasets we use, and make use only of extremely popular existing datasets.
✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
Some details are in Section 4.1, and more in Appendix B and F. We mainly report what tasks are in the multitask datasets we use.
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
Some details are in Section 4.1, and more in Appendix B.
The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance.
## C ✓ **Did You Run Computational Experiments?** Primarily Section 4
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
Section 4 contains experiments and their details, with details on the computing infrastructure in Appendix A.
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Section 4
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
We report mean and standard deviation values for the few-shot experiments we run, and report this in section 4, and note statistically significant values. Due to the compute cost of full-finetuning 3B
parameter models, we do not do this for our full-finetuning experiments
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
Section 4.1 includes details on the packages and steps used for retrieval of the datasets we train on.
D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
No response.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)?
No response.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
No response.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
No response.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
No response. |
r-menon-etal-2023-coaug | {C}o{A}ug: Combining Augmentation of Labels and Labelling Rules | https://aclanthology.org/2023.findings-acl.577 | Collecting labeled data for Named Entity Recognition (NER) tasks is challenging due to the high cost of manual annotations. Instead, researchers have proposed few-shot self-training and rule-augmentation techniques to minimize the reliance on large datasets. However, inductive biases and restricted logical language lexicon, respectively, can limit the ability of these models to perform well. In this work, we propose CoAug, a co-augmentation framework that allows us to improve few-shot models and rule-augmentation models by bootstrapping predictions from each model. By leveraging rules and neural model predictions to train our models, we complement the benefits of each and achieve the best of both worlds. In our experiments, we show that our best CoAug model can outperform strong weak-supervision-based NER models at least by 6.5 F1 points. | # Coaug**: Combining Augmentation Of Labels And Labelling Rules**
Rakesh R Menon 1∗ Bingqing Wang 2 **Jun Araki** 2 Zhengyu Zhou 2 Zhe Feng 2 **Liu Ren** 2 1 UNC Chapel-Hill 2 Bosch Research North America & Bosch Center for Artificial Intelligence (BCAI)
[email protected]
{bingqing.wang, jun.araki, zhengyu.zhou2, zhe.feng2, liu.ren}@us.bosch.com
## Abstract
Collecting labeled data for Named Entity Recognition (NER) tasks is challenging due to the high cost of manual annotations. Instead, researchers have proposed few-shot self-training and rule-augmentation techniques to minimize the reliance on large datasets. However, inductive biases and restricted logical language lexicon, respectively, can limit the ability of these models to perform well. In this work, we propose **CoAug**, a co-augmentation framework that allows us to improve few-shot models and ruleaugmentation models by bootstrapping predictions from each model. By leveraging rules and neural model predictions to train our models, we complement the benefits of each and achieve the best of both worlds. In our experiments, we show that our best **CoAug** model can outperform strong weak-supervision-based NER models at least by 6.5 F1 points on the BC5CDR, NCBI-Disease, WikiGold, and CoNLL-2003 datasets.1
## 1 Introduction
Named Entity Recognition (NER) is the task of identifying entity spans of specific types in a given document. While deep learning has led to the development of highly performant supervised NER
models (Ma and Hovy, 2016; Lample et al., 2016; Devlin et al., 2019), their performance is contingent on the availability of high-quality large labeled datasets, which is often expensive to collect. Moreover, it is impractical to assume the availability of large datasets for all domains. Hence, learning from limited labeled data is a pressing challenge in named entity recognition research. The majority of research in this area can be broadly classified into two distinct paradigms: few-shot learning with pre-trained language models (LMs) and weak supervision methods that utilize heuristic rules for entity extraction.
∗ Work done during an internship at Bosch Research.
1 Code: https://github.com/boschresearch/CoAug
![0_image_0.png](0_image_0.png)
In few-shot learning, models are trained to identify novel entities given just a few labeled examples for each entity type. While pretrained LMs have been explored for this setting, their susceptibility to overfitting on small datasets results in poor performance. Consequently, recent works improve recognition using *prototypical networks*
(**ProtoBERT**, Tänzer et al., 2022), improved representations from self-supervised pre-training of LMs
(**QuIP**, Jia et al., 2022), and self-training (Huang et al., 2021). In the iterative learning process of self-training, many candidate entities are extracted and added into the training set for future iterations.
However, premature models from initial iterations also add erroneous entities to the training set, resulting in models whose performance lags behind fully-supervised models that utilize large labeled datasets.
On the other hand, rule-based weak supervision methods utilize heuristic rules and manual lexicons
(Shang et al., 2018; Peng et al., 2019) developed by domain experts to supervise entity recognition models. However, experts may find it challenging to enumerate all possible heuristics, which can limit the diversity of identified entities in docu9062 ments. In recent work, **TaLLOR** (Li et al., 2021)
overcomes this limitation by automatically learning rules given unlabeled data and an initial set of seed rules (tens of rules). Nonetheless, while rule-based methods offer high precision, their performance is constrained by the logical language specified by the developer, which limits the set of identifiable entities. Moreover, learning rules can fail to identify entities in new linguistic contexts that would otherwise be known.
We hypothesize that the two paradigms of fewshot learning and rule-based weak supervision can effectively complement each other, as neural models are skilled at identifying candidates from different linguistic contexts but lack precision, while rulebased methods can identify accurate candidates with precision but lack the flexibility to identify entities in different contexts. Therefore, in this work, we propose *Co-Augmentation* (**CoAug**), as shown in Figure 1, an iterative bootstrapping framework that effectively combines neural models, rule-based weak supervision methods, and unlabeled data.
Our proposed framework draws inspiration from co-training (Blum and Mitchell, 1998), but it has its own unique approach. Like co-training, **CoAug**
aims to combine two distinct inductive biases in limited labeled data settings. Unlike co-training, instead of improving two models that use different feature sets individually by bootstrapping labels from each other, **CoAug** accomplishes the same goal by using two models that use different forms of supervision to expand the same label set. Additionally, in each iteration of **CoAug**, both classifiers are trained with the predictions made by both models, rather than just one. Our choice allows the framework to function from really small initial training sets for the individual models.
We evaluate our approach on four named entity recognition datasets that span general and science domains. Our results indicate that (a) **CoAug** consistently improves performance over self-training ruleaugmentation and few-shot models while being highly precise, (b) utilizing stronger pre-training for the neural models leads to improved performance of models in our framework.
In summary, our contributions are as follows:
- We present **CoAug**, a co-augmentation framework that leverages both rule-augmentation and label-augmentation approaches for NER.
- Experimental results show that **CoAug** can perform better than prior rule-based methods on
four datasets in two domains.
- We provide a brief analysis of factors that contribute towards the success of **CoAug**.
## 2 Coaug
In this work, we consider a setting where we have access to an initial set of seed rules, S, and a large unlabeled corpus, U, to perform the named entity recognition task. Applying the rules, S, on U provides the initial set of labeled examples, L, to train models in our framework.
Our framework, **CoAug** (short for Co-Augmentation), iteratively improves the performance of two models by leveraging the bootstrapped predictions on unlabeled data by each model. Given that prior work in low-resource NER focuses on two parallel tracks of rule-augmentation and few-shot learning methods that do not interact with each other, we instantiate **CoAug** with a rule-augmentation model and a few-shot model to leverage the best of both paradigms. We refer to these components of our framework as Rule Augmenter and Label Augmenter (Figure 1).
In the subsections below, we describe the Rule Augmenter and Label Augmenter modules.
## 2.1 Rule Augmenter Algorithm 1 Tallor
Require: U = {x1:N } unlabeled examples Require: R = {S} rules initialized with seed rules Require: C = {c1:M} candidate rules
Initialize: L = {}
![1_image_0.png](1_image_0.png)
end for
The primary function of the Rule Augmenter is to automatically learn labeling rules from unlabeled data and use them to generate weak labels for training a neural model. In this work, we instantiate the rule augmenter module using the **TaLLOR**
framework. Accordingly, our rule augmenter has the following subcomponents: (a) RULE APPLIER
that applies rules over unlabeled data to generate weak labels, (b) LABEL SELECTOR that filters the most accurate examples based on the similarity of averaged token-level BERT (Devlin et al., 2019)
representations of proposed entities to the representations of previously identified entities of the same label in the training set, (c) NEURAL NER
MODEL that is trained on the accurate instances and proposes new entities in the unlabeled data that can be used to develop new rules, and (d)
RULE SELECTOR that scores candidate labeling rules and selects high-precision rules that satisfy the predictions from the NEURAL NER MODEL.
We summarize the iterative process of automatic rule identification by **TaLLOR** in Algorithm 1.
## 2.2 Label Augmenter
The Label Augmenter module consists of a NEU-RAL MODEL that learns to perform entity recognition with minimal supervision and LABEL SELEC-TOR that selectively adds the weak labels proposed by the NEURAL MODEL into the training set for the next iteration.
Algorithm 2 Label Augmenter Require: U = {x1:N } unlabeled examples Require: L = {S} rules initialized with seed rules Require: β0*, β1* ▷ initial threshold and increment Initialize: L = R(U)
for t in (1, *. . .* , T) do
// Train N**EURAL MODEL**
M ← TRAIN(M,L)
// Label using N**EURAL MODEL**
LM ← PREDICT(M, U)
// **Select Examples Using Adaptive Threshold**
LM ← LABELSELECTOR(LM, β0 + t × β1)
L = *L ∪ L*M
end for In this work, we experiment with two instantiations of the NEURAL MODEL using recent few-shot NER models, namely, **ProtoBERT** and QuIP. We use an adaptive threshold for the Label Selector to filter out low-quality, weakly labeled instances. Initially, we add 20% of the proposed instances from the Neural Model to the training set.
Then, as the model becomes more confident in its predictions over iterations, we gradually increase the proportion of instances incorporated, with a 5% increase per iteration. We summarize the label augmenter algorithm in Algorithm 2.
We provide an outline for the **CoAug** algo-
Algorithm 3 **CoAug** algorithm Require: U = {x1:N } unlabeled examples
![2_image_0.png](2_image_0.png)
![2_image_1.png](2_image_1.png)
Require: R = {S} rules initialized with seed rules Require: RuleAugmenter M1, LabelAugmenter M2 L = R(U)
for t in (1, *. . .* , T) do U = *U \ L*
// **Training the Rule Augmenter section**
M1 ← TRAIN(M1, L) R ← R∪ UPDATERULES(M1)
▷ Select high-precision rules L = *L ∪ R*(U) ▷ Add examples after applying rules
// **Training the Label Augmenter section**
M2 ← TRAIN(M2, L) W ← HIGHCONFWEAKLABEL(M2, U)
▷ Select high-confident weak-labels L = *L ∪ W*
end for rithm in Algorithm 3. In each training iteration, we alternatively train the Rule Augmenter and Label Augmenter models. Different from cotraining (Blum and Mitchell, 1998), in **CoAug**,
the Rule-Augmenter (Label-Augmenter) utilizes the examples that have been labeled by the Rule-Augmenter (Label-Augmenter) and the Label-Augmenter (Rule-Augmenter) to improve its entity recognition performance over iterations.
## 3 Experiments 3.1 Experimental Settings
We evaluate our framework on four popular datasets that are composed of two science-domain and two general-domain datasets. Following Li et al. (2021), we utilize the training data without labels as our unlabeled data. Further, for all experiments, we use a set of 20 initial seed rules. These rules specify highly frequent entities for each category within a dataset.
BC5CDR (Li et al., 2016) contains 1,500 PubMed abstracts with manual annotations for disease and chemical entity mentions. The abstracts are split equally among train, dev, and test sets
(500/500/500).
NCBI-Disease (Dogan et al. ˘ , 2014) contains 793 PubMed abstracts with manual annotations for disease entity mentions. The abstracts are split as 593/100/100 for train, dev, and test sets.
CoNLL2003 (Tjong Kim Sang and De Meulder, 2003) contains about 20,744 sentences from Reuters news articles. We split the data into 14,987/3,469/3,685 sentences for the train, dev, and test set. Additionally, for our experiments, we only
Method BC5CDR CoNLL-2003 NCBI-Disease WikiGold
TaLLOR (Li et al., 2021) 59.4(3.2) 50.3(9.6) 39.3(1.5) 23.7(4.3)
ProtoBERT (Tänzer et al., 2022) 33.1(3.5) 47.3(2.9) 25.5(4.4) 37.3(3.8)
CoAug (TaLLOR + **ProtoBERT**) 64.4(1.5) 65.0(0.8) 46.8(3.5) 50.6(2.1) QuIP (Jia et al., 2022) 64.9(1.7) 70.6(3.7) 75.3(0.7) 43.6(2.3)
CoAug (TaLLOR + **QuIP**) 65.9(1.5) 76.8(2.0) 50.5(4.9) 51.8(2.8)
consider the Person, Location, and Organization entities2following Li et al. (2021).
WikiGold (Balasuriya et al., 2009) contains 1,696 sentences from Wikipedia articles with annotations for Person, Location, and Organization entity categories similar to CoNLL2003. We split the dataset into 1,142/280/274 sentences for the train, dev, and test sets.
We evaluate two instantiations of the **CoAug**
framework where the Rule Augmenter uses TaLLOR, and the Label Augmenter uses either ProtoBERT/**QuIP**. For baselines, our main experiments compare **CoAug** against **TaLLOR**, self-trained ProtoBERT, and self-trained **QuIP**. Our code is implemented in Pytorch (Paszke et al., 2019) using the Huggingface library (Wolf et al., 2020). For the Rule Augmenter section, all experimental hyperparameters follow that from Li et al. (2021).
Notably, we use the same hyperparameters for the NCBI-Disease, and WikiGold datasets as Li et al.
(2021) did for BC5CDR and CoNLL2003. For science-domain datasets, we utilize SciBERT-base
(Beltagy et al., 2019) as the base for the **ProtoBERT**
model and BERT-base (Devlin et al., 2019) otherwise. We do not make any such distinctions for QuIP as it is a specially fine-tuned RoBERTa-large
(Liu et al., 2019) model designed to perform well on extraction-based tasks (Jia et al., 2022). We report the hyperparameters used for all experiments in more detail in Appendix C.
## 3.2 Results And Analysis 3.2.1 Main Results
Table 1 reports the test set F1 scores for all models on each of the four datasets. We observe that **CoAug**
with QuIP/**ProtoBERT** outperforms **TaLLOR** on all 4 datasets substantially (average F1 on WikiGold for 2skipping entities from the Miscellaneous category.
CoAug is more than 2× **TaLLOR**). Further, we also observe that utilizing the co-augmentation framework as opposed to self-training also aids models to produce similar results more reliably, as indicated by the variance of the results (in 3 out of 4 datasets). Further, we also observe that utilizing larger few-shot models, such as **QuIP** (which has a RoBERTa-large base), is complementary to our framework and continues to push the NER performance further. On comparing with **QuIP**, we observe that **CoAug** with **QuIP** performs better on 3 out of 4 datasets.
However, on the NCBI-Disease dataset, we observe that **QuIP** outperforms **CoAug** by a considerable margin. On analysis, we identify that **QuIP**
adds too many incorrect instances during the initial few iterations for this dataset. Consequently, the rule augmenter selects rules that lose precision, and the overall quality of examples in **CoAug** deteriorates. Nonetheless, since entity recognition for this dataset is hard for **TaLLOR** as well, we observe some improvement from using **CoAug**. Future work should look to address the issue of controlling candidates from neural models in order to maintain the reliability of the high-precision set.
In Figure 2, we identify that the success of **CoAug**
over high-precision rule-augmentation approaches, such as **TaLLOR**, lies in its ability to identify more instances in the unlabeled that improve precision as well as recall over **TaLLOR**.
## 3.2.2 Effect Of Task-Aligned Pre-Training
In this subsection, we analyze the contribution of pre-training strategies towards the performance of CoAug. Specifically, we ablate the effect of changing the pre-training initialization from **QuIP** to that of RoBERTa-large, the base model for **QuIP**. As shown in Table 2, the performance of **CoAug** with RoBERTa-large lags far behind the performance
![4_image_0.png](4_image_0.png)
| Model | BC5CDR | CoNLL2003 |
|--------------------------|-----------|-------------|
| CoAug (TaLLOR + RoBERTa) | 45.6(0.1) | 64.4(0.2) |
| CoAug (TaLLOR + QuIP) | 65.9(1.5) | 76.8(2.0) |
of **CoAug** with **QuIP**. On BC5CDR, we observe that the **CoAug** with RoBERTa-large performs poorly in comparison to **TaLLOR** as well. This indicates that any form of task-aligned pre-training, such as QuIP, can help design NER models for a diverse domain of tasks which corroborates some of the earlier work in task-adaptive pre-training (Gururangan et al., 2020).
## 4 Conclusion
In this work, we introduce **CoAug**, a coaugmentation framework that utilizes unlabeled data to train rule-augmentation and neuralaugmentation models to become better NER taggers. Our results on datasets from two domains demonstrate the effectiveness of **CoAug** for lowresource domains. Our analysis reveals that **CoAug**
is able to perform better than weak-supervision methods like **TaLLOR** because of an ability to find more positive instances while maintaining high precision. Further analysis shows the importance of factors such as the strength of pre-training that can contribute towards the success of models in domain-specific datasets.
## Limitations
We observe that although **CoAug** outperforms baselines on multiple datasets, it is still prone to errors that emerge from the bootstrapping process. Specifically, our framework utilizes models to augment weak labels to the training set, and if the proposals are extremely noisy, training on noisy examples in future iterations will further exacerbate the ability of the framework to identify entities with high precision. Incorporating constraints to preserve the quality of the pseudo-labeled data (Shrivastava et al., 2012) is an exciting direction for future work in low-resource named-entity recognition.
## Ethics Statement
All our experiments are performed over publicly available datasets. We do not use any identifiable information about crowd workers who provide annotations for these datasets. Neither do we perform any additional annotations or human evaluations in this work. We do not foresee any risks using **CoAug**
if the inputs to our model are designed as per our procedure. However, our models may exhibit unwanted biases that are inherent in pre-trained language models. This aspect is beyond the scope of the current work.
## Acknowledgement
We would like to express our appreciation to Haibo Ding for the insightful discussions and helpful suggestions during the initial phases of this project.
## References
Stephen H Bach, Bryan He, Alexander Ratner, and Christopher Ré. 2017. Learning the structure of generative models without labeled data. In International Conference on Machine Learning, pages 273–282.
PMLR.
Dominic Balasuriya, Nicky Ringland, Joel Nothman, Tara Murphy, and James R. Curran. 2009. Named entity recognition in Wikipedia. In *Proceedings of the*
2009 Workshop on The People's Web Meets NLP:
Collaboratively Constructed Semantic Resources
(People's Web), pages 10–18, Suntec, Singapore. Association for Computational Linguistics.
Maria-Florina Balcan, Avrim Blum, and Ke Yang. 2004.
Co-training and expansion: Towards bridging theory and practice. In *Advances in Neural Information* Processing Systems 17 (NIPS 2004), pages 89–96.
Iz Beltagy, Kyle Lo, and Arman Cohan. 2019. SciBERT: A pretrained language model for scientific text.
In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3615–
3620, Hong Kong, China. Association for Computational Linguistics.
Avrim Blum and Tom. Mitchell. 1998. Combining labeled and unlabeled data with co-training. In *COLT'*
98: Proceedings of the eleventh annual conference on Computational learning theory, pages 92–100.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In *Proceedings of the 2019 Conference of* the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics.
Rezarta Islamaj Dogan, Robert Leaman, and Zhiyong ˘
Lu. 2014. Ncbi disease corpus: a resource for disease name recognition and concept normalization.
Journal of biomedical informatics, 47:1–10.
Sally A. Goldman and Yan Zhou. 2000. Enhancing supervised learning with unlabeled data. In *ICML'00:*
Proceedings of the Seventeenth International Conference on Machine Learning, pages 327–334.
Suchin Gururangan, Ana Marasovic, Swabha ´
Swayamdipta, Kyle Lo, Iz Beltagy, Doug Downey, and Noah A. Smith. 2020. Don't stop pretraining:
Adapt language models to domains and tasks. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8342–8360, Online. Association for Computational Linguistics.
Jiaxin Huang, Chunyuan Li, Krishan Subudhi, Damien Jose, Shobana Balakrishnan, Weizhu Chen, Baolin Peng, Jianfeng Gao, and Jiawei Han. 2021. Fewshot named entity recognition: An empirical baseline study. In *Proceedings of the 2021 Conference on* Empirical Methods in Natural Language Processing, pages 10408–10423, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Robin Jia, Mike Lewis, and Luke Zettlemoyer. 2022.
Question answering infused pre-training of generalpurpose contextualized representations. In Findings
of the Association for Computational Linguistics:
ACL 2022, pages 711–728, Dublin, Ireland. Association for Computational Linguistics.
Hyunjae Kim, Jaehyo Yoo, Seunghyun Yoon, Jinhyuk Lee, and Jaewoo Kang. 2021. Simple questions generate named entity recognition datasets. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 6220–6236.
Diederik P Kingma and Jimmy Ba. 2015. Adam: A
method for stochastic optimization. In 3rd International Conference on Learning Representations, ICLR 2015.
Guillaume Lample, Miguel Ballesteros, Sandeep Subramanian, Kazuya Kawakami, and Chris Dyer. 2016.
Neural architectures for named entity recognition. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 260–270, San Diego, California. Association for Computational Linguistics.
Hunter Lang, Monica Agrawal, Yoon Kim, and David Sontag. 2022. Co-training improves prompt-based learning for large language models. In Proceedings of the 39th International Conference on Machine Learning, pages 11985–12003.
Jinhyuk Lee, Mujeen Sung, Jaewoo Kang, and Danqi Chen. 2021. Learning dense representations of phrases at scale. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 6634–6647, Online. Association for Computational Linguistics.
Jiacheng Li, Haibo Ding, Jingbo Shang, Julian McAuley, and Zhe Feng. 2021. Weakly supervised named entity tagging with learnable logical rules. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing
(Volume 1: Long Papers), pages 4568–4581, Online.
Association for Computational Linguistics.
Jiao Li, Yueping Sun, Robin J Johnson, Daniela Sciaky, Chih-Hsuan Wei, Robert Leaman, Allan Peter Davis, Carolyn J Mattingly, Thomas C Wiegers, and Zhiyong Lu. 2016. Biocreative v cdr task corpus: a resource for chemical disease relation extraction. In Database (Oxford), volume 2016, pages 1–10.
Pierre Lison, Jeremy Barnes, Aliaksandr Hubin, and Samia Touileb. 2020. Named entity recognition without labelled data: A weak supervision approach. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 1518–
1533, Online. Association for Computational Linguistics.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019.
Roberta: A robustly optimized bert pretraining approach. *arXiv preprint arXiv:1907.11692*.
Ilya Loshchilov and Frank Hutter. 2019. Decoupled weight decay regularization. In *International Conference on Learning Representations, ICLR 2019*.
Xuezhe Ma and Eduard Hovy. 2016. End-to-end sequence labeling via bi-directional LSTM-CNNs-CRF.
In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1:
Long Papers), pages 1064–1074, Berlin, Germany.
Association for Computational Linguistics.
Mike Mintz, Steven Bills, Rion Snow, and Daniel Jurafsky. 2009. Distant supervision for relation extraction without labeled data. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP, pages 1003–1011, Suntec, Singapore. Association for Computational Linguistics.
Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Kopf, Edward Yang, Zachary DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. 2019. Pytorch:
An imperative style, high-performance deep learning library. In *Advances in Neural Information Processing Systems 32 (NeurIPS 2019)*, volume 32. Curran Associates, Inc.
Minlong Peng, Xiaoyu Xing, Qi Zhang, Jinlan Fu, and Xuanjing Huang. 2019. Distantly supervised named entity recognition using positive-unlabeled learning.
In *Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics*, pages 2409–
2419, Florence, Italy. Association for Computational Linguistics.
Esteban Safranchik, Shiying Luo, and Stephen Bach.
2020. Weakly supervised sequence tagging from noisy rules. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pages 5570–
5578.
Jingbo Shang, Liyuan Liu, Xiaotao Gu, Xiang Ren, Teng Ren, and Jiawei Han. 2018. Learning named entity tagger using domain-specific dictionary. In *Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing*, pages 2054–
2064, Brussels, Belgium. Association for Computational Linguistics.
Abhinav Shrivastava, Saurabh Singh, and Abhinav Gupta. 2012. Constrained semi-supervised learning using attributes and comparative attributes. In European Conference on Computer Vision, pages 369–383. Springer.
Michael Tänzer, Sebastian Ruder, and Marek Rei. 2022.
Memorisation versus generalisation in pre-trained language models. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 7564–7578, Dublin, Ireland. Association for Computational Linguistics.
Erik F. Tjong Kim Sang and Fien De Meulder.
2003. Introduction to the CoNLL-2003 shared task:
Language-independent named entity recognition. In Proceedings of the Seventh Conference on Natural Language Learning at HLT-NAACL 2003, pages 142–
147.
Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Transformers: State-of-the-art natural language processing.
In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38–45, Online. Association for Computational Linguistics.
Tao Yu and Shafiq Joty. 2021. Effective fine-tuning methods for cross-lingual adaptation. In *Proceedings* of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 8492–8501, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
## A Related Work
Weakly-supervised NER. Utilizing distant supervision in the form of knowledge bases or typed lexicons dates back to the work of Mintz et al.
(2009). However, obtaining pre-defined lexicons for all domains is challenging. Therefore, more recent work proposes to use manually-defined labeling functions to provide weak labels for documents at scale (Bach et al., 2017). Safranchik et al.
(2020); Lison et al. (2020) have proposed improved techniques for leveraging such labeling functions to derive weak labels for entities. However, exhaustively defining labeling functions to identify entities can be a cumbersome task, even for domain experts. Hence, **TaLLOR** (Li et al., 2021) introduces an automatic technique for learning rules through an iterative approach of proposing weak labels and new rules for entity extraction. More recently, GeNER (Kim et al., 2021) utilizes DensePhrases
(Lee et al., 2021) to query Wikipedia for documents that contain entities from desired categories.
However, Wikipedia may not contain enough information for new emerging domains or even new languages. In contrast, **CoAug** can be applied in such situations as well using few rules and some unlabeled datasets.
Co-training. In co-training (Blum and Mitchell, 1998), given two views of an input that are conditionally independent of each other given the true label, classifiers learned over both views can be improved by bootstrapping the performance of each view iteratively with unlabeled data. Some recent studies suggest, however, that the conditional independence assumption of the views can be relaxed when the models are "different enough"
(Balcan et al., 2004; Goldman and Zhou, 2000).
Within language processing methods, co-training has been used for cross-lingual adaptation (Yu and Joty, 2021) and improving prompt-based techniques (Lang et al., 2022). In our work, we improve named entity recognition with a combination of rule-augmentation and neural-augmentation techniques.
## B Background
Re-iterating, we consider a setting where we have access to an initial set of seed rules, S, and a large unlabeled corpus, U, to perform the named entity recognition tasks. Applying the initial set of rules on U provides an initial set of labeled examples, L,
to train models in our framework.
## B.1 Tallor
Our work primarily builds on top of the **TaLLOR**
framework introduced in Li et al. (2021). In TaLLOR, a neural model is trained on L to provide weak labels for the potential entities present in U.
Based on the computed weak labels, a Rule Selector module proposes new labeling rules that align well with the weak labels while maintaining high precision for entity recognition. Finally, the newly proposed rules are used to label more examples in U, and the process is repeated over many iterations.
At the end of training, **TaLLOR** is evaluated by the neural model's performance on the test set of the corresponding task. For more details on **TaLLOR**,
we refer the reader to (Li et al., 2021).
## C Experiment Hyperparameters
Across all datasets, we limit the span of the entities to 5 tokens.
Following Li et al. (2021), the neural model in the Rule-Augmentation model is initialized with a BERT-base/ SciBERT-base model depending on the domain of the dataset. During training, we use a minibatch size of 32 with the Adam optimizer
(Kingma and Ba, 2015), a learning rate of 2e−5, and perform gradient clipping (clipped at norm of 5.0) to stabilize training.
| Dataset | Category | Question Prompt |
|---------------------|---------------------|------------------------------|
| BC5CDR | Chemical | What is a chemical compound? |
| Disease | What is a disease? | |
| Person | Who is a person? | |
| Location | What is a location? | |
| Organization | What is an organization? | |
| NCBI-Disease | Disease | What is a disease? |
| CoNLL2003/ WikiGold | | |
For the Label-Augmenter model, we utilize two models: **ProtoBERT** and **QuIP**. Since these models have different characteristics, we utilize a different set of hyperparameters to fine-tune each model for our task. Specifically, for the **ProtoBERT**
model, we use the AdamW (Loshchilov and Hutter, 2019) optimizer, a learning rate of 1e−4, and apply weight decay of 1e−2 for all parameters except the layer-norm weights. For **QuIP**, we follow the recommendations from Jia et al. (2022) and adopt a learning rate of 2e−5 with the AdamW optimizer for fine-tuning. Further, we initialize the token prediction head for the NER task using question prompt embeddings from this model. The set of questions we use for the different datasets has been summarized in Table 3.
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
A1. Did you describe the limitations of your work?
Left blank.
A2. Did you discuss any potential risks of your work?
Left blank.
A3. Do the abstract and introduction summarize the paper's main claims?
Left blank.
A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B **Did You Use Or Create Scientific Artifacts?**
Left blank.
B1. Did you cite the creators of artifacts you used?
Left blank.
B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
Left blank.
B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
Left blank.
B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
Left blank.
B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
Left blank.
B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
Left blank.
## C **Did You Run Computational Experiments?**
Left blank.
C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
Left blank.
The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance.
C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Left blank.
C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
Left blank.
C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
Left blank.
D **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
Left blank.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)?
Left blank.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
Left blank.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
Left blank.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
Left blank. |
hu-etal-2023-entity | Entity-to-Text based Data Augmentation for various Named Entity Recognition Tasks | https://aclanthology.org/2023.findings-acl.578 | Data augmentation techniques have been used to alleviate the problem of scarce labeled data in various NER tasks (flat, nested, and discontinuous NER tasks). Existing augmentation techniques either manipulate the words in the original text that break the semantic coherence of the text, or exploit generative models that ignore preserving entities in the original text, which impedes the use of augmentation techniques on nested and discontinuous NER tasks. In this work, we propose a novel Entity-to-Text based data augmentation technique named EnTDA to add, delete, replace or swap entities in the entity list of the original texts, and adopt these augmented entity lists to generate semantically coherent and entity preserving texts for various NER tasks. Furthermore, we introduce a diversity beam search to increase the diversity during the text generation process. Experiments on thirteen NER datasets across three tasks (flat, nested, and discontinuous NER tasks) and two settings (full data and low resource settings) show that EnTDA could bring more performance improvements compared to the baseline augmentation techniques. | # Entity-To-Text Based Data Augmentation For Various Named Entity Recognition Tasks
Xuming Hu1∗, Yong Jiang2, Aiwei Liu1, Zhongqiang Huang2**, Pengjun Xie**2, Fei Huang2, Lijie Wen1, **Philip S. Yu**1,3 1Tsinghua University, 2Alibaba DAMO Academy, 3University of Illinois at Chicago
{hxm19,liuaw20}@mails.tsinghua.edu.cn
{yongjiang.jy,chengchen.xpj}@alibaba-inc.com, [email protected],[email protected]
## Abstract
Data augmentation techniques have been used to alleviate the problem of scarce labeled data in various NER tasks (flat, nested, and discontinuous NER tasks). Existing augmentation techniques either manipulate the words in the original text that break the semantic coherence of the text, or exploit generative models that ignore preserving entities in the original text, which impedes the use of augmentation techniques on nested and discontinuous NER tasks.
In this work, we propose a novel *Entity-toText* based data augmentation technique named ENTDA to add, delete, replace or swap entities in the entity list of the original texts, and adopt these augmented entity lists to generate semantically coherent and entity preserving texts for various NER tasks. Furthermore, we introduce a diversity beam search to increase the diversity during the text generation process.
Experiments on thirteen NER datasets across three tasks (flat, nested, and discontinuous NER
tasks) and two settings (full data and low resource settings) show that ENTDA could bring more performance improvements compared to the baseline augmentation techniques.
## 1 Introduction
Recent neural networks show decent performance when a large amount of training data is available.
However, these manually labeled data are laborintensive to obtain. Data augmentation techniques
(Shorten and Khoshgoftaar, 2019) expand the training set by generating synthetic data to improve the generalization and scalability of deep neural networks, and are widely used in NLP (Feng et al.,
2021; Li et al., 2022a). One successful attempt for data augmentation in NLP is manipulating a few words in the original text, such as word swapping (¸Sahin and Steedman, 2018; Min et al., 2020)
and random deletion (Kobayashi, 2018; Wei and
∗*Work done during an internship at Alibaba DAMO
Academy.
![0_image_0.png](0_image_0.png)
Figure 1: Comparison of augmented cases generated by Rule-based model and *Text-to-Text* based generative model vs. Our *Entity-to-Text* based generative model.
Zou, 2019). These methods generate synthetic texts effortlessly without considering the semantic coherence of sentences. More importantly, these augmentation approaches work on sentence-level tasks like classification but cannot be easily applied to fine-grained and fragile token-level tasks like Named Entity Recognition (NER).
Named Entity Recognition aims at inferring a label for each token to indicate whether it belongs to an entity and classifies entities into predefined types. Due to transformations of tokens that may change their labels, Dai and Adel (2020) augment the token-level text by randomly replacing a token with another token of the same type. However, it still inevitably introduces incoherent replacement and results in syntax-incorrect texts. DAGA (Ding et al., 2020) and MELM (Zhou et al., 2022) investigate the Text-to-Text data augmentation technique using generative methods that preserve semantic coherence and recognize entities through entity tagging during text generation. However, since it is difficult to use flat ⟨B − *T ype*⟩ and ⟨I − *T ype*⟩
labels to mark nested and discontinuous entities during text generation, these methods can only be used for flat NER tasks. In addition, only the entities are masked during the generation process, so that the diversity of generated texts is also limited. For example, as shown in Figure 1, rule-based models replace tokens or shuffle segments, such as "with" and "cancer may" are shuffled, which makes the augmented text no longer semantically coherent, and even modifies the semantic consistency of the text to affect the prediction of entity labels. The Text-to-Text based generative models cannot leverage flat ⟨B − *T ype*⟩ and ⟨I − *T ype*⟩
labels to mark the "stomach" token in the discontinuous entities: "stomach discomfort" and "stomach pain", thus limiting the application of this method to nested and discontinuous NER tasks.
To maintain text semantic coherence during augmentation and preserve entities for various NER
tasks, in this work, we propose a novel **Entity-toText** instead of **Text-to-Text** based data augmentation approach named ENTDA. As illustrated in Figure 2, we first obtain the entity list [EU, German, British] in the original text, and then add, delete, swap, and replace the entity in the entity list to obtain the augmented entity list, e.g. [EU, German, British, Spanish]. We investigate that leveraging the rule-based methods to modify the entities in the entity list could generate more combinatorial entity lists without introducing grammatical errors. Then we adopt a conditional language model to generate the semantically coherent augmented text based on the augmented entity list. Thanks to the augmented entity list (including flat, nested, and discontinuous entities) we have already obtained, we can mark these preserved entities in the augmented text as shown in Figure 4. Since the augmented entity list provide the similar entity information in the text augmented by the language model, which may leads to insufficient diversity of text generation.
Therefore, we propose a diversity beam search method for generative models to enhance text diversity. Overall, the main contributions of this work are as follows:
- To the best of our knowledge, we propose the first Entity-to-Text based data augmentation technique ENTDA. ENTDA leverages the pretrained large language model with semantic coherence and entity preserving to generate the augmented text, which could be used to benefit for all NER tasks (flat, nested, and discontinuous NER tasks).
- We propose the diversity beam search strategy for ENTDA to increase the diversity of the
![1_image_0.png](1_image_0.png)
augmented text during generation process.
- We show that ENTDA outperforms strong data augmentation baselines across three NER
tasks and two settings (full data and low resource settings).
## 2 Related Work 2.1 Various Ner Tasks
Named Entity Recognition (NER) is a pivotal task in IE which aims at locating and classifying named entities from texts into the predefined types such as PERSON, LOCATION, etc. (Chiu and Nichols, 2016; Xu et al., 2017; Yu et al., 2020). In addition to flat NER task (Sang and De Meulder, 2003),
Kim et al. (2003) proposed nested NER task in the molecular biology domain. For example, in the text: *Alpha B2 proteins bound the PEBP2 site*, the entity *PEBP2* belongs to the type PROTEIN and PEBP2 site belongs to DNA.
Furthermore, some entities recognized in the text could be discontinuous (Mowery et al., 2013, 2014; Karimi et al., 2015). For example, in the text: *I experienced severe pain in my left shoulder and neck*,
the entities *pain in shoulder* and *pain in neck* contain non-adjacent mentions. Some previous works proposed the unified frameworks which are capable of handling both three NER tasks (Li et al., 2020; Yan et al., 2021; Li et al., 2021). However, there is no unified data augmentation method designed for all three NER tasks due to the complexity of entity overlap. In this work, we try to bridge this gap and propose the first generative augmentation approach ENTDA that can be used to generate augmented data for all NER tasks (flat, nested, and discontinuous NER tasks).
## 2.2 Data Augmentation For Nlp And Ner
As shown in Table 1, we compare ENTDA with rule-based and traditional generative techniques, and present the comparison results below.
![2_image_0.png](2_image_0.png)
Three NER subtasks:
Rule-based Augmentation Various rule-based augmentations for NLP tasks such as word replacement (Zhang et al., 2015; Cai et al., 2020), random deletion (Kobayashi, 2018; Wei and Zou, 2019),
and word swapping (¸Sahin and Steedman, 2018; Min et al., 2020) manipulate the words in the original texts to generate synthetic texts. However, these manipulated tokens could not maintain the original labels since the change of syntax and semantics.
Dai and Adel (2020) proposes a replacement augmentation method to decide whether the selected token should be replaced by a binomial distribution, and if so, then the token will be replaced by another token with the same label. Furthermore, the similar approaches could be extended from token-level to mention-level. However, these methods still inevitably introduce incoherent replacement. In this work, we try to introduce the Entity-to-Text based augmentation approach to improve the coherence of the augmented texts.
```
Flat: stomach discomfort [5,6,DISORDER]
Nested: cancer, cancer patient
[1,1,DISORDER] [1,2,PERSON]
Discontinuous: stomach pain [5,5,8,8,DISORDER]
```
Generative Augmentation Classic generative augmentations for NLP tasks such as back translation, which could be used to train a question answering model (Yu et al., 2018) or transfer texts from a high-resource language to a low-resource language (Hou et al., 2018; Xia et al., 2019).
Anaby-Tavor et al. (2020); Kumar et al. (2020)
adopt language model which is conditioned on sentence-level tags to modify original data for classification tasks exclusively. To utilize generative augmentation on more fine-grained and fragile token-level NER tasks, Ding et al. (2020) treats the NER labeling task as a text tagging task and requires generative models to annotate entities during generation. Zhou et al. (2022) builds the pretrained masked language models on corrupted train-
German farmer German farmer ing sentences and focuses on entity replacement.
However, these methods rely on the Text-to-Text based generative models which cannot tag a token with nested labels during generation. In this work, we adopt the Entity-to-Text based generative model to tackle all NER tasks and bootstrap the diversity of the model with diversity beam search.
```
market
-1.8-0.4-1=-3.2
-1.8
-1.8-0.9-2=-4.7
Beam Search Decoding Diversity Beam Search Decoding
market
-1.8-0.4=-2.2
-1.8
-1.8-0.9=-2.7
```
## 3 General Ner Task Formulation
Considering that ENTDA has sufficient augmentation ability on flat, nested and discontinuous NER, we first formulate the general NER task framework as follows. Given an input text X = [x1, x2*, ..., x*n] of length n and the entity type set T, the output is an entity list E = [e1, e2, ..., em*, ..., e*l] of l entities, where em = [sm1, dm1, ..., smj , dmj , tm]. The *s, d* are the start and end indexes of a space in the text X.
The j indicates that the entity consists of j spans.
The tm is an entity type in the entity type set T.
For example, the discontinuous entity stomach pain in the text: "The cancer patient has constant stomach discomfort and *pain*" will be represented as em = [5, 5, 8, 8*, DISORDER*].
## 4 Proposed Method
The proposed Entity-to-Text based data augmentation approach ENTDA consists of three modules:
Entity List Augmentation, Entity-to-Text Generation, and Augmented Text Exploitation. Now we give the details of the three modules.
## 4.1 Entity List Augmentation
Entity List Augmentation aims to adopt four rulebased methods: Add, Delete, Replace, and Swap to modify the entities in the entity list obtained from the original sentences. Now, we give the details of four operations on the original entity list E = [e1, e2, ..., em*, ..., e*l] as follows:
Sequence Entity cancer patient, stomach discomfort, stomach pain
1⑦✐Add. We first randomly select an entity em from the entity list E. Then we search for other entities in the training set and add e
′
m with the same entity type as em to the original entity list: E = [e1, e2, ..., em, e
′
m*, ..., e*l].
The *cancer patient* has constant *stomach discomfort* and *pain*.
0 1 2 3 4 5 6 7 8 9 Three NER subtasks:
Flat NER subtask: [5,6,DISORDER]
Nested NER subtask: [1,1,DISORDER] [1,2,PERSON] Discontinuous NER subtask: [5,6,8,9,DISORDER]
2⑦✐**Delete**. We randomly select an entity em from the original entity list E and delete it as E = [e1, e2, ..., em−1, em+1*, ..., e*l].
3⑦✐**Replace**. We first randomly select an entity em from the original entity list E. Similar to 1 , we search e
′
m with the same entity type to replace em as E = [e1, e2*, ..., e*
′
m*, ..., e*l].
4⑦✐**Swap**. We randomly select two entities em, e
′
m in the original entity list E and swap their positions as E = [e1, e2*, ..., e*
′
m,
..., em*, ..., e*l].
## 4.2 Entity-To-Text Generation
After we obtain the augmented entity lists, the Entity-to-Text Generation module aims to generate the text for each entity list. Since the augmented entity list provide the similar entity information for augmented text, so we propose a diversity beam search method to increase text diversity.
Compared to traditional generation models that rely on greedy decoding (Chickering, 2002) and choosing the highest-probability logit at every generation step, we adopt a diversity beam search decoding strategy. More specifically, we first inject the entity types into the augmented entity list E =
[[t1], e1, [/t1], ..., [tm], em, [/tm], ..., [tl], el, [/tl]]
as the input sequence, which should provide sufficient type guidance for the generation model, then we adopt T5 (Raffel et al., 2020) as the generation model. We first fine-tune T5 on the original Entity-to-Text data and then adopt T5 (θ)
to estimate the conditional probability distribution over all tokens in the dictionary V at time step t as:
$$\theta\left(y_{t}\right)=\log\Pr\left(y_{t}\mid y_{t-1},\ldots,y_{1},E\right).\tag{1}$$
where ytis the t th output token y in texts. We simplify the sum of log-probabilities (Eq. 1) of all previous tokens decoded Θ(y[t]) as:
$$\Theta\left(\mathbf{y}_{[t]}\right)=\sum_{\tau\in[t]}\theta\left(y_{\tau}\right),$$
$${\mathrm{(2)}}$$
1.EU's veterinary committee ruled that *German* beef, *British* lamb and *Spanish* sheep should be banned. 2.EU bans German, *British* and *Spanish* wheat from sale.
……..
1.*EU's German* wing says it has received a warning.
2.EU had already imposed a ban on *German* exports.
……..
1.EU's veterinary committee ruled that *German* beef was to eat after being imported from *Spanish*.
2.European Union bans *German, Spanish* beef imports.
……..
Diversity Beam Search 1.EU bans *German* beef from *British* market.
2.A *British* farmer accused the EU of failing to protect his sheep from *German* imports. **……..**
![3_image_0.png](3_image_0.png)
where y[t]is the token list consisting of
[y1, y2*, ..., y*t]. Therefore, our decoding problem is transformed into the task of finding the text that could maximize Θ(y). The classical approximate decoding method is the beam search (Wiseman and Rush, 2016), which stores top beam width B candidate tokens at time step t. Specifically, beam search selects the B most likely tokens from the set:
$${\mathcal{V}}_{t}=Y_{[t-1]}\times{\mathcal{V}},$$
$\eqref{eq:walpha}$
, $\mathbf{V}P$ [t_u] $\mathbf{\hat{\jmath}}$ an.
Yt = Y[t−1] × V, (3)
where Y[t−1] =
y1,[t−1]*, . . . ,* yB,[t−1] and V is the dictionary. However, traditional beam search keeps a small proportion of candidates in the search space and generates the texts with minor perturbations (Huang, 2008), which impedes the diversity of generated texts. Inspired by Vijayakumar et al.
(2016), we introduce an objective to increase the dissimilarities between candidate texts and finalize the Eq. 2 as diversity beam search decoding:
$${\hat{\Theta}}\left(\mathbf{y}_{[t]}\right)=\sum_{\tau\in[t]}(\theta\left(y_{\tau}\right)-\gamma k_{\tau}),\qquad\quad(4)$$
where γ is a hyperparameter and represents the punishment degree. kτ denotes the ranking of the current tokens among candidates. In practice, it's a penalty text of beam width: [1, 2*, ..., B*] which punishes bottom ranked tokens among candidates and thus generates tokens from diverse previous tokens. For a better understanding, we give an example about the text with beam search decoding and diversity beam search decoding in Figure 3.
The traditional greedy decoding chooses the highest-probability logit at every generation step and results in *British farmer*. Compared to the diversity beam search decoding method, the beam search decoding method maintains a small proportion of candidates in the search space without
![4_image_1.png](4_image_1.png)
Entity
![4_image_0.png](4_image_0.png)
Three NER subtasks:
```
Flat: stomach discomfort [5,6,DISORDER]
Nested: cancer, cancer patient
[1,1,DISORDER] [1,2,PERSON]
Discontinuous: stomach pain [5,5,8,8,DISORDER]
```
the introduction of a penalty text of beam width [1, 2*, ..., B*]. This additional objective increases the dissimilarities between candidate texts and thus generates tokens from diverse previous tokens. For example, *British farmer* and *German farmer* are generated instead of *British farmer* and *British market*, which brings the diversity token *German*. Likewise, the diversity token *market* will also be considered in the subsequent generation. Overall, at each time step t:
1.EU's veterinary committee ruled that *German* beef, *British*
lamb and *Spanish* sheep should be banned.
2.EU bans German, *British* and *Spanish* wheat from sale.
……..
1.EU's *German* wing says it has received a warning.
2.EU had already imposed a ban on *German* exports.
……..
1.EU's veterinary committee ruled that *German* beef was to eat
after being imported from *Spanish*.
2.United Nations bans German, *Spanish* beef imports.
……..
Diversity Beam
Search
EU, German,
British, Spanish
Add
Entity-to-Text
Text-to-Entity (NER) model
Swap
Delete
EU, German
model
EU,
German,
British
Replace
EU, German,
Spanish
British, EU,
German
1.EU bans *German* beef from *British* market.
2.A *British* farmer accused the EU of failing to protect his
sheep from *German* imports. **……..**
$$Y_{[t]}=\operatorname*{argmax}_{\mathbf{y}_{1,[t]},\cdots,\mathbf{y}_{B,[t]}\in\mathcal{Y}_{t}}\sum_{b\in[B]}\hat{\Theta}\left(\mathbf{y}_{b,[t]}\right).\tag{5}$$
This process will generate the most likely texts that are selected by ranked the B beams based on the diversity beam search decoding.
## 4.3 Augmented Text Exploitation
To utilize these augmented Entity-to-Text data, we need to mark the texts with the augmented entity lists. As illustrated in Figure 2, we first automatically judge whether the entities match the tokens in the texts and remove these noisy texts of mismatched entities. For example, EU is generated as United Nations and this generated text is automatically deleted. Then as illustrated in Figure 4, we provide the details of the text marking process:
(1) If the entity is **flat**, we obtain the start and end position indexes through the exact match between entity and text.
(2) If the entity is **nested**, we first store all the overlapping entity mentions belonging to the same nested entity and match these mentions with text to obtain start and end position indexes.
(3) If the entity is **discontinuous**, we match the entity mentions which belong to the same discontinuous entity with text to obtain start and end position indexes.
Note that the process of text marking is done automatically based on the above three situations.
After we obtain these augmented data with marked British farmer British farmer flat, nested, and discontinuous entities, we naturally formulate the texts as input to NER tasks.
```
market
-1.1-0.5=-1.6
-1.1
-1.1-0.8=-1.9
market
-1.1-0.5-1=-2.6
-1.1
-1.1-0.8-2=-3.9
```
## 5 Experiments And Analyses
German farmer German farmer market
-1.8-0.4-1=-3.2
-1.8
-1.8-0.9-2=-4.7 Beam Search Decoding **Diversity Beam Search Decoding**
Figure 4: Details of marking the texts with the augmented entity lists for three NER tasks.
```
market
-1.8-0.4=-2.2
-1.8
-1.8-0.9=-2.7
```
We conduct extensive experiments on thirteen NER
datasets across three tasks (flat, nested, and discontinuous NER) and two settings (full data and low resource NER) to show the effectiveness of ENTDA on NER, and give a detailed analysis.
## 5.1 Backbone Models
We adopt two SOTA backbone models which could solve all three NER tasks:
1) **The unified Seq2Seq framework** (Yan et al.,
2021) formulates three NER tasks as an entity span text generation task without the special design of the tagging schema to enumerate spans.
2) **The unified Word-Word framework** (Li et al.,
2022b) models the neighboring relations between entity words as a 2D grid and then adopts multigranularity 2D convolutions for refining the grid representations.
These two backbone models are leveraged to solve the general NER tasks illustrated in Section 3 and demonstrate the effectiveness of ENTDA.
## 5.2 Datasets
To demonstrate that ENTDA could be used in various NER tasks and backbone models, we follow Yan et al. (2021); Li et al. (2022b) and adopt the same datasets (split) as follows:
1) **Flat NER Datasets**: We adopt the CoNLL2003 (Sang and De Meulder, 2003) and OntoNotes
(Pradhan et al., 2013) datasets. For OntoNotes, we evaluate in the English corpus with the same setting as Yan et al. (2021).
2) **Nested NER Datasets**: We adopt the ACE
2004 (Doddington et al., 2004), ACE 2005 (Christopher Walker and Maeda., 2005) and GENIA (Kim et al., 2003) datasets. Following Yan et al. (2021),
we split the ACE 2004/ACE 2005 into train/dev/test sets by 80%/10%/10% and GENIA into 81%/9%/10% respectively.
3) **Discontinuous NER Datasets** We adopt the CADEC (Karimi et al., 2015), ShARe13 (Mowery et al., 2013) and ShARe14 (Mowery et al., 2014) datasets from biomedical domain. Following Yan et al. (2021), we split the CADEC into train/dev/test sets by 70%/15%/15% and use 10% training set as the development set for ShARe13/ShARe14.
| Method / Datasets | Flat NER datasets | Nested NER datasets | Discontinuous NER datasets | AVG. | ∆ | | | | | |
|-----------------------------|---------------------|-----------------------|------------------------------|--------|-------|---------|---------|-------|-------|-------|
| CoNLL2003 | OntoNotes | ACE2004 | ACE2005 | Genia | CADEC | ShARe13 | ShARe14 | | | |
| Unified Word-Word Framework | 93.14 | 90.66 | 87.54 | 86.72 | 81.34 | 73.22 | 82.57 | 81.79 | 84.62 | - |
| +Label-wise token rep. | 93.32 | 90.78 | 87.83 | 86.98 | 81.65 | 73.47 | 82.84 | 82.07 | 84.87 | 0.25↑ |
| +Synonym replacement | 93.35 | 90.75 | 87.87 | 86.93 | 81.63 | 73.50 | 82.87 | 82.10 | 84.88 | 0.26↑ |
| +Mention replacement | 93.29 | 90.80 | 87.89 | 86.97 | 81.64 | - | - | - | - | - |
| +Shuffle within segments | 93.30 | 90.68 | 87.68 | 86.84 | 81.47 | 73.36 | 82.71 | 81.92 | 84.75 | 0.13↑ |
| +DAGA | 93.47 | 90.89 | - | - | - | - | - | - | - | - |
| +MELM | 93.60 | 91.06 | - | - | - | - | - | - | - | - |
| +ENTDA (Delete) | 93.82 | 91.23 | 88.29 | 87.54 | 82.12 | 73.86 | 83.31 | 82.45 | 85.33 | 0.71↑ |
| +ENTDA (Add) | 93.93 | 91.26 | 88.27 | 87.60 | 82.19 | 73.89 | 83.34 | 82.55 | 85.42 | 0.76↑ |
| +ENTDA (Replace) | 93.87 | 91.21 | 88.18 | 87.46 | 82.40 | 73.82 | 83.19 | 82.52 | 85.33 | 0.71↑ |
| +ENTDA (Swap) | 93.91 | 91.25 | 88.18 | 87.54 | 82.32 | 73.81 | 83.30 | 82.52 | 85.35 | 0.73↑ |
| +ENTDA (All) | 93.88 | 91.34 | 88.21 | 87.56 | 82.25 | 73.86 | 83.35 | 82.47 | 85.37 | 0.75↑ |
| +ENTDA (None) | 93.44 | 90.89 | 87.84 | 87.01 | 81.73 | 73.57 | 82.90 | 82.09 | 84.93 | 0.31↑ |
| +ENTDA (All) w/o Diver. | 93.55 | 91.01 | 87.93 | 87.23 | 81.91 | 73.75 | 83.02 | 82.20 | 85.08 | 0.46↑ |
| Unified Seq2Seq Framework | 92.78 | 89.51 | 86.19 | 84.74 | 79.10 | 70.76 | 79.69 | 79.40 | 82.78 | - |
| +Label-wise token rep. | 92.91 | 89.68 | 86.33 | 85.04 | 79.41 | 71.22 | 79.93 | 79.64 | 83.03 | 0.25↑ |
| +Synonym replacement | 92.85 | 89.59 | 86.28 | 85.32 | 79.36 | 71.18 | 79.86 | 79.55 | 83.00 | 0.22↑ |
| +Mention replacement | 92.80 | 89.80 | 86.14 | 85.01 | 79.44 | - | - | - | - | - |
| +Shuffle within segments | 92.85 | 89.40 | 86.22 | 84.99 | 79.28 | 71.13 | 79.72 | 79.50 | 82.89 | 0.11↑ |
| +DAGA | 92.92 | 89.97 | - | - | - | - | - | - | - | - |
| +MELM | 92.95 | 89.95 | - | - | - | - | - | - | - | - |
| +ENTDA (Delete) | 93.38 | 90.23 | 86.51 | 86.26 | 80.80 | 71.51 | 80.58 | 80.04 | 83.67 | 0.89↑ |
| +ENTDA (Add) | 93.27 | 90.27 | 86.73 | 86.39 | 80.88 | 71.50 | 80.92 | 80.16 | 83.77 | 0.99↑ |
| +ENTDA (Replace) | 93.32 | 90.16 | 86.55 | 86.41 | 80.74 | 71.64 | 80.64 | 80.23 | 83.71 | 0.93↑ |
| +ENTDA (Swap) | 93.45 | 90.04 | 86.40 | 86.30 | 80.67 | 71.37 | 80.37 | 80.12 | 83.59 | 0.81↑ |
| +ENTDA (All) | 93.51 | 90.31 | 86.92 | 86.39 | 80.94 | 71.70 | 80.83 | 80.36 | 83.87 | 1.09↑ |
| +ENTDA (None) | 92.90 | 90.02 | 86.28 | 85.57 | 79.66 | 71.30 | 80.13 | 79.71 | 83.20 | 0.42↑ |
| +ENTDA (All) w/o Diver. | 93.13 | 90.21 | 86.47 | 85.78 | 79.88 | 71.54 | 80.31 | 79.97 | 83.41 | 0.63↑ |
Table 2: F1 results of various NER tasks. For all three backbone models and six baseline augmentation approaches, we rerun their open source code and adopt the given parameters.
We show the detailed statistics and entity types of the datasets in Appendix A.
## 5.3 Baseline Augmentation Methods
Unlike sentence-level classification tasks, NER is a fine-grained token-level task, so we adopt six entity-level data augmentation baselines, which are designed for various NER tasks.
The four rule-based baseline augmentation techniques: (1) **Label-wise token replacement** (Dai and Adel, 2020) utilizes a binomial distribution to decide whether each token should be replaced, and then replaces the chosen token with another token that has the same entity type. (2) **Synonym replacement** (Dai and Adel, 2020) replaces the chosen token with the synonym retrieved from WordNet. (3) **Mention replacement** (Dai and Adel, 2020) replaces the chosen entity with another entity, which has the same entity type. (4) **Shuffle within**
segments (Dai and Adel, 2020) splits the sentences into segments based on whether they come from the same entity type, and uses a binomial distribution to decide whether to shuffle tokens within the same segment. The two generative baseline augmentation techniques are: (5) **DAGA** (Ding et al., 2020)
treats the NER labeling task as a text tagging task and annotates entities with generative models during generation. (6) **MELM** (Zhou et al., 2022) generates augmented data with diverse entities, which is built upon pre-trained masked language models.
MELM is further finetuned on corrupted training sentences with only entity tokens being randomly masked to focus on entity replacement.
We present another model: ENTDA **(All)**,
which adopts four entity list operations simultaneously to generate augmented texts. Note that we focus on entity-level NER augmentation tasks, so to the best of our knowledge, we have employed all entity-level augmentation techniques.
## 5.4 Experiment Settings
For ENTDA, we fine-tune the T5-Base (Raffel et al., 2020) with the initial parameters on the Entity-to-Text data of the training set and utilize the default tokenizer with max-length as 512 to preprocess the data. We use AdamW (Loshchilov and Hutter, 2018) with 5e−5 learning rate to optimize the cross entropy loss. The batch size is set to 5 and the number of training epoch is set to 3. During diversity beam search decoding, we set γ as 10 and beam width B as 3, which means that each entity set will generate three texts.
ENTDA and all baselines augment the training set by 3x for a fair comparison. For example, the number of texts in the training set is 100, we generate 300 texts and add them to the training set. We replace the language model in MELM (Zhou et al.,
2022) with XLM-RoBERTa-large (355M) (Conneau et al., 2020), and we use T5-Base (220M)
with fewer parameters for comparison.
## 5.5 Results And Analyses
Table 2 shows the average F1 results on three runs. All backbone NER models gain F1 performance improvements from the augmented data when compared with the models that only use original training data, demonstrating the effectiveness of data augmentation approaches in the various NER
tasks. Surprisingly, ENTDA (None) outperforms the baseline methods by 0.11% F1 performance among the backbone models, which shows that the generative models using a diversity beam search have sufficient capacity to generate high-quality augmented data.
More specifically, for flat NER datasets, MELM
is considered as the previous SOTA data augmentation approach. The proposed ENTDA (All) on average achieves 0.23% higher in F1 among flat NER
datasets and two backbone models. For nested and discontinuous NER datasets, the label-wise token replacement method achieves the best performance among baselines. ENTDA (All) achieve an average 0.78% F1 boost among nested and discontinuous NER datasets, which demonstrates that leveraging generative model to augment semantically coherent texts is effective.
Among all NER datasets, ENTDA is undoubtedly capable of achieving state-of-the-art results
(with student's T test p < 0.05). Except ENTDA
(All), ENTDA (Add) achieves the largest F1 performance gains of 0.99% and 0.76% on the unified Seq2Seq and Word-Word frameworks, respectively. We attribute this delightful improvement of the "Add" operation to the additionally introduced knowledge: we add the entity from the training set with the same entity type.
Ablation Study In Table 2, we remove the entity list augmentation module (ENTDA(None)), or change the diversity beam search to the traditional beam search
(ENTDA(All) w/o Diver.). We can conclude that entity list augmentation and diversity beam search modules bring an average F1 improvement of
Method / Datasets CoNLL2003 ACE2005 CADEC
Unified Word-Word Framework 86.83 79.56 65.03
+Label-wise token rep. 87.23 79.97 65.50 +Synonym replacement 87.16 80.01 65.46
+Mention replacement 87.30 80.10 - +Shuffle within segments 87.04 79.85 65.28 +DAGA 87.82 - –
+MELM 88.24 - –
+ENTDA **(Delete)** 89.91 81.94 69.12
+ENTDA **(Add)** 90.13 **82.15** 69.03 +ENTDA **(Replace)** 90.07 82.01 69.29 +ENTDA **(Swap)** 89.97 81.98 69.25
+ENTDA (All) **90.22** 82.08 **69.31**
Unified Seq2Seq Framework 85.90 77.32 62.24
+Label-wise token rep. 86.44 77.81 62.56 +Synonym replacement 86.73 77.79 62.61 +Mention replacement 86.94 77.83 –
+Shuffle within segments 86.26 77.65 62.49
+DAGA 87.05 - – +MELM 87.43 - –
+ENTDA **(Delete)** 89.20 79.10 66.04 +ENTDA **(Add)** 89.62 79.23 **66.42** +ENTDA **(Replace)** 89.41 79.02 66.21
+ENTDA **(Swap)** 88.96 78.96 65.93
+ENTDA (All) 89.82 **79.51** 66.40
0.56% and 0.38% on the eight datasets. Using the entity list augmentation module can give a richer entity combination, which brings more improvement. Adopting the diversity beam search brings more diverse texts and gains greater improvements.
## Handling Low Resource Ner Scenarios
We further introduce an extreme yet practical scenario: only limited labeled data is available.
This low resource NER scenario demonstrates that our ENTDA approach bootstraps the generalization ability of the NER model and is a quite appealing approach for data-oriented applications in the real-world. In practice, we randomly choose 10%
training data from CoNLL2003/ACE2005/CADEC
to represent the three NER tasks. Note that the fine-tuning of T5-large and our four operations on the entity list are also done on 10% training data.
From Table 3, compared to training directly on the 10% training set, leveraging the augmented data achieves the performance improvement in F1. We also observe that ENTDA approach obtains the most competitive F1 performance improvement when compared with baseline data augmentation approaches. More specifically, ENTDA
(All) achieve an average 2.97% F1 boost among three backbone models, which means ENTDA obtains more performance gains under the low resource scenario than in the full data scenario. Especially for the most challenging discontinuous dataset CADEC, ENTDA (All) obtains the largest F1 performance gain of 4.22%. Surprisingly, on
Method / Datasets Politics Natural Science Music Literature AI
![7_image_0.png](7_image_0.png)
Seq2Seq Framework 70.11 70.72 72.90 63.69 56.77
+Label-wise token rep. 70.45 70.91 73.48 63.97 57.04 +Synonym replacement 70.43 71.04 73.66 63.92 57.34
+Mention replacement 70.47 71.07 73.54 64.02 57.42
+Shuffle within segments 70.39 70.94 73.30 63.88 57.26
+DAGA 71.06 71.51 73.46 64.21 57.83
+ENTDA **(Delete)** 72.60 72.05 75.87 67.18 61.58 +ENTDA **(Add)** 72.81 **72.55** 76.20 67.82 61.97
+ENTDA **(Replace)** 72.94 72.46 76.12 67.57 61.89
+ENTDA **(Swap)** 72.47 71.89 75.58 67.06 61.37
+ENTDA (All) **72.98** 72.47 76.55 68.04 **62.31**
Table 4: F1 results of real low resource NER tasks.
![7_image_1.png](7_image_1.png)
10% CoNLL2003, ENTDA (All) has only a 2.94%
decrease in F1 performance compared to using the full training data, but ENTDA (All) saves 10x the annotated data, which shows that adopting ENTDA
is quite appealing for real-world applications.
## Tackling Real Low Resource Ner Tasks
We adopt real low resource NER datasets (Liu et al., 2021) from Wikipedia which contains politics, natural science, music, literature and artificial intelligence domains with only 100 or 200 labeled texts in the training set. ENTDA and baseline data augmentation approaches still augment the training set by 3x. From Table 4, we are delighted to observe ENTDA could quickly learn from the extremely limited Entity-to-Text data and bring 3.45% F1 performance gains over various domains.
Compared with baseline augmentation methods, ENTDA generates more diverse texts and undoubtedly gains greater advantages.
Various Augmentation Multiples Performance We further vary the multiples of augmented data from 2x to 10x the training set to study the influence of data augmentation approaches for the NER backbone models under low resource scenarios. We choose different low resource datasets
| Method / Datasets | CoNLL2003 | CADEC | AI |
|-------------------------|-------------|---------|------|
| Label-wise token rep. | 8.12 | 8.87 | 7.52 |
| Synonym replacement | 7.44 | 7.88 | 7.01 |
| Mention replacement | 7.07 | 7.42 | 6.54 |
| Shuffle within segments | 10.24 | 12.32 | 9.65 |
| DAGA | 5.46 | 6.23 | 5.07 |
| MELM | 5.27 | 6.29 | 4.82 |
| ENTDA (All) | 4.74 | 5.19 | 4.28 |
Table 5: Perplexity of the augmented data with various augmentation approaches. Lower perplexity is better.
and three representative augmentation approaches
(Mention replacement, MELM, and ENTDA (All)),
then represent the results in Figure 5.
We could observe that the unified Seq2Seq framework has more performance gains with everincreasing augmented data. ENTDA (All) consistently achieves better F1 performance, with a clear margin, compared to baseline augmentation approaches under various augmentation multiples.
Especially for Music, ENTDA (All) brings an incredible 4.01% improvement in F1 performance with only 300 augmented data.
Semantic Coherence Analysis Compared with baseline augmentation approaches, ENTDA conditionally generates texts with the diversity beam search decoding, which provides more coherent texts. We analyze the coherence through perplexity based on a large Transformer language model: GPT-2 (Radford et al.,
2019). From Table 5, ENTDA obtains the lowest perplexity. Although DAGA and MELM are also based on generative models, the texts are not natural enough since only partial text is replaced.
Diversity Evaluation We measure the diversity of augmented sentences through automatic and manual metrics. For automatic metric, we introduce the Type-Token Ratio (TTR) (Tweedie and Baayen, 1998) to evaluate the ratio of the number of different words to the total number for each original text. Higher TTR
(%) indicates more diversity in sentences. Besides that, we ask 5 annotators to give a score for the degree of diversity of the 200 generated texts, with
| Methods / Datasets | CoNLL2003 | CADEC | AI | | | |
|-------------------------|-------------|---------|--------|-----|--------|-----|
| TTR | Diver. | TTR | Diver. | TTR | Diver. | |
| Label-wise token rep. | 81.2 | 3.1 | 80.5 | 3.4 | 81.9 | 3.3 |
| Synonym replacement | 81.9 | 3.3 | 80.1 | 3.5 | 82.6 | 3.4 |
| Mention replacement | 83.8 | 3.9 | 82.9 | 3.6 | 84.2 | 3.8 |
| Shuffle within segments | 72.9 | 2.4 | 71.6 | 2.0 | 73.7 | 2.1 |
| DAGA | 73.8 | 2.8 | 74.1 | 2.6 | 74.3 | 3.1 |
| MELM | 77.2 | 3.2 | 78.1 | 2.9 | 76.6 | 3.0 |
| ENTDA (All) | 86.4 | 4.3 | 85.1 | 4.5 | 83.7 | 4.4 |
Approach: Original Sentence Entity: **unsupervised learning, principal component analysis**
, cluster analysis Entity Type: **field, algorithm, algorithm** Text: The main methods used in *unsupervised learning* are principal component analysis and *cluster analysis*.
Approach: ENTDA (Add)
Entity: **unsupervised learning, principal component analysis**
, cluster analysis, dimension reduction Entity Type: **field, algorithm, algorithm, algorithm** Text: In unsupervised learning, principal component analysis, *cluster analysis* and dimension reduction are used to reduce the number of variables in a task.
Approach: ENTDA (Delete)
Entity: **unsupervised learning, principal component analysis** Entity Type: **field, algorithm** Text: In the field of unsupervised learning, *principal component analysis* is used to model the learning process.
Operation: ENTDA (Replace)
Entity: **unsupervised learning, principal component analysis**
, dimension reduction Entity Type: **field, algorithm, algorithm** Text: In the field of unsupervised learning, *principal component analysis* and dimension reduction are used to reduce the size of the data.
Operation: ENTDA (Swap)
Entity: **unsupervised learning, cluster analysis**
, principal component analysis Entity Type: **field, algorithm, algorithm**
Text: *Unsupervised learning* uses *cluster analysis* and *principal component analysis* to learn a task.
Operation: ENTDA (All)
Entity: **unsupervised learning, dimension reduction**
, principal component analysis Entity Type: **field, algorithm, algorithm** Text: *Unsupervised learning* uses *cluster analysis* to achieve the purpose of dimension reduction for better learning a task.
Approach: Mention Replacement Entity: heterodyning, principal component analysis
, cluster analysis Entity Type: **field, algorithm, algorithm** Text: The main methods used in *heterodyning* are *principal component analysis* and *cluster analysis*.
Operation: DAGA
Text: *Unsupervised learning* uses *principal component analysis* and *cluster analysis*.
Entity (Unchanged): **unsupervised learning, principal component**
analysis, cluster analysis Entity Type (Unchanged): **field, algorithm, algorithm**
![8_image_0.png](8_image_0.png)
score range of 1~5. According to the annotation guideline in Appendix D, a higher score indicates the method can generate more diverse texts.
We present the average scores on the datasets in Table 6. ENTDA could obtain 7.8% TTR and 1.4 diversity performance boost in average compared to MELM.
## 6 Case Study
We show eight approaches to obtain augmented data for the AI domain in Table 7. Compared with baseline augmentation methods, ENTDA introduces a knowledge expansion and conditionally generates texts based on the diversity beam search, which provides more coherent and diverse texts. For example, The Mention Replacement approach replaces the entities unsupervised learning with heterodyning, which ignores the semantics of the context and makes an ungrammatical replacement, resulting in incoherent and unreasonable texts. For the DAGA approach, it simply stacks three entities: unsupervised learning, principal component analysis, cluster analysis in the text, which could not provide knowledge expansions to the NER models.
## 7 Conclusions And Future Work
In this paper, we propose an Entity-to-Text based data augmentation approach ENTDA for NER
tasks. Compared with traditional rule-based augmentation methods that break semantic coherence, or use Text-to-Text based augmentation methods that cannot be used on nested and discontinuous NER tasks, our method can generate semantically coherent texts for all NER tasks, and use the diversity beam search to improve the diversity of augmented texts. Experiments on thirteen public real-world datasets, and coherence and diversity analysis show the effectiveness of ENTDA. Moreover, we can also apply the method of data augmentation to low-resource relation extraction (Hu et al., 2020, 2021b,a; Liu et al., 2022b; Hu et al.,
2023), natural language inference (Li et al., 2023, 2022c), semantic parsing (Liu et al., 2022a, 2023),
and other NLP application tasks, thus realizing knowledge enhancement based on data augmentation approach.
## 8 Limitations
We discuss the limitations of our method from three perspectives.
First, our method is based on pre-trained language models, so compared to rule-based data augmentation methods (synonym replacement, shuffle within segments, etc.), our method requires higher time complexity.
Second, the entity matching process (Section 4.3) will discard sentences which cannot match entities in the entity list, which will affect the utilization of data.
Third, our data augmentation method based on the pre-trained language models, whose generalization ability is limited since the augmented knowledge comes from the pre-trained language models.
However, the knowledge in pre-trained language models is limited and not domain-specific. How to improve the generalization ability of the data augmentation methods is a future research work.
## 9 Acknowledgement
We thank the reviewers for their valuable comments. Yong Jiang and Lijie Wen are the corresponding authors. Xuming Hu, Aiwei Liu and Lijie Wen were partially supported by the National Key Research and Development Program of China
(No. 2019YFB1704003), the National Nature Science Foundation of China (No. 62021002), Tsinghua BNRist and Beijing Key Laboratory of Industrial Bigdata System and Application. Philip S. Yu was partially supported by the NSF under grants III-1763325, III-1909323, III-2106758, SaTC-1930941.
## References
Ateret Anaby-Tavor, Boaz Carmeli, Esther Goldbraich, Amir Kantor, George Kour, Segev Shlomov, Naama Tepper, and Naama Zwerdling. 2020. Do not have enough data? deep learning to the rescue! In Proc.
of AAAI, volume 34, pages 7383–7390.
Hengyi Cai, Hongshen Chen, Yonghao Song, Cheng Zhang, Xiaofang Zhao, and Dawei Yin. 2020. Data manipulation: Towards effective instance learning for neural dialogue generation via learning to augment and reweight. In *Proc. of ACL*, pages 6334–6343.
David Maxwell Chickering. 2002. Optimal structure identification with greedy search. *Journal of machine* learning research, 3(Nov):507–554.
Jason PC Chiu and Eric Nichols. 2016. Named entity recognition with bidirectional lstm-cnns. *TACL*,
4:357–370.
Julie Medero Christopher Walker and Kazuaki Maeda.
2005. Ace 2005 multilingual training corpus. In Linguistic Data Consortium, Philadelphia 57.
Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Édouard Grave, Myle Ott, Luke Zettlemoyer, and Veselin Stoyanov. 2020. Unsupervised cross-lingual representation learning at scale. In Proc.
of ACL, pages 8440–8451.
Xiang Dai and Heike Adel. 2020. An analysis of simple data augmentation for named entity recognition.
In *Proc. of COLING*, pages 3861–3867, Barcelona, Spain (Online). International Committee on Computational Linguistics.
Bosheng Ding, Linlin Liu, Lidong Bing, Canasai Kruengkrai, Thien Hai Nguyen, Shafiq Joty, Luo Si, and Chunyan Miao. 2020. DAGA: Data augmentation with a generation approach for low-resource tagging tasks. In *Proc. of EMNLP*, pages 6045–6057, Online.
Association for Computational Linguistics.
George R Doddington, Alexis Mitchell, Mark Przybocki, Lance Ramshaw, Stephanie Strassel, and Ralph Weischedel. 2004. The automatic content extraction (ace) program–tasks, data, and evaluation.
In *Proc. of LREC*.
Steven Y Feng, Varun Gangal, Jason Wei, Sarath Chandar, Soroush Vosoughi, Teruko Mitamura, and Eduard Hovy. 2021. A survey of data augmentation approaches for nlp. In *Proc. of ACL-IJCNLP: Findings*, pages 968–988.
Yutai Hou, Yijia Liu, Wanxiang Che, and Ting Liu.
2018. Sequence-to-sequence data augmentation for dialogue language understanding. In *Proc. of COLING*, pages 1234–1245.
Xuming Hu, Zhaochen Hong, Chenwei Zhang, Irwin King, and Philip S Yu. 2023. Think rationally about what you see: Continuous rationale extraction for relation extraction. *arXiv preprint arXiv:2305.03503*.
Xuming Hu, Lijie Wen, Yusong Xu, Chenwei Zhang, and Philip S. Yu. 2020. Selfore: Self-supervised relational feature learning for open relation extraction.
In *Proc. of EMNLP*, pages 3673–3682.
Xuming Hu, Chenwei Zhang, Fukun Ma, Chenyao Liu, Lijie Wen, and Philip S. Yu. 2021a. Semi-supervised relation extraction via incremental meta self-training.
In *Findings of EMNLP*, pages 487–496.
Xuming Hu, Chenwei Zhang, Yawen Yang, Xiaohe Li, Li Lin, Lijie Wen, and Philip S. Yu. 2021b. Gradient imitation reinforcement learning for low resource relation extraction. In *Proc. of EMNLP*, pages 2737– 2746.
Liang Huang. 2008. Forest reranking: Discriminative parsing with non-local features. In Proceedings of ACL-08: HLT, pages 586–594.
Sarvnaz Karimi, Alejandro Metke-Jimenez, Madonna Kemp, and Chen Wang. 2015. Cadec: A corpus of adverse drug event annotations. *Journal of biomedical* informatics, 55:73–81.
J-D Kim, Tomoko Ohta, Yuka Tateisi, and Jun'ichi Tsujii. 2003. Genia corpus—a semantically annotated corpus for bio-textmining. *Bioinformatics*,
19(suppl_1):i180–i182.
Sosuke Kobayashi. 2018. Contextual augmentation:
Data augmentation by words with paradigmatic relations. In *Proc. of NAACL-HLT*.
Varun Kumar, Ashutosh Choudhary, and Eunah Cho.
2020. Data augmentation using pre-trained transformer models. In Proceedings of the 2nd Workshop on Life-long Learning for Spoken Language Systems, pages 18–26.
Bohan Li, Yutai Hou, and Wanxiang Che. 2022a. Data augmentation approaches in natural language processing: A survey. *AI Open*.
Fei Li, ZhiChao Lin, Meishan Zhang, and Donghong Ji.
2021. A span-based model for joint overlapped and discontinuous named entity recognition. In *Proc. of* ACL-IJCNLP, pages 4814–4828, Online. Association for Computational Linguistics.
Jingye Li, Hao Fei, Jiang Liu, Shengqiong Wu, Meishan Zhang, Chong Teng, Donghong Ji, and Fei Li. 2022b.
Unified named entity recognition as word-word relation classification. In *Proc. of AAAI*, volume 36, pages 10965–10973.
Shuang Li, Xuming Hu, Li Lin, Aiwei Liu, Lijie Wen, and Philip S. Yu. 2023. A multi-level supervised contrastive learning framework for low-resource natural language inference. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 31:1771–
1783.
Shu'ang Li, Xuming Hu, Li Lin, and Lijie Wen.
2022c. Pair-level supervised contrastive learning for natural language inference. *arXiv preprint* arXiv:2201.10927.
Xiaoya Li, Jingrong Feng, Yuxian Meng, Qinghong Han, Fei Wu, and Jiwei Li. 2020. A unified MRC framework for named entity recognition. In Proc.
of ACL, pages 5849–5859, Online. Association for Computational Linguistics.
Aiwei Liu, Xuming Hu, Li Lin, and Lijie Wen. 2022a.
Semantic enhanced text-to-sql parsing via iteratively learning schema linking graph. In *Proc. of KDD*,
pages 1021–1030.
Aiwei Liu, Xuming Hu, Lijie Wen, and Philip S
Yu. 2023. A comprehensive evaluation of chatgpt's zero-shot text-to-sql capability. *arXiv preprint* arXiv:2303.13547.
Shuliang Liu, Xuming Hu, Chenwei Zhang, Shu'ang Li, Lijie Wen, and Philip S. Yu. 2022b. Hiure: Hierarchical exemplar contrastive learning for unsupervised relation extraction. In *Proc. of NAACL-HLT*, pages 5970–5980.
Zihan Liu, Yan Xu, Tiezheng Yu, Wenliang Dai, Ziwei Ji, Samuel Cahyawijaya, Andrea Madotto, and Pascale Fung. 2021. Crossner: Evaluating crossdomain named entity recognition. In *Proc. of AAAI*,
volume 35, pages 13452–13460.
Ilya Loshchilov and Frank Hutter. 2018. Fixing weight decay regularization in adam.
Junghyun Min, R Thomas McCoy, Dipanjan Das, Emily Pitler, and Tal Linzen. 2020. Syntactic data augmentation increases robustness to inference heuristics. In Proc. of ACL, pages 2339–2352.
Danielle L. Mowery, Sumithra Velupillai, Brett R.
South, Lee M. Christensen, David Martínez, Liadh Kelly, Lorraine Goeuriot, Noémie Elhadad, Sameer Pradhan, Guergana K. Savova, and Wendy W. Chapman. 2013. Task 1: Share/clef ehealth evaluation lab 2013. In *CLEF*.
Danielle L. Mowery, Sumithra Velupillai, Brett R.
South, Lee M. Christensen, David Martínez, Liadh Kelly, Lorraine Goeuriot, Noémie Elhadad, Sameer Pradhan, Guergana K. Savova, and Wendy W. Chapman. 2014. Task 2: Share/clef ehealth evaluation lab 2014. In *CLEF*.
Sameer Pradhan, Alessandro Moschitti, Nianwen Xue, Hwee Tou Ng, Anders Björkelund, Olga Uryupina, Yuchen Zhang, and Zhi Zhong. 2013. Towards robust linguistic analysis using ontonotes. In Proceedings of the Seventeenth Conference on Computational Natural Language Learning, pages 143–152.
Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. 2019. Language models are unsupervised multitask learners.
Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. *Journal of Machine Learning Research*, 21:1–
67.
Gözde Gül ¸Sahin and Mark Steedman. 2018. Data augmentation via dependency tree morphing for lowresource languages. In *Proc. of EMNLP*, pages 5004–
5009.
Erik Tjong Kim Sang and Fien De Meulder. 2003. Introduction to the conll-2003 shared task: Languageindependent named entity recognition. In *Proc. of* HLT-NAACL, pages 142–147.
Connor Shorten and Taghi M Khoshgoftaar. 2019. A
survey on image data augmentation for deep learning.
Journal of Big Data, 6(1):1–48.
Fiona J Tweedie and R Harald Baayen. 1998. How variable may a constant be? measures of lexical richness in perspective. *Computers and the Humanities*,
32(5):323–352.
Ashwin K Vijayakumar, Michael Cogswell, Ramprasaath R Selvaraju, Qing Sun, Stefan Lee, David Crandall, and Dhruv Batra. 2016. Diverse beam search: Decoding diverse solutions from neural sequence models.
Jason Wei and Kai Zou. 2019. Eda: Easy data augmentation techniques for boosting performance on text classification tasks. In *Proc. of EMNLP-IJCNLP*, pages 6382–6388.
Sam Wiseman and Alexander M Rush. 2016. Sequenceto-sequence learning as beam-search optimization.
In *Proc. of EMNLP*, pages 1296–1306.
Mengzhou Xia, Xiang Kong, Antonios Anastasopoulos, and Graham Neubig. 2019. Generalized data augmentation for low-resource translation. In Proc. of ACL, pages 5786–5796.
Mingbin Xu, Hui Jiang, and Sedtawut Watcharawittayakul. 2017. A local detection approach for named entity recognition and mention detection. In Proc. of ACL, pages 1237–1247.
Hang Yan, Tao Gui, Junqi Dai, Qipeng Guo, Zheng Zhang, and Xipeng Qiu. 2021. A unified generative framework for various NER subtasks. In *Proc.*
of ACL, pages 5808–5822, Online. Association for Computational Linguistics.
Adams Wei Yu, David Dohan, Minh-Thang Luong, Rui Zhao, Kai Chen, Mohammad Norouzi, and Quoc V
Le. 2018. Qanet: Combining local convolution with global self-attention for reading comprehension. In ICLR.
Juntao Yu, Bernd Bohnet, and Massimo Poesio. 2020.
Named entity recognition as dependency parsing. In Proc. of ACL, pages 6470–6476.
Xiang Zhang, Junbo Zhao, and Yann LeCun. 2015.
Character-level convolutional networks for text classification. *NeurIPS*, 28:649–657.
Ran Zhou, Xin Li, Ruidan He, Lidong Bing, Erik Cambria, Luo Si, and Chunyan Miao. 2022. Melm: Data augmentation with masked entity language modeling for low-resource ner. In *Proc. of ACL*, pages 2251–2262.
## A Dataset Statistics
we show the detailed statistics of the datasets in Table 8. We further give details on entity types for thirteen datasets in Table 9.
## B Entity Addition And Replacement Strategy
ENTDA add and replace the entity in the training set that has the same entity type. This strategy can provide the knowledge expansion during the generation, which is an appealing property when the hand-craft knowledge base is difficult to construct for augmentation approaches.
If we directly replace the entities in the text with other entities of the same type, this is equivalent to the baseline: Mention Replacement. From Table 2, 3 and 4, we could observe that compared to ENTDA (Replace), the improvement of F1 performance is greatly reduced. The main reason is that context-free entities are replaced, resulting in obscure and unreasonable texts. For example, "EU's German *wing says it has received a warning*." may be changed to "EU's World War Two *wing says it* has received a warning." since the two entities share the same type: MISC.
## C Hyperparameter Analysis
We study the hyperparameter γ in the diversity beam search, which represents the degree of probability penalty in the decoding process and determines the diversity of sentences. Modifying γ allows us to control the diversity of the texts. We vary the γ from 1 to 100 and represent the F1 results using the unified Seq2Seq framework and ENTDA
(All) in Table 10. With no more than 1% F1 fluctuating results among three datasets, ENTDA appears robust to the choice of γ.
## D Annotation Guideline
Each annotator needs to carefully read each augmented text, compare it with the original text, and give a score according to the following criteria.
Note that all augmented texts for a dataset are given an average score.
- Score:1. The augmented texts under the same original text are almost the same.
- Score:2. The augmented texts under the same original text are slightly different, with serious grammatical errors.
- Score:3. The augmented texts under the same original text are slightly different, and there are almost no grammatical errors.
- Score:4. The augmented texts under the same original text are diverse, with serious grammatical errors.
- Score:5. The augmented texts under the same original text are diverse, and there are almost no grammatical errors.
| Sentence | Entity | | | | | | | | | |
|-------------------|----------|--------|-------|----------|-------|---------|--------|----------|-------|------|
| #All | #Train | #Dev | #Test | #Avg.Len | #All | #Nes. | #Dis. | #Avg.Len | | |
| CoNLL2003 | 20,744 | 17,291 | - | 3,453 | 14.38 | 35,089 | - | - | 1.45 | |
| OntoNotes | 76,714 | 59,924 | 8,528 | 8,262 | 18.11 | 104,151 | - | - | 1.83 | |
| Politics | 1,392 | 200 | 541 | 651 | 50.15 | 22,854 | - | - | 1.35 | |
| Nature Science | 1,193 | 200 | 450 | 543 | 46.50 | 14,671 | - | - | 1.72 | |
| Music | 936 | 100 | 380 | 456 | 48.40 | 15,441 | - | - | 1.37 | |
| Literature | 916 | 100 | 400 | 416 | 45.86 | 11,391 | - | - | 1.47 | |
| AI | 881 | 100 | 350 | 431 | 39.57 | 8,260 | - | - | 1.55 | |
| Flat NER | ACE2004 | 8,512 | 6,802 | 813 | 897 | 20.12 | 27,604 | 12,626 | - | 2.50 |
| Nested NER | ACE2005 | 9,697 | 7,606 | 1,002 | 1,89 | 17.77 | 30,711 | 12,404 | - | 2.28 |
| Genia | 18,546 | 15,023 | 1,669 | 1,854 | 25.41 | 56,015 | 10,263 | - | 1.97 | |
| CADEC | 7,597 | 5,340 | 1,097 | 1,160 | 16.18 | 6,316 | 920 | 670 | 2.72 | |
| Discontinuous NER | ShARe13 | 18,767 | 8,508 | 1,250 | 9,009 | 14.86 | 11,148 | 663 | 1,088 | 1.82 |
| ShARe14 | 34,614 | 17,404 | 1,360 | 15,850 | 15.06 | 19,070 | 1,058 | 1,656 | 1.74 | |
Table 8: Dataset statistics. "\#" denotes the amount. "Nes." and "Dis." denote nested and discontinuous entities respectively.
| Datasets | Entity Types |
|-----------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| CoNLL2003 | location, organization, person, miscellaneous, person, norp, facility, organization, gpe, |
| OntoNotes | location, product, event, work of art, law, language date, time, percent, money, quantity, ordinal, cardinal |
| ACE2004 | gpe, organization, person, facility, vehicle, location, wea |
| ACE2005 | gpe, organization, person, facility, vehicle, location, wea |
| Genia | protein, cell_type, cell_line, RNA, DNA, |
| CADEC | ade |
| ShARe13 | disorder |
| ShARe14 | disorder |
| Politics | politician, person, organization, political party, event, election, country, location, miscellaneous scientist, person, university, organization, country, enzyme, protein, chemical compound, chemical element, |
| Natural Science | event, astronomical object, academic journal, award, location, discipline, theory, miscellaneous music genre, song, band, album, musical artist, |
| Music | musical instrument, award, event, country, location, organization, person, miscellaneous |
| Literature | writer, award, poem, event, magazine, person, location, book, organization, country, miscellaneous |
| AI | field, task, product, algorithm, researcher, metrics, university country, person, organization, location, miscellaneous |
Table 9: Detailed statistics on entity types for thirteen NER datasets.
| Datasets / γ | 1 | 5 | 10 | 25 | 50 | 100 |
|----------------|-------|-------|-------|-------|-------|-------|
| CoNLL2003 | 93.01 | 93.26 | 93.51 | 93.44 | 93.28 | 93.16 |
| ACE2005 | 85.46 | 86.41 | 86.39 | 86.30 | 86.06 | 85.77 |
| CADEC | 70.88 | 71.34 | 71.70 | 71.64 | 71.42 | 70.99 |
Table 10: F1 results under different γ using the unified Seq2Seq framework and ENTDA (All).
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
✓ A1. Did you describe the limitations of your work?
Section 7
✓ A2. Did you discuss any potential risks of your work?
Section 7
✓ A3. Do the abstract and introduction summarize the paper's main claims?
Abstract, Section 1
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✓ **Did You Use Or Create Scientific Artifacts?** Section 4, Section 5, Appendix A, Appendix B
✓ B1. Did you cite the creators of artifacts you used?
Section 4, Section 5, Appendix A, Appendix B
✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
Section 4, Section 5, Appendix A, Appendix B
✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
Section 4, Section 5, Appendix A, Appendix B
✓ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
Section 4, Section 5
✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
Section 5.4, Appendix A
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
Section 5.2, Appendix A
## C ✓ **Did You Run Computational Experiments?** Section 5, Appendix D
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
Section 5.4 The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance.
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Section 5.4, Appendix D
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
Section 5.5
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
Section 5.4, Section 5.5, Appendix D
D ✓ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Section 5.5, Appendix F
✓ D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
Section 5.5, Appendix F
✓ D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)?
Section 5.5, Appendix F
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
Not applicable. Left blank.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
Not applicable. Left blank.
✓ D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
Section 5.5 |
opedal-etal-2023-world | World Models for Math Story Problems | https://aclanthology.org/2023.findings-acl.579 | Solving math story problems is a complex task for students and NLP models alike, requiring them to understand the world as described in the story and reason over it to compute an answer. Recent years have seen impressive performance on automatically solving these problems with large pre-trained language models and innovative techniques to prompt them. However, it remains unclear if these models possess accurate representations of mathematical concepts. This leads to lack of interpretability and trustworthiness which impedes their usefulness in various applications. In this paper, we consolidate previous work on categorizing and representing math story problems and develop MathWorld, which is a graph-based semantic formalism specific for the domain of math story problems. With MathWorld, we can assign world models to math story problems which represent the situations and actions introduced in the text and their mathematical relationships. We combine math story problems from several existing datasets and annotate a corpus of 1,019 problems and 3,204 logical forms with MathWorld. Using this data, we demonstrate the following use cases of MathWorld: (1) prompting language models with synthetically generated question-answer pairs to probe their reasoning and world modeling abilities, and (2) generating new problems by using the world models as a design space. | # World Models For Math Story Problems
Andreas Opedal⊗,± Niklas Stoehr⊗ Abulhair Saparov÷ **Mrinmaya Sachan**⊗
⊗ETH Zürich ÷New York University
±Max Planck ETH Center for Learning Systems [email protected] [email protected] [email protected] [email protected]
## Abstract
Solving math story problems is a complex task for students and NLP models alike, requiring them to understand the world as described in the story and reason over it to compute an answer. Recent years have seen impressive performance on automatically solving these problems with large pre-trained language models and innovative techniques to prompt them. However, it remains unclear if these models possess accurate representations of mathematical concepts. This leads to lack of interpretability and trustworthiness which impedes their usefulness in various applications.
In this paper, we consolidate previous work on categorizing and representing math story problems and develop MATHWORLD, which is a graph-based semantic formalism specific for the domain of math story problems. With MATHWORLD, we can assign world models to math story problems which represent the situations and actions introduced in the text and their mathematical relationships. We combine math story problems from several existing datasets and annotate a corpus of 1, 019 problems and 3, 204 logical forms with MATHWORLD. Using this data, we demonstrate the following use cases of MATHWORLD: (1) prompting language models with synthetically generated questionanswer pairs to probe their reasoning and world modeling abilities, and (2) generating new problems by using the world models as a design space.
https://github.com/eth-nlped/
mathworld
## 1 Introduction
Math story problems (MSPs) are short narrative texts that describe a dynamic situation in the world consisting of entities, actions and states, followed by a quantitative question about the world, as displayed in Fig. 1. The task of automatically solving MSPs has received much research attention
![0_image_0.png](0_image_0.png)
Figure 1: An example of a world model in MATHWORLD. MATHWORLD can be used to develop interpretable MSP solvers, to study the reasoning of LLMs and as a design space for generation of new MSPs.
in NLP. While earlier models for solving MSPs
(Hosseini et al., 2014; Kushman et al., 2014; Roy and Roth, 2015) focused on extracting various features from text to learn probabilistic models, recent efforts have used pre-trained large language models (LLMs) (Yang et al., 2021; Drori et al., 2022; Lewkowycz et al., 2022, inter alia). Although they display high performance on benchmarks, it has been shown that such neural models tend to rely heavily on shallow heuristics, raising questions about whether the models can indeed "understand" MSPs and robustly solve them (Patel et al., 2021; Stolfo et al., 2023).
From the human side, solving MSPs requires a wide set of skills. A student must not only perform a set of given computations, but first be able to process the text and map it into a corresponding world model that represents the situation described in text
(Cummins et al., 1988; Stern, 1993). Inspired by this, we take a step towards developing more interpretable solvers and introduce MATHWORLD, a semantic world model framework for MSPs.
MATHWORLD can be viewed as a formalism for reasoning in dynamical problem settings
(McCarthy, 1963; Reiter, 1991), specific to the domain of MSPs. It represents each problem as a directed graph called a *world model* (§ 3). The 9088 nodes in a world model are containers (§ 3.1)
representing entities' possession of some quantity
(Hosseini et al., 2014) and the edges represent various types of mathematical relations between the quantities (§ 3.2). The relations correspond to mathematical concepts that have been previously shown to cover a vast majority of MSPs (Mitra and Baral, 2016; Roy and Roth, 2018). We annotate a MATHWORLD dataset consisting of 1, 019 English MSPs from various widely-used datasets (KoncelKedziorski et al., 2016b; Miao et al., 2020; Patel et al., 2021), which we make publicly available.
There are several potential use cases of MATHWORLD, of which we discuss three. First, one natural application is that of developing interpretable MSP solvers. A solver using MATHWORLD follows two steps: (i) semantic parsing and (ii) reasoning. The semantic parser takes an MSP text and outputs a world model based on the explicit information in the text. The reasoner then takes the world model and solves the problem based on the quantities and their relations. Our experiments show that LLMs struggle to build accurate and well-formed world models; we encourage future work to develop stronger semantic parsers for MATHWORLD.
Another use case of MATHWORLD is as a tool to study the reasoning capabilities of existing solvers.
For instance, we can use the world model annotations to automatically generate synthetic subquestions for the MSPs. Using such subquestions, we give empirical evidence that GPT-3 (Brown et al.,
2020) benefits from the structured knowledge derived by world models in its ability to solve MSPs.
We further use our synthetic questions to understand if GPT-3 can indeed answer these intermediate questions about the world described in the MSPs, and not just the final question. We find that for problems where GPT-3 answers the final question correctly, it can only answer 64% of the intermediate questions. This suggests that GPT-3 is not accurately building world models for these problems but might be relying on reasoning shortcuts.
Finally, MATHWORLD can be considered as a design space for generating interesting new MSPs.
We illustrate the usefulness of MATHWORLD for the task of generating MSPs by prompting an LLM using the world model annotations.
## 2 Related Work
Math story problems in NLP Although the problem of automatically solving MSPs has gathered substantial interest in NLP (Roy and Roth, 2015; Kushman et al., 2014; Huang et al.,
2017; Amini et al., 2019; Xie and Sun, 2019; Drori et al., 2022), the focus has traditionally been on improving answer accuracy rather than providing didactic human-interpretable solutions (Shridhar et al., 2022). Some approaches map the text to expression trees (Koncel-Kedziorski et al., 2015; Yang et al., 2022; Roy and Roth, 2017) or explicitly model arithmetic concepts (Mitra and Baral, 2016; Roy and Roth, 2018). However, few if any computational works have attempted to solve MSPs by using mental models (Johnson-Laird, 1983), which is a common framework for analyzing how humans solve MSPs (Kintsch and Greeno, 1985). Taking inspiration from mental models of MSPs, we offer MATHWORLD as a computational model (fully expressible in first-order logic, App. D) which represents reasoning steps, arithmetic concepts and fictional elements in a human-readable graph format. We hope that such an approach can support intelligent tutoring systems (Anderson et al., 1995),
e.g., by delivering feedback and hints (Zhou et al.,
1999; Fossati, 2008) or generating new MSPs
(Polozov et al., 2015; Koncel-Kedziorski et al.,
2016a; Srivastava and Goodman, 2021).
In particular, we draw inspiration from Hosseini et al. (2014), who propose a symbolic approach that maps the text to container-based states. However, their symbolic representation is purely extracted from syntactic rules without human annotation. Further, their approach only covers problems that involve a transfer of some quantity between some actors (although they do not use that terminology), requiring addition and/or subtraction. In contrast, MATHWORLD is more closely tied to the MSP's semantics. It covers a strictly larger set of problem types, involving more concepts and all four basic arithmetic operators (+, −, ×, ÷). See Table 1 for a comparison between MATHWORLD
and Hosseini et al. (2014), as well as Mitra and Baral (2016) and Roy and Roth (2018) from which we adopt the taxonomy over arithmetic concepts.
Reasoning with large language models LLMs have displayed impressive performance on numerical reasoning tasks (Brown et al., 2020; Chowdhery et al., 2022), particularly by the help of careful prompt engineering (Wei et al., 2022; Shridhar et al., 2023; Zhou et al., 2023). While language models have been argued to be intrinsically limited in their ability to perform human-like rea-
| Arithmetic | Conceptual | granularity | Annotations? | Mapping to | |
|--------------------------|--------------|-----------------------|----------------------|--------------|----|
| Semantic | | | | | |
| coverage | coverage | formal logic? | | | |
| Transfer Rate | | | | | |
| MATHWORLD | (+, −, ×, ÷) | World model | Yes | Yes | |
| Comparison Part-whole | | | | | |
| Hosseini et al. (2014) | (+, −) | Transfer | World model | No | No |
| Transfer | Concepts | | | | |
| Mitra and Baral (2016) | (+, −) | Comparison (add) | & equations | Yes | No |
| Part-whole Transfer Rate | | | | | |
| Roy and Roth (2018) | (+, −, ×, ÷) | Comparison Part-whole | Concepts & equations | No | No |
soning (Bender and Koller, 2020), the mechanism by which they find answers in complex reasoning tasks is currently an active area of research (Tafjord et al., 2021; Saparov and He, 2023). MATHWORLD
provides ground truth world model annotations, which is valuable in such studies (as demonstrated in § 5.2). One other aspect of LLMs that may limit them when applied to reasoning is that they produce natural language text, which may be ambiguous and diverse. These considerations motivate us to study MSPs as structured representations of meaning, which can in turn be used to generate natural language (Saparov and Mitchell, 2022).
Semantic parsing MATHWORLD can be viewed as a domain-specific semantic formalism. Our work thus also relates closely to semantic parsing, particularly of graph-based structures (Banarescu et al., 2013; Cai and Lam, 2019; Zhang et al., 2019; Bai et al., 2022). However, while most other formalisms consider meaning only at the sentence level, our world model graphs span the meaning across multiple sentences.
## 3 Mathw**Orld**
In this section, we present our world model formalism MATHWORLD. We formalize an MSP
as a sequence of n sentences s = s1 *◦ · · · ◦* sn. It can be separated into a **body** b and a **question** q, such that s = b ◦ q. The body is further partitioned into a sequence of n − 1 declarative sentences b = s1 *◦ · · · ◦* sn−1 and the question consists of a single interrogative sentence q = sn.
World models in MATHWORLD are directed and labelled graphs, denoted g.
1 We refer to the nodes of the graph as **containers** (§ 3.1) and the edges of the graph as **relations** (§ 3.2). Each container and relation is labelled with a set of properties. One such property is the **quantity**, which may be either an explicit number mentioned in text or a variable representing an unknown number. The containers and relations along with their properties specify the equations induced by the MSP. In addition, each g is associated with a **reference variable** r, which points to the variable in g that holds the correct answer to the question as stated in q. We consider each s to be associated with some structure (*g, r*).
We say that g is **faithful** if it represents the semantics of the problem text according to the framework of MATHWORLD. Further, g is **complete** if r can be solved with the equations induced by g. A complete world model is **correct** if, when evaluated, r gives the correct answer to the problem. See Fig. 1 for an example of a world model.
In order to allow for incremental parsing, we segment the world models into sentence-level logical forms mi, i = 1*, . . . , n*. The logical form is a sequence that represents the containers and/or relations associated with the corresponding sentence.2 We can convert (m1*, . . . , m*n) to a world model graph, and vice versa. The two representations are nearly equivalent, with the exception of a few caveats (see App. F for details). There is no bound on the problem length and, by extension, the number of logical forms. MATHWORLD is thus able to represent problems of any arbitrary number of 1The graphs may be cyclic. Although in practice, they tend to be acyclic.
2A logical form may be empty. Such will be the case for text outside the coverage of MATHWORLD.
reasoning steps. The assignment of logical forms may be ambiguous in the sense that there may be multiple faithful logical forms for a given sentence
(discussed in App. B).
We consider subgraphs gi, for sentence i, of the final graph g. A subgraph gi corresponds to the logical forms up to sentence i, i.e., (m1*, . . . , m*i) 7→
gi. We refer to the subgraph for some sentence index i as the **state** of i. As an example of how world models are built incrementally with states, consider Fig. 1. The first sentence maps to the container for label *Will* holding the entity *money* of quantity 83 with unit *dollar*. The second sentence provides information on an update to Will's possessed money, a TRANSFER relation (§ 3.2.1). Finally, the question sentence introduces rate information, a RATE
relation (§ 3.2.2), between money and toys.
In the next sections, we describe the details of containers and relations in depth.
## 3.1 Containers
We adopt and modify the containers described in the model of Hosseini et al. (2014). Semantically, containers represent containment/possession. We refer to the possessor in the text as the **label** of the container.3In Fig. 1, the container label is *Will* for all containers (although in general the label can vary across containers). The label must be a noun plus any associated noun adjuncts (like elementary school). In addition to label, a container may have the following four properties:
Entity: The entity is *what* is possessed in the container. It is a noun, for which there may be an associated count. When expressed in a problem text, it must be the head of a noun phrase. In Fig. 1, money and toy are entities.4 Quantity: The quantity is the number associated with the entity. It may be known, in which case it will be a positive real number, or unknown, in which case it will be a variable.
Attribute: The attribute is a modifier for the entity. It is often an adjective, but may take other forms as well. The attribute is an optional property.
a mass noun, but may exist in other cases as well.
For example, "liter of water" and "kg of apples" will both be assigned to containers with units. The unit is an optional property.
Entity, attribute and unit are written in their lemmatized forms. The label is not, in order to be able to distinguish between a set (plural: *friends*) and an element of some set (singular: *friend*).
Note that the containers take a variable number of properties; having arity 3, 4 or 5. Two containers are **equal** if they have the same arity and the same properties. We refer to a container's **structure** as its container label, entity, attribute (if exists) and unit (if exists). Two containers are **structurally**
equal if they have the same structure.
## 3.2 Relations
Relations are the edges in g. They represent the interactions between the various parts of the world model, from which the equations of the MSP are induced. The relations are directed, and the direction encodes semantics of the relation depending on the type of relation. Like containers, relations have properties. The properties and their arity also depend on the type of relation.
There are four types of relations: TRANSFER,
RATE, COMPARISON and PARTWHOLE. Together they span all four basic arithmetic operators
(+, −, ×, ÷). Next, we give a detailed description of each of these relation types. Examples of world models with each relation type are provided in App. A.
## 3.2.1 T**Ransfer**
TRANSFER relations model that a transfer of some quantity of an entity has occurred. A given container structure will either gain or lose quantity from a TRANSFER relation. For example, "Alice ate 3 apples" will correspond to a TRANSFER with a loss of 3 apples for the container labeled Alice.
A TRANSFER is always between two containers of the same structure. The direction of the edge describes order: The source container will hold the quantity *before* the transfer event occurred, and the target container will hold the quantity *after* the transfer event occurred.
In addition to quantity, TRANSFER takes the following two properties:
Recipient: The label of the container structure where the quantity of the given entity is *gained*.
Sender: The label of the container structure where the quantity of the given entity is *lost*.
A recipient, a sender or both must exist. TRANS-FER thus has arity 2 or 3. The TRANSFER relation either adds or subtracts the relation quantity to/from the source container quantity, depending on whether the relation connects the recipient containers or sender containers.
## 3.2.2 Rate
The RATE relation models mathematical rate between two quantities. These two quantities are held in two separate containers with the same label, and the ratio quantity of the rate is given as a property to the relation. RATE has this one single property.
The direction of the edge determines the relationship: The source container holds the numerator of the rate, and the target container holds the denominator of the rate. In the example in Fig. 1, the source container holds the entity *money* and the target container holds the entity toy, indicating that the rate quantity concerns *money per toy*. Mathematically, RATE implies that the source quantity divided by the relation quantity equals the target quantity.
3.2.3 C**OMPARISON**
COMPARISON is invoked when there is an explicit relationship between two quantities in the MSP. For example, "Alice is twice as old as Bob". The COMPARISON relation may be either between containers with different labels, such as "Alice has 3 more apples than Bob", or between containers with the same label, such as "Alice has 3 more red apples than she has green apples". It takes two properties; quantity and type:
Type: The arithmetic operation type COMPARI-SON. It can take one of the two values; add (indicating addition) or mul (indicating multiplication).
The quantity held in the source container is the one that is combined with the quantity of the COM-PARISON relation under the arithmetic operator, the output of which will be the quantity held in the target container. 3.2.4 PARTW**HOLE**
PARTWHOLE relations model set partitions. The set represented by some container is partitioned into subsets, each of which is represented by another container. For each of the subset containers (the parts), there is an outgoing edge to the container with the superset (the whole). Thus, PARTWHOLE implies that for a given container that has ingoing PARTWHOLE edges, the sum over the quantities in the source containers of those edges equals the quantity in the target container. Note that PARTWHOLE differs from the other relations in that it requires multiple edges to induce an equation.5In most cases, all containers involved in a PARTWHOLE relation will have the same label.
The relation can then be viewed as a relation between entities possessed by a specific label. For instance, "Alice has 3 red apples and 6 green apples, how many apples does she have in total?" would be represented by PARTWHOLE. PARTWHOLE
relations have no properties.
PARTWHOLE relations may represent meaning that is not explicit in text. Parsing the text of a problem that requires PARTWHOLE might thus lead to an incomplete (§ 3) world model, which may require additional assumptions. In addition, orienting PARTWHOLE relations might require commonsense knowledge. For instance, a problem might introduce a quantity for tables and a quantity for chairs, and ask about the total number of furniture.
## 3.3 World Model Equivalence And Similarity
One of the principal utilities of MATHWORLD is to allow for evaluating models on their reasoning ability. For that we need consistent equivalence notions and similarity metrics between world models, which we provide here.
Let g and g′ be **isomorphic** if there exists an isomorphism on the underlying graphs that additionally preserves relation types. We consider two forms of equivalence notions between world models, which we call strong and weak equivalence.
Weak equivalence deems two world models to be equal if they are isomorphic. Strong equivalence additionally requires all properties of the containers and relations to be equal.6In addition, we create two similarity scores based on the AMR metric smatch (Cai and Knight, 2013): Weak smatch considers graph topology in the same way as our isomorphism equivalence, and strong smatch additionally considers all properties of the world models.
We give details on these similarity scores in App. C.
## 3.4 Comparison To Other Logical Formalisms
MATHWORLD can be fully expressed in first-order logic (FOL). We provide a constructive proof in the form of a conversion in App. D, which enables comparison of the expressive power of MATHWORLD with that of other formalisms. Both AMR and MATHWORLD restrict the expressivity of full FOL in different ways. AMR provides a way to express negation (the polarity relation)
but does not provide a way to directly express universal quantification7(Bos, 2016). MATHWORLD
represents sets of objects as containers and enables universal quantification over those sets. This is restricted, however, as MATHWORLD does not allow the definition of sets of sets, or nested universal quantification.8 Negation is not directly expressible in MATHWORLD, as it is designed for the domain of MSPs where negation is quite rare.
MATHWORLD is more comparable to situation calculus (McCarthy, 1963), where each relation can be modeled as an action that changes the state of the world. Like situation calculus, the changing world state over time is implicitly represented in MATHWORLD (via the TRANSFER relation),
whereas in FOL, an explicit description of the time of each event is necessary.
## 4 Data Collection
In order to study how models are able to answer MSPs, convert them to logical form, perform world modeling, and reason mathematically to find the answer, we require a diverse dataset of labeled MSPs that spans all concepts covered by MATHWORLD.
To ensure diversity and wide variety in the examples, we collect them from numerous sources:
1. The math word repository MAWPS (KoncelKedziorski et al., 2016b) gathers several datasets
(Hosseini et al., 2014; Kushman et al., 2014; Koncel-Kedziorski et al., 2015; Roy and Roth, 2015), thus providing a wide variety of MSPs.
2. To complement with more challenging problems, we also adopt problems from ASDIV-A
(Miao et al., 2020), which was designed for linguistic diversity and math concept diversity.
| Train | Test | | | |
|---------|--------|-------|-----|-----|
| MSPs | LFs | MSPs | LFs | |
| ASDIV-A | 328 | 1,052 | 83 | 272 |
| MAWPS | 312 | 936 | 79 | 235 |
| SVAMP | 173 | 563 | 44 | 146 |
| TOTAL | 813 | 2,551 | 206 | 653 |
3. We also annotate a subset of the SVAMP dataset
(Patel et al., 2021), which was introduced as a challenge set to test robustness to data artifacts.
This enables future work to test the robustness of MATHWORLD parsers.
We randomly sample a subset from each of these three datasets,9and annotate them with world models. We obtain 1, 019 MSPs, which corresponds to 3, 204 logical forms, which we partition into 80/20 train/test splits. Table 2 provides more details.
We hire external workers for annotation. Annotation follows three phases: A first training phase where annotators are given several small sets at a time with follow-up discussion sessions, an agreement phase in which all annotators are given the same problems and a final scale-up phase. We use an annotation tool created specifically for this work (shown in App. E.2). The problems are annotated incrementally sentence-by-sentence, in order to match logical forms to sentences as described in § 3. Questions are hidden from annotators until all preceding sentences are completed, in order to avoid bias stemming from having read the question—MATHWORLD is meant to capture the world model of the problem irrespective of what is asked in the question. Within sentences, we ask annotators to add containers and relations according to the order in which they occur in text. This allows us to write the logical forms according to within-sentence order when creating training data for semantic parsing. We maintain this order with integer IDs that are incremented automatically in the annotation tool.
We performed an agreement analysis of 125 overlapping MSPs, revealing a high agreement rate considering the complexity of the annotation task. Concretely, 61 out of these 125 were strongly equivalent (§ 3.3) across annotators, and 107 were weakly equivalent (§ 3.3). Many of the only weakly equivalent annotations were due to ambiguity in the properties (App. B.1), and almost half of the 18 non-agreed problems were due to ambiguity in relation type (App. B.2). The strong and weak smatch scores were 0.91 and 0.97 respectively. These can be interpreted as approximate upper bounds on the smatch scores achievable by any model, due to the ambiguity in the dataset. Many of the annotation errors, also outside of the overlapping set, could be either corrected or discarded *ex post*. Further details on annotation are given in App. E.
## 5 Applications Of Mathw**Orld**
In this section we showcase some applications of MATHWORLD: solving (§ 5.1), probing of reasoning (§ 5.2) and generation of new MSPs (§ 5.3).
## 5.1 Parsing And Reasoning
We spell out a framework for solving MSPs using MATHWORLD. The framework consists of two components: A *parser* and a *reasoner*. The parser is tasked with assigning a faithful world model g to an input problem s, along with a reference variable r. The reasoner is then queried with r and computes an answer based on the induced equations of g. We also present a set of initial experiments, meant to introduce the task of MATHWORLD parsing to the community.
## 5.1.1 Parser
Given an MSP s, the task is to assign a world model g. The first step is to predict the sequence of logical forms m1*, . . . , m*n. We model this as a conditional distribution
$$p(m_{1},\ldots,m_{n}\mid\mathbf{s})=\prod_{i=1}^{n}p(m_{i}\mid s_{1},\ldots,s_{i}).\tag{1}$$
With this factorization, we can parse the graph incrementally one sentence at a time. The factorization is based on two assumptions:
mi ⊥ sj , ∀*i < j* and mi ⊥ mj , ∀i ̸= j. Both are aligned with MATHWORLD as outlined in § 3: the first assumption means that a logical form is independent of the sentences in subsequent steps, and the second assumption means that logical forms are independent of each other. Dependencies of logical forms on preceding sentences are kept due to coreferences, elliptical constructions and other inter-sentence dependencies.
As explained in § 3, the logical forms are linearized representations of the world model graphs.
Thus, our pipeline (as well as applications like those demonstrated in § 5) requires that we are able to convert from one representation to the other:
World model graphs must be converted to logical forms in order to create training data for a semantic parser, and the predicted logical forms must be converted to world model graphs and reference variables for visualization and reasoning. The details of this conversion are given in App. F.
5.1.2 Reasoner Once we have a world model graph, we apply a reasoning algorithm over the graph to compute an answer. The reasoner takes a world model and a reference variable, and outputs a numeric value for the reference variable r. Our implementation is deterministic and follows two steps. First, it extracts all equations induced by the world model (as described in § 3.2 and illustrated in App. A). Second, it solves for r using a recursive algorithm. Full pseudocode along with a discussion is presented in App. H.
10
## 5.1.3 Baseline Solving Experiments
We demonstrate our proposed modeling framework with a baseline semantic parser, in the form of a large language model that is supervised incontext. We use Codex (Chen et al., 2021), as language models trained on code have been previously shown to perform well on structured prediction tasks (Madaan et al., 2022; Drozdov et al., 2023).
The prompt contains 50 ground truth examples from MAWPS and ASDIV-A, and we evaluate the model on the test sets of MAWPS, ASDIVA and SVAMP. We also implement a rule-based baseline system, based on Hosseini et al. (2014).
Our results corroborate that this is a challenging task; for the least difficult dataset the model gets roughly one third of the problems correct, and predicts a complete world model for only slightly more than half of the problems. The rule-based baseline gets nearly no problems correct. Indeed, a model 10We note that annotated world models are not necessarily complete (def. in § 3). Annotators were requested to only build world models that represent what is made explicit in the text. Some problems may require additional background knowledge to build a complete world model.
| svamp-71 |
|------------------------------------|
| Kelly has 160 nintendo games. |
| Synthetic Container Question |
| Q: How many {attr} {ent}s does |
| {label} have? |
| A: {quantity} |
| How many will she have left if she |
| gives away 64 games? |
| Synthetic Relation Question |
| Q: How many {ent}s are transferred |
| from {sender}? |
| A: {quantity} |
![7_image_0.png](7_image_0.png)
must, for each sentence, produce well-formed logical forms that exhaustively and correctly capture the semantics in MATHWORLD, combine these into a world model and query the reasoner with the correct reference variable. One mistake in any of these steps may lead to an incorrect answer. With much progress and research interest in semantic parsing in recent years (Shin et al., 2021; Qiu et al.,
2022) there are several promising directions for improvement, and we invite the research community to help in developing stronger semantic parsers for this challenging task. Further details on the setup and results can be found in App. I.1.
## 5.2 Probing Llms' Partial Knowledge
World models enable us to study the reasoning ability of LLMs: Beyond just testing whether a model outputs the correct solution to an MSP, we can test whether the model follows a correct reasoning path and accurately builds world model representations.
Setup We design question and answer templates that are automatically filled based on information in the world model. Two examples of such templates are given in Fig. 2 and a list of all templates is given in App. I.3. By courtesy of the world model we know the true answer to each of these synthetic questions, enabling us to create prompts with question-answer pairs.
We experiment with three types of prompts, all displayed with full-length examples in Table 8:
(1) synth QA (all at once). We first include the
| from x MSPs | | |
|------------------------------|------|------|
| QA type | 0 | 1 |
| (1) synth QAs (all at once) | 70.8 | 71.8 |
| (2) synth QAs (sent by sent) | 71.3 | 78.6 |
| (3) original MSP QAs | 69.4 | 70.8 |
Table 3: Results obtained by GPT-3 in answering math story problems reported in accuracy percent. A larger increase in performance is observed when the synthetic question-answer pairs are presented at the relevant part of the text, rather than at the end.
complete problem text, followed by synthetic question and answer pairs related to some part of the text. We randomly sample two such pairs; (2) synth QA (sentence-by-sentence). We again sample two question-answer pairs at random, but in this setting they are imputed right after the sentence in which the answer to the question is given; (3) original MSP QA. Under this setting we do not include any synthetic question-answer pairs, only the original text. All prompts end with the MSP question that we aim to solve followed by "A:". We study both whether the synthetic questions help the model answer the MSP correctly, and how well the model answers the synthetic questions themselves.
Results We report results obtained by GPT-3
(Brown et al., 2020) on the combined test set of all three datasets in Table 3. The number of incontext examples is either 0 or 1. We observe increased performance when including synthetic question-answer pairs, particularly in setting (2)
where the questions are imputed at the relevant part of the MSP text. We hypothesize that doing so helps guide the reasoning trace of the model, in a similar vein as chain-of-thought prompting (Wei et al., 2022). Further, we find that GPT-2 (Radford et al., 2019), BART (Lewis et al., 2020), Codex
(Chen et al., 2021), T5 (Raffel et al., 2020) and NT5 (Yang et al., 2021) overall perform poorly, but benefit from an increase in performance when synthetic question-answer pairs are provided.
We further compare the ability of GPT-3 to answer the intermediate synthetic questions to its ability to answer the original final question. For each MSP, we first select a container or relation uniformly at random and then create a synthetic question. We then ask both the synthetic question and the original question at the end of two separate prompts in a zero-shot setting. Table 4 displays the results. Interestingly, in more than one third of the
| Synthetic Question | | |
|----------------------|---------|-------|
| Original Question | Correct | Wrong |
| Correct | 46.0% | 25.7% |
| Wrong | 11.0% | 17.3% |
![8_image_2.png](8_image_2.png)
cases that the model gets the original question right
(top row), it gets the intermediate synthetic question wrong (top right cell). Overall it also shows a higher accuracy on the original questions (top row) than the synthetic intermediate questions (left column). While some of these results could be explained by the nature of the templated questions, it does seem to indicate that the model makes use of heuristics rather than human-like reasoning when solving MSPs (Patel et al., 2021).
## 5.3 Generation Of Msps
MATHWORLD can be considered as a space under which a practitioner can design new MSPs with certain desired features. For instance, a teacher may be interested in generating variations of an MSP to test a specific mathematical concept with a specific unknown variable. To demonstrate the potential for such applications we provide a small proof-of-concept experiment.
Setup We use GPT 3.5 Turbo (Ouyang et al.,
2022) with a prompt of 30 examples from the train sets of MAWPS and ASDIV-A. One example consists of the logical forms for a full MSP world model (source) followed by the text of the MSP (target). We separate sentence-aligned logical forms in the source as well as the sentences in the target by a marker, so that the model can pick up the alignment patterns. The ground truth examples are sampled randomly. To generate a new MSP conditioned on a world model, we append the logical form corresponding to the world model to the end of the prompt. We try generating new MSPs both based on (i) world models present in our annotated test sets (paraphrasing) and (ii) manual augmentations of annotated world models. We perform evaluation for setting (i) using SacreBLEU (Post, 2018) and BERTScore (Zhang et al., 2020), comparing all MSPs in the test sets to their paraphrases.11 11More details on the generation setup are given in App. I.2.
![8_image_0.png](8_image_0.png)
generate generate
![8_image_1.png](8_image_1.png)
Results We obtain SacreBLEU scores of 66.73, 40.86 and 26.02 and F1 BERTScores of 0.933, 0.930 and 0.931 for MAWPS, ASDIV-A and SVAMP respectively. Qualitatively we observe that the generated MSPs mostly stay faithful to the logical forms but tend to be shorter and less linguistically complex than the original problems, which would explain the comparatively low SacreBLEU
scores in comparison to the BERTScores. Further, we give the first six examples we generated according to the described setup. One of them is shown in Fig. 3. The model generates an output MSP
very similar to the original, having only accessed the original's ground truth logical forms. We further augment the original world model by changing the TRANSFER to a RATE. Note how the generated MSP is faithful to the augmented world model. The other five examples are shown in Table 6.
## 6 Conclusion
In this work, we have presented a novel formalism, MATHWORLD, for expressing the semantics of math story problems. We have annotated a MATHWORLD corpus consisting of 1, 019 problems and 3, 204 logical forms. A world model derived from MATHWORLD exposes the structure of the reasoning process needed to solve the problem, which benefits several applications as we have demonstrated in § 5. As such, we hope that MATHWORLD
will promote use cases beyond just improved MSP
solving, ranging from automated chain-of-thought prompting to math problem generation.
## Limitations
MATHWORLD is limited to cover math story problems using the four basic arithmetic operators. Furthermore, within the space of such problems, it does not cover "second-order" MSPs (as discussed in § 3.4). Neither does it cover negation nor inequalities.
We only consider datasets with MSPs written in English in this work. However, MATHWORLD
should in principle be able to cover the same type of problems formulated in other languages as well.
An obvious limitation of this work is the low performance on the task of solving MSPs. The focus of this work is to introduce the world model formalism and its use cases, and we leave for future work to build stronger MATHWORLD parsers.
## Ethics Statement
We foresee no major ethical concerns with this work. The introduction of MATHWORLD is aimed at improving the interpretability and robustness of existing and future models for math story problem solving. On this account, we hope to contribute to identifying (and hopefully reducing) existing biases in pre-trained language models, or any future alternatives. However, we would like to caution that the formalism could be used to generate inappropriate math story problems.
## Acknowledgements
We thank Arnav Mishra, Aryaman Kolhe, Devraj Thakur, Gaurav Saini and Soham Bopardikar for help with annotation work. We further thank Jakub Macina, Kumar Shridhar and Menna El-Assady for input in the early stages of the project, Ethan Wilcox and Ying Jiao for helpful feedback, and Yixiong Wang for help in implementation of a symbolic baseline solver. Andreas Opedal is partially supported by the Max Planck ETH Center for Learning Systems. Niklas Stoehr acknowledges funding from the Swiss Data Science Center.
## References
Aida Amini, Saadia Gabriel, Shanchuan Lin, Rik Koncel-Kedziorski, Yejin Choi, and Hannaneh Hajishirzi. 2019. MathQA: Towards interpretable math word problem solving with operation-based formalisms. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages
2357–2367, Minneapolis, Minnesota. Association for Computational Linguistics.
John R. Anderson, Albert T. Corbett, Kenneth R.
Koedinger, and Ray. Pelletier. 1995. Cognitive tutors:
Lessons learned. *Journal of the Learning Sciences*,
4(2):167–207.
Xuefeng Bai, Sen Yang, Leyang Cui, Linfeng Song, and Yue Zhang. 2022. Cross-domain generalization for AMR parsing. In *Proceedings of the 2022 Conference on Empirical Methods in Natural Language* Processing, pages 10907–10921, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
Laura Banarescu, Claire Bonial, Shu Cai, Madalina Georgescu, Kira Griffitt, Ulf Hermjakob, Kevin Knight, Philipp Koehn, Martha Palmer, and Nathan Schneider. 2013. Abstract meaning representation for sembanking. In *Proceedings of the 7th Linguistic* Annotation Workshop and Interoperability with Discourse, pages 178–186, Sofia, Bulgaria. Association for Computational Linguistics.
Emily M. Bender and Alexander Koller. 2020. Climbing towards NLU: On meaning, form, and understanding in the age of data. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5185–5198, Online. Association for Computational Linguistics.
Johan Bos. 2016. Squib: Expressive power of abstract meaning representations. *Computational Linguistics*,
42(3):527–535.
Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel Ziegler, Jeffrey Wu, Clemens Winter, Chris Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020.
Language models are few-shot learners. In *Advances in Neural Information Processing Systems*,
volume 33, pages 1877–1901. Curran Associates, Inc.
Deng Cai and Wai Lam. 2019. Core semantic first: A
top-down approach for AMR parsing. In *Proceedings of the 2019 Conference on Empirical Methods* in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3799–3809, Hong Kong, China. Association for Computational Linguistics.
Shu Cai and Kevin Knight. 2013. Smatch: An evaluation metric for semantic feature structures. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 748–752, Sofia, Bulgaria. Association for Computational Linguistics.
Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde de Oliveira Pinto, Jared Kaplan, Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, Alex Ray, Raul Puri, Gretchen Krueger, Michael Petrov, Heidy Khlaaf, Girish Sastry, Pamela Mishkin, Brooke Chan, Scott Gray, Nick Ryder, Mikhail Pavlov, Alethea Power, Lukasz Kaiser, Mohammad Bavarian, Clemens Winter, Philippe Tillet, Felipe Petroski Such, Dave Cummings, Matthias Plappert, Fotios Chantzis, Elizabeth Barnes, Ariel Herbert-Voss, William Hebgen Guss, Alex Nichol, Alex Paino, Nikolas Tezak, Jie Tang, Igor Babuschkin, Suchir Balaji, Shantanu Jain, William Saunders, Christopher Hesse, Andrew N.
Carr, Jan Leike, Josh Achiam, Vedant Misra, Evan Morikawa, Alec Radford, Matthew Knight, Miles Brundage, Mira Murati, Katie Mayer, Peter Welinder, Bob McGrew, Dario Amodei, Sam McCandlish, Ilya Sutskever, and Wojciech Zaremba. 2021. Evaluating large language models trained on code.
Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, Parker Schuh, Kensen Shi, Sasha Tsvyashchenko, Joshua Maynez, Abhishek Rao, Parker Barnes, Yi Tay, Noam Shazeer, Vinodkumar Prabhakaran, Emily Reif, Nan Du, Ben Hutchinson, Reiner Pope, James Bradbury, Jacob Austin, Michael Isard, Guy Gur-Ari, Pengcheng Yin, Toju Duke, Anselm Levskaya, Sanjay Ghemawat, Sunipa Dev, Henryk Michalewski, Xavier Garcia, Vedant Misra, Kevin Robinson, Liam Fedus, Denny Zhou, Daphne Ippolito, David Luan, Hyeontaek Lim, Barret Zoph, Alexander Spiridonov, Ryan Sepassi, David Dohan, Shivani Agrawal, Mark Omernick, Andrew M. Dai, Thanumalayan Sankaranarayana Pillai, Marie Pellat, Aitor Lewkowycz, Erica Moreira, Rewon Child, Oleksandr Polozov, Katherine Lee, Zongwei Zhou, Xuezhi Wang, Brennan Saeta, Mark Diaz, Orhan Firat, Michele Catasta, Jason Wei, Kathy Meier-Hellstern, Douglas Eck, Jeff Dean, Slav Petrov, and Noah Fiedel. 2022. Palm: Scaling language modeling with pathways.
Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Jacob Hilton, Reiichiro Nakano, Christopher Hesse, and John Schulman. 2021. Training verifiers to solve math word problems. *CoRR*, abs/2110.14168.
Denise Dellarosa Cummins, Walter Kintsch, Kurt Reusser, and Rhonda Weimer. 1988. The role of understanding in solving word problems. *Cognitive* Psychology, 20(4):405–438.
Iddo Drori, Sarah Zhang, Reece Shuttleworth, Leonard Tang, Albert Lu, Elizabeth Ke, Kevin Liu, Linda Chen, Sunny Tran, Newman Cheng, Roman Wang, Nikhil Singh, Taylor L. Patti, Jayson Lynch, Avi Shporer, Nakul Verma, Eugene Wu, and Gilbert Strang. 2022. A neural network solves, explains, and generates university math problems by program synthesis and few-shot learning at human level.
Proceedings of the National Academy of Sciences, 119(32):e2123433119.
Andrew Drozdov, Nathanael Schärli, Ekin Akyürek, Nathan Scales, Xinying Song, Xinyun Chen, Olivier Bousquet, and Denny Zhou. 2023. Compositional semantic parsing with large language models. In The Eleventh International Conference on Learning Representations.
Davide Fossati. 2008. The role of positive feedback in Intelligent Tutoring Systems. In Proceedings of the ACL-08: HLT Student Research Workshop, pages 31–
36, Columbus, Ohio. Association for Computational Linguistics.
Mohammad Javad Hosseini, Hannaneh Hajishirzi, Oren Etzioni, and Nate Kushman. 2014. Learning to solve arithmetic word problems with verb categorization.
In *Proceedings of the 2014 Conference on Empirical* Methods in Natural Language Processing (EMNLP),
pages 523–533, Doha, Qatar. Association for Computational Linguistics.
Danqing Huang, Shuming Shi, Chin-Yew Lin, and Jian Yin. 2017. Learning fine-grained expressions to solve math word problems. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 805–814, Copenhagen, Denmark. Association for Computational Linguistics.
Philip Nicholas Johnson-Laird. 1983. *Mental models*
: towards a cognitive science of language, inference and consciousness. Cognitive science series 6. Harvard University Press, Cambridge, Massachusetts.
Walter Kintsch and James G. Greeno. 1985. Understanding and solving word arithmetic problems. *Psychological Review*, 92(1):109–129.
Nikita Kitaev, Steven Cao, and Dan Klein. 2019. Multilingual constituency parsing with self-attention and pre-training. In *Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics*,
pages 3499–3505, Florence, Italy. Association for Computational Linguistics.
Nikita Kitaev and Dan Klein. 2018. Constituency parsing with a self-attentive encoder. In *Proceedings* of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers),
pages 2676–2686, Melbourne, Australia. Association for Computational Linguistics.
Rik Koncel-Kedziorski, Hannaneh Hajishirzi, Ashish Sabharwal, Oren Etzioni, and Siena Dumas Ang.
2015. Parsing algebraic word problems into equations. *Transactions of the Association for Computational Linguistics*, 3:585–597.
Rik Koncel-Kedziorski, Ioannis Konstas, Luke Zettlemoyer, and Hannaneh Hajishirzi. 2016a. A themerewriting approach for generating algebra word problems. In *Proceedings of the 2016 Conference on* Empirical Methods in Natural Language Processing, pages 1617–1628, Austin, Texas. Association for Computational Linguistics.
Rik Koncel-Kedziorski, Subhro Roy, Aida Amini, Nate Kushman, and Hannaneh Hajishirzi. 2016b.
MAWPS: A math word problem repository. In *Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational* Linguistics: Human Language Technologies, pages 1152–1157, San Diego, California. Association for Computational Linguistics.
Robert Koons. 2022. Defeasible reasoning. In Edward N. Zalta, editor, The Stanford Encyclopedia of Philosophy, Summer 2022 edition. Metaphysics Research Lab, Stanford University.
Nate Kushman, Yoav Artzi, Luke Zettlemoyer, and Regina Barzilay. 2014. Learning to automatically solve algebra word problems. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 271–281, Baltimore, Maryland. Association for Computational Linguistics.
Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020.
BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. pages 7871–7880.
Aitor Lewkowycz, Anders Johan Andreassen, David Dohan, Ethan Dyer, Henryk Michalewski, Vinay Venkatesh Ramasesh, Ambrose Slone, Cem Anil, Imanol Schlag, Theo Gutman-Solo, Yuhuai Wu, Behnam Neyshabur, Guy Gur-Ari, and Vedant Misra. 2022. Solving quantitative reasoning problems with language models. In Advances in Neural Information Processing Systems.
Aman Madaan, Shuyan Zhou, Uri Alon, Yiming Yang, and Graham Neubig. 2022. Language models of code are few-shot commonsense learners. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 1384–1403, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
John McCarthy. 1963. Situations, actions, and causal laws. In *Stanford Artificial Intelligence Laboratory* and Memo (Stanford Artificial Intelligence Laboratory).
Shen-yun Miao, Chao-Chun Liang, and Keh-Yih Su.
2020. A diverse corpus for evaluating and developing English math word problem solvers. In *Proceedings* of the 58th Annual Meeting of the Association for Computational Linguistics, pages 975–984, Online.
Association for Computational Linguistics.
Arindam Mitra and Chitta Baral. 2016. Learning to use formulas to solve simple arithmetic problems. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2144–2153, Berlin, Germany.
Association for Computational Linguistics.
Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, John Schulman, Jacob Hilton, Fraser Kelton, Luke Miller, Maddie Simens, Amanda Askell, Peter Welinder, Paul Christiano, Jan Leike, and Ryan Lowe. 2022.
Training language models to follow instructions with human feedback.
Arkil Patel, Satwik Bhattamishra, and Navin Goyal.
2021. Are NLP models really able to solve simple math word problems? In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 2080–2094, Online.
Association for Computational Linguistics.
Oleksandr Polozov, Eleanor O'Rourke, Adam M.
Smith, Luke Zettlemoyer, Sumit Gulwani, and Zoran Popovic. 2015. Personalized mathematical word problem generation. In *IJCAI*.
Matt Post. 2018. A call for clarity in reporting BLEU
scores. In *Proceedings of the Third Conference on* Machine Translation: Research Papers, pages 186–
191, Brussels, Belgium. Association for Computational Linguistics.
Linlu Qiu, Peter Shaw, Panupong Pasupat, Pawel Nowak, Tal Linzen, Fei Sha, and Kristina Toutanova. 2022. Improving compositional generalization with latent structure and data augmentation. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4341–4362, Seattle, United States. Association for Computational Linguistics.
Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language Models are Unsupervised Multitask Learners.
Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. *Journal of Machine Learning Research*,
21(140):1–67.
Raymond Reiter. 1991. The Frame Problem in Situation the Calculus: A Simple Solution (Sometimes) and a Completeness Result for Goal Regression, page 359–380. Academic Press Professional, Inc., USA.
Subhro Roy and Dan Roth. 2015. Solving general arithmetic word problems. In *Proceedings of the 2015* Conference on Empirical Methods in Natural Language Processing, pages 1743–1752, Lisbon, Portugal. Association for Computational Linguistics.
Subhro Roy and Dan Roth. 2017. Unit dependency graph and its application to arithmetic word problem solving. In *Proceedings of the Thirty-First AAAI*
Conference on Artificial Intelligence, AAAI'17, page 3082–3088. AAAI Press.
Subhro Roy and Dan Roth. 2018. Mapping to declarative knowledge for word problem solving. *Transactions of the Association for Computational Linguistics*, 6:159–172.
Abulhair Saparov and He He. 2023. Language models are greedy reasoners: A systematic formal analysis of chain-of-thought. In International Conference on Learning Representations (ICLR).
Abulhair Saparov and Tom M. Mitchell. 2022. Towards general natural language understanding with probabilistic worldbuilding. *Transactions of the Association for Computational Linguistics*, 10:325–342.
Richard Shin, Christopher Lin, Sam Thomson, Charles Chen, Subhro Roy, Emmanouil Antonios Platanios, Adam Pauls, Dan Klein, Jason Eisner, and Benjamin Van Durme. 2021. Constrained language models yield few-shot semantic parsers. In *Proceedings of* the 2021 Conference on Empirical Methods in Natural Language Processing, pages 7699–7715, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Kumar Shridhar, Jakub Macina, Mennatallah El-Assady, Tanmay Sinha, Manu Kapur, and Mrinmaya Sachan.
2022. Automatic generation of socratic subquestions for teaching math word problems. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 4136–4149, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
Kumar Shridhar, Alessandro Stolfo, and Mrinmaya Sachan. 2023. Distilling reasoning capabilities into smaller language models. In *Findings of the Association for Computational Linguistics: ACL 2023*,
Toronto, Canada.
Josep M. Sopena, Agusti LLoberas, and Joan L. Moliner. 1998. A connectionist approach to prepositional phrase attachment for real world texts. In 36th Annual Meeting of the Association for Computational Linguistics and 17th International Conference on Computational Linguistics, Volume 2, pages 1233–
1237, Montreal, Quebec, Canada. Association for Computational Linguistics.
Megha Srivastava and Noah Goodman. 2021. Question generation for adaptive education. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing
(Volume 2: Short Papers), pages 692–701, Online.
Association for Computational Linguistics.
Elsbeth Stern. 1993. What makes certain arithmetic word problems involving the comparison of sets so difficult for children? *Journal of Educational Psychology*, 85:7–23.
Alessandro Stolfo, Zhijing Jin, Kumar Shridhar, Bernhard Schölkopf, and Mrinmaya Sachan. 2023. A
causal framework to quantify the robustness of mathematical reasoning with language models. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics, Toronto, Canada.
Oyvind Tafjord, Bhavana Dalvi, and Peter Clark. 2021.
ProofWriter: Generating implications, proofs, and abductive statements over natural language. In *Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021*, pages 3621–3634, Online.
Association for Computational Linguistics.
Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Ed Chi, Quoc Le, and Denny Zhou. 2022.
Chain of thought prompting elicits reasoning in large language models. volume 2201.11903. arXiv.
Zhipeng Xie and Shichao Sun. 2019. A goal-driven tree-structured neural model for math word problems.
In *Proceedings of the Twenty-Eighth International* Joint Conference on Artificial Intelligence, IJCAI-19, pages 5299–5305. International Joint Conferences on Artificial Intelligence Organization.
Peng-Jian Yang, Ying Ting Chen, Yuechan Chen, and Daniel Cer. 2021. NT5?! Training T5 to perform numerical reasoning. *arXiv*, 2104.07307.
Zhicheng Yang, Jinghui Qin, Jiaqi Chen, Liang Lin, and Xiaodan Liang. 2022. LogicSolver: Towards interpretable math word problem solving with logical prompt-enhanced learning. In Findings of the Association for Computational Linguistics: EMNLP
2022, pages 1–13, Abu Dhabi, United Arab Emirates.
Association for Computational Linguistics.
Sheng Zhang, Xutai Ma, Kevin Duh, and Benjamin Van Durme. 2019. AMR parsing as sequence-tograph transduction. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 80–94, Florence, Italy. Association for Computational Linguistics.
Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q.
Weinberger, and Yoav Artzi. 2020. Bertscore: Evaluating text generation with bert. In International Conference on Learning Representations.
Denny Zhou, Nathanael Schärli, Le Hou, Jason Wei, Nathan Scales, Xuezhi Wang, Dale Schuurmans, Olivier Bousquet, Quoc Le, and Ed Chi. 2023. Leastto-most prompting enables complex reasoning in large language models. In *International Conference* on Learning Representations (ICLR).
Yujian Zhou, Reva Freedman, Michael Glass, Joel A.
Michael, Allen A. Rovick, and Martha W. Evens.
1999. Delivering hints in a dialogue-based intelligent tutoring system. In Proceedings of the Sixteenth National Conference on Artificial Intelligence and the Eleventh Innovative Applications of Artificial Intelligence Conference Innovative Applications of Artificial Intelligence, AAAI '99/IAAI '99, page 128–134, USA. American Association for Artificial Intelligence.
![13_image_0.png](13_image_0.png)
Figure 4: Example of a world model using TRANSFER.
## A Mathworld **Examples** A.1 T**Ransfer** Consider The Following Problem:
The school cafeteria had 14 apples. If they used 13 to make lunch for the students and then bought 49 more, how many apples would they have?
We display the corresponding world model in Fig. 4. The first sentence will correspond to a container for *school cafeteria* that holds 14 of entity apple. The second sentence describes two transfers: a first one where the school cafeteria is the sender of 13 apples, and a second one where the school cafeteria is the recipient of 49 apples. We get two equations:
The school cafeteria had 14 apples. If they used 13 to make lunch for the students
and then bought 49 more, how many apples would they have?
$$14-13=x_{1}$$ $$x_{1}+49=x_{2}$$
The question asks for how many apples the school cafeteria has in the end, which matches the container holding the variable x2 in the world model.
Although the TRANSFER relation always connects two containers of the same structure in the graph, a transfer event may occur between two containers of different structure. For example, "Alice gives 3 apples to Bob" describes a transfer event with Alice losing 3 apples and Bob gaining 3 apples. In these cases, we need two edges with the same properties in the world model; one for Alice's containers and one for Bob's containers (see Fig. 5). Consider the following problem with a transfer event occurring between two different possessors:
Alice has 7 apples and Bob has 4 apples. Alice gives 3 apples to Bob. How many apples does Bob have now?
We show the corresponding world model in Fig. 5. *Alice* and Bob are represented by two separate containers, which are both updated by the same transfer event.
## A.2 Rate
Consider the following problem:
![13_image_1.png](13_image_1.png)
![13_image_2.png](13_image_2.png)
![13_image_3.png](13_image_3.png)
![13_image_4.png](13_image_4.png)
Figure 5: Example of a world model using TRANSFER.
Lansing has 25 elementary schools. There are 247 students in each school. How many elementary students are there altogether in Lansing?
| Lansing | Lansing |
|------------------------------------------|------------------------------|
| Entity: elem school Quantity: 25 | Entity: student Quantity: x1 |
| Rate(247, student, elementary school) | |
Figure 6: Example of a world model using RATE.
Lansing has 25 elementary schools. There are 247 students in each school. How many elementary students are there altogether in Lansing?
(2) (3) $\frac{1}{2}$
This is a rate problem, as we get a rate on the number of students per elementary schools in the second sentence. The relation induces the following equation:
$${\frac{x_{1}}{25}}=247$$
$$(4)$$
The question asks for the total number of students in Lansing, which corresponds to the quantity in the container that holds the entity *student*.
## A.3 C**Omparison**
Consider the following problem:
James has 232 balloons. Amy has 101 balloons.
How many more balloons does James have than Amy?
The first two sentences will correspond to two containers, representing the number of balloons possessed by James and Amy respectively. In the James has 232 balloons. Amy has 101 balloons. How many more balloons does James have than Amy?
![13_image_5.png](13_image_5.png)
| Amy Entity: balloon Quantity: 101 |
|-------------------------------------|
Figure 7: Example of a world model using COMPARI-SON.
![14_image_0.png](14_image_0.png)
question sentence, we get information about an COMPARISON relation between these two containers, with properties x1 and add. Since we need to add the balloons in Amy's container to get the number of balloons in James' container, the edge is directed outwards from Amy's container. This relation induces the following equation:
$$101+x_{1}=232$$
101 + x1 = 232 (5)
The world model is displayed in Fig.$~$7.
## A.4 Partw**Hole** Consider The Following Problem:
Gavin has 23 shirts. 6 are blue the rest are green.
How many green shirts does Gavin have?
The first sentence will correspond to a container for Gavin holding the quantity of his shirts. The part-whole information is introduced in the second sentence, in which the 6 refers to shirts in the previous sentence (via an elliptical construction), and
"the rest" tells us we have an additional complementing part of green shirts. Hence, the second sentence is assigned two new containers with attributes *blue* and *green*, as well as PARTWHOLE
relations from both of these containers to the whole container introduced in the first sentence. This leads to the following equation:
$$6+x_{1}=23$$
6 + x1 = 23 (6)
The reference variable is the quantity in the container holding Gavin's green shirts. See Fig. 8 for the world model.
## B Ambiguity
Ambiguity occurs when the same problem text may be assigned multiple correct and faithful world models. We distinguish between two types of ambiguity for MATHWORLD: **property ambiguity**
and **structural ambiguity**.
## B.1 Property Ambiguity
Property ambiguity concerns cases where there are multiple possible properties to containers and/or relations that yield a semantically faithful world model. For instance, it is ambiguous whether "carrot sticks" is to be interpreted as an entity *carrot* stick, as entity *carrot* with unit *stick*, or as entity stick with attribute *carrot*. Property ambiguity may also follow from syntactic ambiguity in the problem text.
## B.2 Structural Ambiguity
$$({\boldsymbol{\mathfrak{S}}})$$
Structural ambiguity occurs when the topology, including relation types, differs between several correct and faithful world models for a given problem.
Consider the following example:
James ate 22 carrot sticks before dinner and 15 more after dinner. How many carrot sticks did he eat?
This problem could be modeled either with TRANSFER or PARTWHOLE. In the case of TRANSFER, we view James as possessing some quantity of carrot sticks to start with. He then eats 22 of these, which can be viewed as a TRANSFER
where James is the sender. This TRANSFER relation will be an outgoing edge into a new updated container for James' carrots. Another TRANSFER
occurs for the 15 carrot sticks he ate after dinner.
The reference variable would then be the variable held in the first container - how many carrot sticks James had initially. See Fig. 9 for the world model.
Note that such a world model is not sufficient for solving the problem without further assumptions, it requires defeasible reasoning (Koons, 2022). We must assume that James had no carrot sticks after having eaten the ones post dinner, corresponding to the third container holding quantity 0, in order for the world model to be complete.12
![14_image_1.png](14_image_1.png)
Figure 9: World model with a Transfer interpretation.
Another possibility would be with PARTWHOLE.
With PARTWHOLE, we take the static view of James possessing 22 carrot sticks before dinner and 15 carrot sticks after dinner, assigning a container for each. The question statement gives us the information that we are asking for the total number of carrot sticks, which would be parsed with 12An alternative would be to augment r to handle expressions, giving r = 22+15. This would involve a more complex linarization scheme than that described in App. F.1 however.
PARTWHOLE to a container with the total. The reference will refer to the variable in this latter container. In contrast to the TRANSFER interpretation, the PARTWHOLE interpretation does not require additional assumptions to create a complete world model. See Fig. 10.
![15_image_0.png](15_image_0.png)
## C Similarity Scores
In this section, we describe how we adapt smatch
(Cai and Knight, 2013) for measuring similarity between world model graphs. We express the world models as conjunctions over logical triples. We label all containers and relations with a unique variable, and denote that such a variable is an instance of a container or one of the five relation types with the triple instance(variable, type). Containers are represented as arguments to the relations in the form of source and destination, which are non-core roles in AMR.13 For instance, a container c being the source node of relation r is represented as source(r, c). The topology smatch score of two world models is then computed by taking the maximum f-score over one-to-one variable mappings between the two world models, as in Cai and Knight (2013).
The full semantic smatch score is computed in the same way, with the addition of logical triples for all the container and relation properties. We define core argument roles for the containers and each of the relation types. For instance, ARG0 of a container will be its entity. The entity *apple* belonging to container c will be represented by two logical triples instance(e, apple) and ARG0(c, e).
## D Conversion To First-Order Logic
In this section, we define a function to convert world model graphs into an equivalent FOL expres-13We refer to the AMR guidelines for more information:
https://github.com/amrisi/amr-guidelines
## Sion. D.1 Describing Quantities
Before introducing the conversion function, we first present a way in which quantities are described in FOL, as a preliminary. We define the Measure predicate, which is used to describe the "size" of a set. The set may contain countable entities such as "8 balloons" or uncountable entities such as "10 grams of coffee," and Measure is used to specify both types of quantities.
We introduce axioms to enable mathematical reasoning over the Measure predicate. If the measure of a set is a cardinal number (as in "8 balloons"),
then it is the cardinality of that set:
$$\begin{array}{c}{{\forall x\forall m({\tt Measure}(x,m)\wedge m\in\{0,1,\ldots\}}}\\ {{\qquad\qquad\leftrightarrow{\tt Cardinality}(x,m)).}}\end{array}$$
For example, if a set x contains 8 elements, we write Measure(x, 8). We also define the additivity of measures:
$$\begin{array}{c}{{\forall x\forall y\forall m_{x}\forall m_{y}(x\cap y=\varnothing}}\\ {{\qquad\land\textsf{Measure}(x,m_{x})\,\land\textsf{Measure}(y,m_{y})}}\\ {{\qquad\to\textsf{Measure}(x\cup y,m_{x}+m_{y})).}}\end{array}$$
That is, for any disjoint sets, the measure of their union is equal to the sum of their measures. To describe the size of sets containing uncountable entities (as in "10 grams of coffee"), we use the Quantity predicate. For example, if a set x contains 10 grams, we write Measure(x, Quantity(10, Gram)). To enable reasoning over such measures, we define the following axiom:
$\forall x\forall y\forall u(\text{Quantity}(x,u)+\text{Quantity}(y,u))$ $=\text{Quantity}(x+y,u))$.
That is, quantities may be summed if they share the same units. Subtraction of quantities is defined similarly. Further axioms can be defined to allow conversions between units, such as:
$$x,{\tt M i l i l i t e r})$$
∀x(Quantity(x, Milliliter)
$$={\mathrm{Quantity}}(x/1000,{\mathrm{Litter}})).$$
## D.2 Conversion Function
Let g = (*V, E*) be world model graph consisting of a set of containers V (i.e. vertices) and relations E
(i.e. edges). Let E¯ ⊆ E be the subset of relations 9103 that do not have type PARTWHOLE (for which the semantics of the edges are not independent and thus need to be treated separately). Recall that the world model may also contain variables, which represent unknown quantities. Let U be the set of these variables. We can define a function ∥g∥ that converts g into an equivalent FOL expression.
$\|g\|=\exists v_{1}\ldots\exists v_{|V|}\exists e_{1}\ldots\exists e_{|E|}\exists u_{1}\ldots\exists u_{|U|}($ $\|V_{1}\|\wedge\ldots\wedge\|V_{|V|}\|\wedge\|\bar{E}_{1}\|\wedge\ldots\wedge\|\bar{E}_{|\bar{E}|}\|)$. **D.2.1 Converting containers**
Recall that each container in the world model Vi ∈
V is labeled with a set of properties: the label
(denote as Li), entity (Ei), quantity (Qi), attribute
(Ai), and unit (Ui). Note that the unit property is optional depending on whether the entity Eiis countable or not. If the entity is countable, the container is mapped to a definition of a set:
$$\|V_{i}\|={\mathsf{O w n e r}}(v_{i},{\mathcal{L}}_{i})$$
$\Lambda$Measure($v_{i}$, $\|\mathcal{Q}_{i}\|$) $\Lambda$$\forall x\in v_{i}(\mathcal{E}_{i}(x)\wedge\mathcal{A}_{i}(x))\wedge\|E^{\mathsf{PN},i}\|$, $\Lambda$$\forall x\in v_{i}(\mathcal{E}_{i}(x)\wedge\mathcal{A}_{i}(x))\wedge\|E^{\mathsf{PN},i}\|$, $\Lambda$$\forall x\in v_{i}(\mathcal{E}_{i}(x)\wedge\mathcal{A}_{i}(x))\wedge\|E^{\mathsf{PN},i}\|$,
where E*PW,i* ⊆ E is the set of edges of type PARTWHOLE whose target vertex is i. Otherwise, if the entity is uncountable:
$||V_{i}||=\text{Owner}(v_{i},\mathcal{L}_{i})\wedge\mathcal{E}_{i}(v_{i})\wedge\mathcal{A}_{i}(v_{i})$ $\wedge\text{Measure}(v_{i},\text{Quantity}(||\mathcal{Q}_{i}||,\mathcal{U}_{i}))$ $\wedge\|E^{\text{PN},i}\|$.
Note that the attribute and unit properties may be omitted, and if the container viis missing a property, the corresponding conjunct is omitted as well
(e.g., if the container is missing an attribute property, the conjunct Ai(·) is omitted). Each quantity Qiis mapped as follows:
$$\|\mathcal{Q}_{i}\|=\begin{cases}\mathcal{Q}_{i},&\text{if}\mathcal{Q}_{i}\in\mathbb{R},\\ u_{j},&\text{if}\mathcal{Q}_{i}=x_{j}\text{for some}x_{j}\in U.\end{cases}$$ Unlike other relations, the semantics of
PARTWHOLE edges are not independent of
each other, and so we define them here as a special
case:
∥E
PW,i∥ = PartWhole({vs1
, vs2
, . . .}, vi),
where sj is the index of the source vertex of the edge E
PW,i j, and so {vs1
, vs2
, . . .} is the set of the source vertices of the PARTWHOLE edges with target vertex i. In section D.3, we provide axioms that define the semantics of each relation, including PartWhole.
## D.2.2 Converting Relations
Each relation E¯i ∈ E¯ is also converted into a conjunction. Let si be the index of the source vertex of E¯i, and similarly let ti be the index of the target vertex.
If the edge E¯iis labeled as TRANSFER, it may have the following properties: the sender (denote as Si), recipient (Ri), entity (Ei), quantity (Qi),
attribute (Ai), and unit (Ui). Similarly to containers, the entities in relations may be countable or uncountable. For brevity, we only show the conversion for the case where the entities are countable, but the conversion of uncountable quantities mirrors that shown for containers above. In this case, the TRANSFER edge is converted:
$$\|{\bar{E}}_{i}\|=\mathsf{T r a n s f e r}(e_{i})$$
$$\begin{array}{l}{{\varepsilon_{i}||=\mathrm{~transr}(e_{i})}}\\ {{\land\mathrm{~Source}(e_{i},v_{s_{i}})\land\mathrm{~Target}(e_{i},v_{t_{i}})}}\\ {{\land\mathrm{~Sender}(e_{i},{\mathcal{S}}_{i})\land\mathrm{~Recipient}(e_{i},{\mathcal{R}}_{i})}}\\ {{\land\ \exists r(\mathrm{Arg}(e_{i},r)\land\mathrm{~Measure}(r,||{\mathcal{Q}}_{i}||)}}\\ {{\land\ \forall x\in r({\mathcal{E}}_{i}(x)\land{\mathcal{A}}_{i}(x))).}}\end{array}$$
If the edge E¯iis labeled as RATE, it may have the following properties: the entity (Ei), quantity
(Qi), attribute (Ai), and unit (U). Then, the edge is converted:
∥E¯i∥ = Rate(ei)
$\wedge$$\mathsf{Source}(e_{i},v_{s_{i}})$$\wedge$$\mathsf{Target}(e_{i},v_{t_{i}})$ $\wedge$$\exists r(\mathsf{Arg}(e_{i},r)$$\wedge$$\forall y\in r(\mathsf{Measure}(y,\|\mathcal{Q}_{i}\|))$ $\wedge$$\forall x\in y(\mathcal{E}_{i}(x)$$\wedge$$\mathcal{A}_{i}(x))))$.
Finally, if the edge E¯iis labeled as COMPARI-SON, it may have the following properties: the type
(Ti ∈ {Add, Mul}), quantity (Qi), and unit (Ui).
Then, the edge is converted:
∥E¯i∥ = ComparisonTi(ei) ∧ Source(ei, vsi ) ∧ Target(ei, vti ) ∧ Arg(ei, ∥Qi∥).
Note that in the above, the sender, recipient, attribute, and unit properties are optional. If the relation is missing any property, the corresponding conjunct is omitted (e.g., if the attribute property is missing, the corresponding term Ai(x) is omitted).
See Figure 11 for an example application of the above conversion function.
Natural language representation:
"James has 232 balloons. Amy has 101 balloons. How many more balloons does James have than Amy?"
MATHWORLD **representation:**
James has 232 balloons. Amy has 101 balloons. How many more balloons does James have than Amy?
| MATHWORLD representation: James Entity: balloon Explicit(x1, Quantity: 232 balloon, add) | Amy Entity: balloon Quantity: 101 |
|---------------------------------------------------------------------------------------------|-------------------------------------|
First-order logic representation:
∃v1∃v2∃e1∃u1( Owner(v1, James)
∧ Measure(v1, 232) ∧ ∀x ∈ v1.balloon(x)
∧ Owner(v2, Amy)
∧ Measure(v2, 101) ∧ ∀x ∈ v2.balloon(x)
∧ ComparisonAdd(e1)
∧ Source(e1, v2) ∧ Target(e1, v1)
∧ ∃r(Arg(e1, r) ∧ Measure(*r, u*1)
∧ ∀x ∈ r.balloon(x)))
Figure 11: Example of a math story problem with its equivalent representations as a world model graph and in first-order logic.
## D.3 Semantics Of Relations And Predicates
We define the semantics of each relation, starting with the RATE relation:
$\forall v_{s}\forall v_{t}\forall r(\text{Rate}(e)\wedge\text{Arg}(e,r)$ $\wedge\text{Source}(e,v_{s})\wedge\text{Target}(e,v_{t})$ $\rightarrow\text{Partition}(r,v_{s})\wedge$ $\exists m(\text{Measure}(r,m)\wedge\text{Measure}(v_{t},m)))$,
where Partition(*r, v*s) denotes that r is a *partition* of the set vs: r is a set of disjoint subsets of vs such that their union is equal to vs. More precisely:
$\forall x\forall y\Big{(}\text{Partition}(x,y)\leftrightarrow$ $\forall z,z^{\prime}\in x(z\neq z^{\prime}\to z\cap z^{\prime}=\varnothing)\wedge y=\bigcup z\Big{)}$.
We also use the notion of a partition to define the semantics of the TRANSFER relation:
$\forall e\forall v_{s}\forall v_{t}\forall r(\text{Transfer}(e)\wedge\text{Arg}(e,r))$ $\wedge\text{Source}(e,v_{s})\wedge\text{Target}(e,v_{t})$ $\rightarrow\exists z(\text{Owner}(v_{s},z)\wedge\text{Owner}(v_{t},z))$ $\wedge\text{Recipient}(e,z)$ $\wedge\text{Partition}(\{r,v_{s}\},v_{t}))$ $\vee\exists z(\text{Owner}(v_{s},z)\wedge\text{Owner}(v_{t},z))$ $\wedge\text{Sender}(e,z)$ $\wedge\text{Partition}(\{r,v_{t}\},v_{s})))$.
We define the semantics of COMPARISONADD:
∀e∀vs∀vt∀ms∀mt∀r(
ComparisonAdd$(e)$$\Lambda$$\mathrm{Arg}(e,r)$ $\Lambda$$\mathrm{Source}(e,v_{s})$$\Lambda$$\mathrm{Target}(e,v_{t})$ $\Lambda$$\mathrm{Measure}(v_{s},m_{s})$$\Lambda$$\mathrm{Measure}(v_{t},m_{t})$ $\to m_{s}+r=m_{t})$.
COMPARISONMUL is defined similarly. Finally, we define PARTWHOLE as a simple set partition:
∀vt∀X(
PartWhole(*X, v*t) ↔ Partition(*X, v*t)).
## E Annotation Details E.1 Data Preprocessing
We segment all sentences into smaller independent clauses when possible. This is done in order to create simpler units of training data for a semantic parser. We use the Berkeley Neural Parser (Kitaev and Klein, 2018; Kitaev et al., 2019) for this task, splitting sentences recursively at the two coordinating conjunctions and and but.
14 Over the three datasets we consider, 302 sentences are split in this way. Additionally, some question sentences start with a subordinate clause that introduces new information, like "If Alice bought 3 more apples today, how many apples did she end up with?". We split these into a declarative clause and an interrogative clause, and remove the leading subordinating conjunction.
14Some phrases with a trailing preposition are split erroneously in this way, like "Sally picked 7 lemons and Mary picked 9 lemons from the lemon tree" is split into "Sally picked 7 lemons" and "Mary picked 9 lemons from the lemon tree", pointing to the challenges of prepositional phrase attachment in neural constituency parsing (Sopena et al., 1998). We detect and correct such cases manually.
## E.2 Annotation Scheme And Tool
As mentioned in § 3, MATHWORLD considers logical forms at the sentence level. Hence, we must also annotate the world model graphs incrementally, sentence by sentence. This is done via a drag-and-drop annotation tool, ANT-NLP, built specifically for the purpose of this work.15 When annotating a problem in the tool, annotators get to build the graph incrementally one sentence at a time. Each sentence is given in a separate page, as shown in Fig. 12, and the graph from the previous sentence is carried over to the next. We save all incremental world models, as they set the basis for the sentence linearization described in App. F.1.
The incremental world models are stored in json graph format.16 We want annotators to include all information included in the text that fits MATHWORLD, irrespective of the relevance to the question. Therefore, in order not to create any bias stemming from the question, we hide the question sentence until all other preceding parts have been annotated.
We ask annotators to follow the ordering that information is given within each sentence when adding containers and relations. For instance, a sentence "Alice has 3 apples and 4 oranges" should first be assigned a container for apples and then be assigned a container for oranges. This allows us to preserve the ordering of the text when linearizing logical forms for training data. To capture this ordering we annotate IDs for the containers and relations. The space of IDs is the set of natural numbers, and is shared between containers and relations. Ids are incremented automatically in the tool as annotators add new containers or relations.
The tool includes options to flag problems that require background knowledge, or where the annotator is uncertain about their annotation. They can additionally add a free text comment about their annotation for a particular problem.
## E.3 Procedure
Annotation is performed by external workers, who are taught to be familiar with the semantics of MATHWORLD. We employ two annotators, hired from a small annotation company in India.17 At the time of annotation, both of them were undergraduate students in technical universities. As support material, annotators are given a comprehensive guideline document, a set of example annotations and a video showcasing how annotation is performed using the tool. We follow three phases for annotation:
1. **Training phase**. This phase is for annotators to learn the formalism. They are given batches of 5−7 problems at a time to annotate independently. These annotations are then discussed and, if needed, corrected in a follow-up meeting. The initial batches consist of simple problems, both conceptually and linguistically.
After the annotators can successfully annotate these simple problems, they are gradually given more challenging ones. This phase ends when all annotators can successfully annotate most problems across all datasets.
2. **Agreement phase**. Here, annotators are given the same set of 90 problems, with 30 from each dataset. They are asked to annotate these independently. This set is used to measure agreement between annotators.
3. **Scale-up phase**. Here, annotators are given separate datasets to annotate on their own.
Some of these problems are overlapping in order to allow for agreement analysis.
## E.4 Agreement Analysis
We give further details on the agreement analysis of the 125 overlapping problems discussed in § 4.
As mentioned, there were 18 isomorphic disagreements between the two annotators (i.e., not weakly equivalent). Out of these, 7 were due to structural ambiguity (App. B.2), 1 was due to a type of error that was fixed during annotation check (see below),
8 due to a type of error for which a problem would be discarded during annotation check, and 2 less serious errors that would not be detected during the annotation check. Most errors were attributed to the same annotator. Ground truth data for overlapping annotations were thus taken from the annotated set of the annotator with the higher performance on the overlapping problems.
17There was initially a third annotator involved. However, this annotator dropped out during phase 3 as described below.
At that time, it would have required a considerable time investment to hire and train yet another annotator, and so instead, we had one of the two other annotators cover up.
![19_image_0.png](19_image_0.png)
There were 46 problems that had a weak equivalence agreement, but not a strong equivalence agreement. Some of these were due to errors and some were due to property ambiguity (App. B.1).
The errors were mostly incurred from entering an incorrect property, seemingly by carelessness. Several such cases could be detected and corrected as they led to errors when parsing the world model json file or when applying the reasoner to the world model (App. H). Cases of property ambiguity were often due to the attribute property.
We additionally stratified agreement across relation type. Problems with COMPARISON relations seemed to have the lowest weak equivalence agreement, followed by RATE and TRANSFER. For strong equivalence agreement on the other hand, RATE problems had the lowest agreement, followed by TRANSFER and then PARTWHOLE.
## E.5 Annotation Check And Correction
We performed the following checks of the annotations: whether the json could be parsed into a well-formed world model, whether applying the deterministic reasoner (App. H) would produce the correct answer and whether the annotator had flagged the problem with low confidence or provided a free text comment. Based on these we were able to detect and correct several faulty annotations.
Some common errors were: entering the wrong number, entering the wrong reference variable, forgetting to the enter the reference variable, orienting the edge in the wrong direction and misspelling label names. Such errors could easily be corrected.
Other more fundamental errors that could not be easily fixed led to discarding the annotation. We also spotted some cases of wrong annotated answers stemming from the original dataset, which were corrected.
## F Conversion Between World Model Graph And Logical Form
As mentioned in § 5.1, an integral part of our proposed MATHWORLD solver framework, and working with MATHWORLD more generally as in § 5, is the conversion between world models g and logical forms m. In this appendix section we provide details of both directions of this conversion. Both directions of the conversion are lossy to some small degree, as is mentioned in footnote 18.
## F.1 World Model Graph To Logical Form
Each logical form can be viewed as a incremental graph update that consists of containers and relations based on a sentence in the problem text, which is represented as a text sequence.
Containers and relations have varying arity, depending on which properties are present. This opens two possibilities. We may either split them into forms for each set of properties and have the property names explicit in the signatures (e.g., containers would have one representation each with arity 3 and 5, and two representations with arity 4),
or keep the property ordering consistent and give a default null token for missing properties. We opt for the latter, and set the default null token to be none.
We define the following predicates:
- container(label, quantity, entity, attribute, unit)
- transfer(recipient label, sender label, quantity, entity, attribute, unit)
- rate(label, quantity, source entity, source attribute, source unit, target entity, target attribute, target unit)
- difference(target label, source label, quantity, target entity, target attribute, target unit, source entity, source attribute, source unit)
- explicit(target label, source label, quantity, target entity, target attribute, target unit, source entity, source attribute, source unit)
- part(whole label, whole entity, whole attribute, whole unit, part1 label, part1 entity, part1 attribute, part1 unit, . . . , partn label, partn entity, partn attribute, partn unit)
Note that for COMPARISON, the "type" property is lifted out and its value replaces "comparison" as the name of the predicates. We replace "add" and "times" by "difference" and "explicit", respectively, for practical reasons: We do not want the name of the operator that might be required to solve the problem to be confounded with the name of the predicate. Further note that the above predicates are overloaded in comparison to the ones mentioned in
§ 3. The reason for that is that we require additional information in order to match the linearization to existing incremental graphs (the other direction of the conversion, described in App. F.2). For instance, consider two disconnected containers in a world model graph. If one wished to present them as connected with RATE, it would be sufficient to provide the quantity property to the rate. See, e.g., how in Fig. 1, quantity is provided as the only property in the RATE relation. The other properties given above would be redundant as they are already given in the containers. For a model to be able to orient that rate, however, it needs the additional information to match to the two existing containers.
Note that in the case of TRANSFER, there may be two associated edges in the graph if the properties "recipient label" and "sender label" both take values other than none. However, these are both represented by a single transfer predicate as above. PARTWHOLE is the only relation whose arity varies, reflecting the number of subsets present in the PARTWHOLE construction. An alternative would have been to have one predicate per edge, but that would have introduced redundancy.18 A sentence-level logical form often contains multiple components of the above. In these cases, we follow the ordering as introduced in text, in line with the annotated IDs. If a relation is added together with its source and/or target containers, then the containers must always precede the relation in the ordering. We enforce that the source container always precedes the target container.
As an example, the logical form of the sentence
"In a friend group there are 5 football players and 3 tennis players" is:
container(friend group, 5, player,
container(friend group, 5, player, football, none) container(friend group, 3, player, tennis, none)
Finally, a world model graph may have containers that have not been explicitly introduced in text.
18However, a drawback of our PARTWHOLE representation is that it assumes that all the part-whole edges are always introduced together in the same sentence. While this is mostly the case for the data we observe, we found the following exception: "Next on her list are the homeless people where she spent a total of $900.00. She gave $325.00 to the first set of homeless families and $260.00 to the second set of families.
How much did she give to the last set of homeless families?".
This is one example showing that the conversion is slightly lossy.
For instance, the two sentences "Alice has 5 apples. She ate 2 of them." will be represented by a world model with two containers and a TRANS-FER edge, but only the source container is explicitly mentioned in text (in the first sentence). When writing the world model graph as a logical form, we therefore discard the target container in this case.
In general, this is done by discarding all containers that do not hold an explicit quantity, unless the sentence is interrogative. For interrogative sentences we want the logical form to represent the reference variable.
## F.2 Logical Form To World Model Graph
We now consider the other direction, namely that we have a sequence of logical forms m1*, . . . , m*n on the form described in the previous section and wish to convert them to a world model graph g.
For m1, we can trivially convert the logical form to a graph. Note that the relation predicates specify the properties needed to match the relation to containers as well, so if there is a relation predicate in the logical form but no source and/or target container, we can simply create those. For subsequent logical forms, we match the logical form to the graph created from preceding sentences. For relations, we must make sure that we do not create new containers linked by that relation, if any or both of those containers are already existing in the world model. We thus first match the properties corresponding to the source and target containers in the relation predicate to any possibly existing containers, and only create new ones if none are found. In addition, some sentences will just supply an update of an unknown quantity to a known value. In these cases, we do not create a new container, but match the quantity to one already existing so that we can preserve the structural information of that container. We remark that in case that the matching with already existing containers in the world model returns multiple options, we default to the most recently created one. This turns out to work well for most cases, but could be one source of loss.
The reference variable corresponds to the logical form of the last sentence: Interrogative sentences are mapped to logical forms the same way as declarative sentences, and the reference variable is taken as the variable in the container or relation that matches the question's logical form.
Finally, for predicted logical forms, we first check the logical forms for syntactic wellformedness, keeping only the parts of the logical form that are well-formed. An additional (weak)
check for semantic well-formedness may match the properties to the vocabulary of the MSP, along with special tokens like "none", "world" etc.
## G Difficult Cases To Parse
We estimate a high coverage of our formalism among MSPs. However, although a problem might be within semantic and conceptual coverage of MATHWORLD, the text itself might prove challenging for a parser to interpret. Here, we present two problems that are captured by MATHWORLD
that put a high burden on the parser.
First, consider the following problem:
The teacher distributes 4 candies to 2 students.
Student A now has 2 more candies than Student B. Both students had 0 candies to begin with.
How many candies does Student A have? In this problem there is a transfer involved in the first sentence. The recipient of the transfer is not a single independent container however, but a set of two students. We have no information on how many candies these two students have individually, but we know that they collectively got 4 more than they had before. To capture this, we may represent both students as a container with a PARTWHOLE
relation to the individual students, which will be the recipient of the TRANSFER. The whole problem is assigned the world model in Fig. 13. This is a faithful and correct world model, but the first sentence puts a high burden on the semantic parser:
It must add The teacher distributes 4 candies to 2 students. Student A now has 2 more candies than Student 8 containers and 6 relations.
![21_image_0.png](21_image_0.png)
Next, consider the following problem (adapted from GSM8K):
Zack decided to give his 3 friends 20 marbles each and kept 5. How many marbles did he initially have?
The first sentence conveys a lot of information.
We must add a container for the total number of marbles that Zack possesses, with PARTWHOLE
relations representing how many marbles Zack has left and how many his friends have. In addition, we know that there are three friends, which we represent with a RATE. See the world model in Fig. 14. However, the fact that Zack already possesses marbles is implicit from the text, and would be challenging for a parser to detect. As a partial remedy, we could introduce a "TransferEvenly" relation, which would represent a transfer of 20 to each container in a set. In this case, Zack's friends would each be represented in a container.
Zack decided to give his 3 friends 20 marbles each and kept 5. How many marbles did he initially have?
![22_image_1.png](22_image_1.png)
![22_image_2.png](22_image_2.png)
## H Reasoner
The recursive solver takes as input a target variable and a set of visited equations. It takes all the equations containing the target variable and sorts them in increasing order of number of unknowns. Next, it iterates over the equations in this order.
If the equation only has one unknown, that unknown must be the target variable. The function then solves for the target variable and outputs the numeric value. Otherwise, it goes over the other free variables in the equation and applies the recursive function to those as target, with the equation added to the set of visited equations in order to prevent loops. Having solved for the other free variable, it substitutes its numeric value in the equation and solves for the target variable, if possible.
We present pseudo-code for the deterministic reasoner in Alg. 1.
Note that this solver assumes a certain structure of the equations, namely, that a solution can be reached by solving a sequence of equations with one unknown. Such is indeed the case for the simple MSPs we consider. However, in the case of a general system of linear equations, this algorithm would fail as it cannot handle equations of more than one unknown. We opt for our recursive solution rather than Gaussian elimination due to runtime gains: for a system of n equations with n unknowns, Gaussian elimination runs in O(n 3), while our solution has worst-case complexity O(n 2).
Further note that if we extend r to be a set of variables, we can store the intermediate results in a table and get a dynamic program. This is not necessary in our case as we do not have overlapping sub-problems.
Algorithm 1: Deterministic recursive reasoner.
![22_image_0.png](22_image_0.png)
![22_image_3.png](22_image_3.png)
1 **function** recursiveReasoner(x, visited)
/* Prepare equations containing x that 2 eqs ← {equations containing x} \ visited
/* Sort in increasing order of number of 3 eqs ← **sort**(eqs, \# of unknowns, increasing)
/* Go over equations in order */
/* Solve for x if possible */
7 **else** /* Otherwise, solve recursively */
′val ← **recursiveReasoner**(x, visited+eq)
/* Substitute unknown for value */
## I Experimental Details I.1 Solving Pipeline
Setup As our LLM we use Codex code-davinci002. We design a prompt with 50 ground truth examples from MAWPS and ASDIV-A. One example consists of the source sentence, the target linearized logical form, as well as the source and target of the previous sentence in the same MSP,
in order to allow the model to account for dependencies between sentences. These examples are handpicked to be representative of MATHWORLD.
For every MSP, we then feed each sentence following the same pattern excluding the target as a suffix to the prompt, and sample the target output from Codex. The experiments were performed on the 18th of January 2023. The parameters used for
| MAWPS ASDIV-A SVAMP | | | |
|-----------------------|------|------|------|
| Answer Acc (%) | 33.8 | 26.9 | 11.1 |
| Complete WM (%) | 50.7 | 43.3 | 33.3 |
| Weak Smatch (avg.) | 0.76 | 0.68 | 0.59 |
| Strong Smatch (avg.) | 0.76 | 0.60 | 0.38 |
sampling were the following: temperature is set to 0, max tokens is 200, frequency and presence penalty are both left at 0 and we add an additional new line stop token (which is used in the prompt to end the ground truth logical forms.
World models are built incrementally using the method described in App. F.2. We apply the deterministic reasoner (App. H) to produce an answer.
Results We show the results in Table 5. Observe that on average less than half of the predicted world models result in an answer (i.e. are complete). The rest of the times the reasoner is either unable to solve for the reference variable (the system of equations induced by the world model is underdetermined) or the world model lacks a reference variable. Incorrect answers are often caused by slight permutations of the correct logical forms (e.g.,
Codex having swapped the sender and recipient in a TRANSFER relation). If we stratify the problems by relation type, we observe that the model has the highest answer accuracy for TRANSFER and RATE,
while PARTWHOLE problems have the lowest answer accuracy. This is to be expected given that the information associated with PARTWHOLE problem is not often made explicit in text (§ 3.2.4).
## I.2 Details On Constrained Generation
The GPT 3.5 Turbo generation experiments were performed on the 24th of May 2023. The model used was gpt-3.5-turbo-0301. The sampling parameters are the same as those used during parsing
(App. I.1).
We display the results of the other five MSPs as mentioned in § 5.3 in Table 6. Observe that in all cases, the model is able to generate problems that are faithful to the concept, number and properties of the original world model (comparing the left column and the middle column). Further note that with a temperature parameter of 0, the generated problems are rather conservative. We leave for future work to explore the implications of the sampling parameters for the generated outputs.
Finally, consider the right column, where we display the MSPs generated from augmented world models. Three of the generated examples are not completely faithful to how we augmented the world models. In the first example from the top, "Lexie's brother" is provided as the recipient property in the TRANSFER relation, but in the generated example Lexie's brother is the sender. In the third example from the top, we augment the world model with a RATE, but the model instead generates a transfer type MSP. In the last example, Bob is provided as sender while Josh is provided as recipient, but the model generates a problem with these values being swapped. The other two are faithful.
## I.3 Details On Prompting Using Synthetic Questions
The GPT-3 probing experiments were performed on the 18th of January 2023. The model used was text-davinci-003. The sampling parameters used are the same as those used for Codex during parsing
(App. I.1).
In Table 7, we present the templates used to create synthetic question-answer pairs for prompting large language models.
| Original MSP | MSP generated from world model | Augmentation | MSP generated from augmented world model |
|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-----------------------------------------------------------------------------------------------------------------------|------------------------------------------------------|--------------------------------------------|
| Lexie's younger brother helped pick up all the paper clips in Lexie's room. He was able to collect 81 paper clips. If he wants to distribute the paper clips in 9 boxes, how many paper clips will each box contain? | Lexie's brother has 81 paper clips. | RATE | Lexie's brother had 81 paper clips. |
| He wants to put them in 9 boxes. | → TRANSFER He gave 21 to Lexie. | | |
| How many paper clips will he put in each box? | How many paper clips does Lexie's brother have now? | | |
| Kevin collected toys to use as prizes at the fair. He collected 14 stuffed animals. He also collected 18 frisbees and several yo-yos. Kevin has 50 prizes in all. How many yo-yos did Kevin collect? | Kevin won 50 prizes at the fair. He won 14 stuffed animals, 18 frisbees, and some yo-yos. How many yo-yos did he win? | PARTWHOLE | Kevin has 14 stuffed animals. |
| → TRANSFER He gave 5 of them to his friend. How many stuffed animals does Kevin have now? | | | |
| Mrs. Hilt wants to make a border around her garden. She needs 125 rocks to complete the border. She has 64 rocks. How many more rocks does she need to complete the border? | → RATE | Mrs. Hilt is making a rock border around her garden. | |
| TRANSFER | She has 125 rocks to use. She has already used 25 rocks. How many rocks does she have left? | | |
| Zoe's school sold 620 dollars in raffle tickets. | Zoe spent $620 on raffle tickets. | RATE | Zoe had 620 dollars. |
| If each ticket cost 4 dollars, | Each ticket cost $4. | → TRANSFER She spent 100 dollars. | |
| how many tickets did they sell? | How many tickets did she buy? | How much money does Zoe have now? | |
| Josh had 16 marbles in his collection. | Josh had 16 marbles. | TRANSFER | Josh has 16 marbles. |
| He lost 7 marbles. | He lost 7 of them. | → TRANSFER He gave 7 marbles to Bob. | |
| How many marbles does he have now? | How many marbles does Josh have now? | How many marbles does Josh have now? | |
| Mrs. Hilt was making a rock border around her garden. She had 125 rocks to use. She used 64 rocks to make the border. How many rocks did she have left? | | | |
Table 6: Example of generated math story problems conditioned on world models in MATHWORLD. The left column shows the original math story problem, the middle column shows a math story problem generated conditioned on the ground truth world model of the original problem, and the right column shows a math story problem generated conditioned on a world model that has been created by augmenting the ground truth world model of the original problem. Sentences not faithful to the logical form are colored red.
| containers | Q: How many {attr}{ent}s does {label} have? A: {quant} Q: What is the amount of {attr}{ent}s associated with {label}? A: {quant} |
|--------------------|------------------------------------------------------------------------------------------------------------------------------------|
| TRANSFER | Q: How many {ent}s are transferred from {sour} to {targ}? A: {quant} |
| COMPARISON (add) | Q: How many more {ent}s does {targ} have than {sour}? A: {quant} |
| COMPARISON (times) | Q: How much more {ent} does {sour} have than {targ}? A: {quant} |
| RATE | Q: How many {ent} does {targ} have per {sour}? A: {quant} |
| PARTWHOLE | Q: How many {sour} are part of {targ}? A: {quant} |
Table 7: Templates to automatically create question-answer pairs for prompting. The templates are filled based on the information in the world model.
| prompt types | pairs sourced from one MSP (one-shot), i.e., x = 1 Baker made 43 cakes and 114 pastries. If he sold 154 pastries and 78 cakes. Q: How many cakes does baker have? A: 43 Q: How many sold cakes are associated with baker? A: 78 Q: How many more pastries than cakes did baker sell? A: 76 |
|------------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| (1) synth QAs (all at once) | Bobby had 19 pieces of candy. He ate 2 pieces of candy. Q: What is the amount of candys associated with bobby? A: 19 Q: How many candys are transferred from bobby? A: 2 Q: how many pieces of candy does he still have left? A: Baker made 43 cakes and 114 pastries. Q: How many cakes does baker have? A: 43 Baker made 43 cakes and 114 pastries. If he sold 154 pastries and 78 cakes. Q: How many sold cakes are associated with baker? A: 78 Baker made 43 cakes and 114 pastries. If he sold 154 pastries and 78 cakes. Q: How many more pastries than cakes did baker sell? A: 76 |
| (2) synth QAs (sent by sent) | Bobby had 19 pieces of candy. Q: What is the amount of candys associated with bobby? A: 19 Bobby had 19 pieces of candy. He ate 2 pieces of candy. Q: How many candys are transferred from bobby? A: 2 Bobby had 19 pieces of candy. He ate 2 pieces of candy. Q: how many pieces of candy does he still have left? A: Baker made 43 cakes and 114 pastries. If he sold 154 pastries and 78 cakes. Q: How many more pastries than cakes did baker sell? A: 76 |
| (3) original MSP QAs | Bobby had 19 pieces of candy. He ate 2 pieces of candy. Q: how many pieces of candy does he still have left? A: |
Table 8: We experiment with three different types of prompts. They are displayed for the one-shot case in which one MSP in addition to the one we are trying to solve is provided in the prompt. In the above case, the model is tasked with making inference on the problem "Baker made 43 cakes and 114 pastries. If he sold 154 pastries and 78 cakes. How many more pastries than cakes did baker sell?".
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
✓ A1. Did you describe the limitations of your work?
Yes, after the Conclusion (sec 7)
✓ A2. Did you discuss any potential risks of your work?
Yes, after the Limitations in the Ethics Statement
✓ A3. Do the abstract and introduction summarize the paper's main claims?
Yes, Abstract (sec 0) and Introduction (sec 1)
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✓ **Did You Use Or Create Scientific Artifacts?** Sec 4 And Details In App E
✓ B1. Did you cite the creators of artifacts you used?
Yes, Introduction (sec 1) and Data Collection (sec 4)
✗ B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
Datasets were published in previous *ACL conferences and are free to be used for research purposes.
✗ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
Our datasets used were created for research purposes and we have not used it for any other purposes.
B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
Not applicable. Our data uses fictional characters.
B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
Not applicable. Our dataset consists of annotations over previously published datasets. We trust the such documentation is given in the original papers.
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
Sec 4
## C ✓ **Did You Run Computational Experiments?** Yes, Sec 5.3 And Sec 6
✗ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
No, we used pre-trained large language models in our experiments The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance.
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Yes, in sec 5.3, sec 6 and app I
C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
Not applicable. Left blank.
✗ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
No, but we submitted code in which such details are included.
## D ✓ **Did You Use Human Annotators (E.G., Crowdworkers) Or Research With Human Participants?** Yes, Sec 4 And App E
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
No response.
✓ D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)?
Yes. We paid a company for annotation, which in turn paid the annotators. We would be happy to give more details on their salaries if necessary for the camera-ready version.
✗ D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
Not in the paper. We did have such discussions with the annotators prior to start of annotation, but not in writing.
✗ D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
No, we were unaware of this possibility. Although we did make sure that data collection complied with our institution's requirements.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
No response. |
jawahar-etal-2023-automoe | {A}uto{M}o{E}: Heterogeneous Mixture-of-Experts with Adaptive Computation for Efficient Neural Machine Translation | https://aclanthology.org/2023.findings-acl.580 | Mixture-of-Expert (MoE) models have obtained state-of-the-art performance in Neural Machine Translation (NMT) tasks. Existing works in MoE mostly consider a homogeneous design where the same number of experts of the same size are placed uniformly throughout the network. Furthermore, existing MoE works do not consider computational constraints (e.g., FLOPs, latency) to guide their design. To this end, we develop AutoMoE {--} a framework for designing heterogeneous MoE{'}s under computational constraints. AutoMoE leverages Neural Architecture Search (NAS) to obtain efficient sparse MoE sub-transformers with 4x inference speedup (CPU) and FLOPs reduction over manually designed Transformers, with parity in BLEU score over dense Transformer and within 1 BLEU point of MoE SwitchTransformer, on aggregate over benchmark datasets for NMT.Heterogeneous search space with dense and sparsely activated Transformer modules (e.g., how many experts? where to place them? what should be their sizes?) allows for adaptive compute {--} where different amounts of computations are used for different tokens in the input. Adaptivity comes naturally from routing decisions which send tokens to experts of different sizes. AutoMoE code, data, and trained models are available at \url{https://aka.ms/AutoMoE}. | # Automoe: Heterogeneous Mixture-Of-Experts With Adaptive Computation For Efficient Neural Machine Translation
Ganesh Jawahar∗♣**, Subhabrata Mukherjee**♠
Xiaodong Liu♠, Young Jin Kim♠, Muhammad Abdul-Mageed♣♢**, Laks V.S. Lakshmanan**♣
Ahmed Hassan Awadallah♠, Sebastien Bubeck♠**, Jianfeng Gao**♠
♣University of British Columbia, ♠Microsoft Research, ♢MBZUAI
## Abstract
Mixture-of-Expert (MoE) models have obtained state-of-the-art performance in Neural Machine Translation (NMT) tasks. Existing works in MoE mostly consider a homogeneous design where the same number of experts of the same size are placed uniformly throughout the network. Furthermore, existing MoE works do not consider computational constraints (e.g.,
FLOPs, latency) to guide their design. To this end, we develop AutoMoE - a framework for designing heterogeneous MoE's under computational constraints. AutoMoE leverages Neural Architecture Search (NAS) to obtain efficient sparse MoE sub-transformers with 4× inference speedup (CPU) and FLOPs reduction over manually designed Transformers, with parity in BLEU score over dense Transformer and within 1 BLEU point of MoE SwitchTransformer, on aggregate over benchmark datasets for NMT.
Heterogeneous search space with dense and sparsely activated Transformer modules (e.g.,
how many experts? where to place them? what should be their sizes?) allows for adaptive compute - where different amounts of computations are used for different tokens in the input. Adaptivity comes naturally from routing decisions which send tokens to experts of different sizes.
AutoMoE code, data, and trained models are available at https://aka.ms/AutoMoE.
## 1 Introduction
Sparsely activated models like the Mixture-ofExperts (MoE) (Fedus et al., 2022b) perform conditional computation in which only a subset of the weights of the network are activated per input. Selective compute allows us to design neural networks with a large number of model parameters, without significant increase in the computational cost. With increased capacity, these sparse models have demonstrated state-of-the-art performance in natural language tasks such as neural machine
∗Correspondence to {[email protected], [email protected]}.
translation (NMT) (Kim et al., 2021; Kudugunta et al., 2021; Zuo et al., 2022).
MoE architectures require several design choices: *(a) Expert placement:* Identifying Transformer layers for introducing expert sub-networks.
(b) Number of experts: How many experts to place in different layers? *(c) Expert FFN size*: What should be the feedforward network (FFN) size for each expert? Given the large search space of potential architectures and the exorbitant computational cost of training and evaluating them, existing approaches manually design MoE architectures from a highly-restricted homogeneous space. For instance, they use the same number of experts of the same capacity in different layers and make ad-hoc decisions like introducing experts in every other layer (Fedus et al., 2022b; Kim et al., 2021; Zuo et al., 2022; Du et al., 2022; Artetxe et al., 2021) or every four layers (Zoph et al., 2022).
While these MoE's support conditional computation, homogeneity (specifically, fixed-size experts)
results in the same amount (albeit different subsets) of weights to be applied to each input. We hypothesize that this is not an optimal solution and that we can reduce the number of experts (in some layers) to reduce communication cost, and the size
(of some experts) to reduce computation cost resulting in reduction in model size, FLOPs and latency without much quality degradation.
This naturally extends MoEs to be adaptive compute models (similar to work on early exit (Schuster et al., 2022)) where different amounts of computations are used for different inputs. The adaptivity comes naturally from the routing decisions which would send tokens to experts of different sizes.
The above observations are depicted in Table 1, which shows demonstrative examples of manually designed MoE's vs. those designed by our AutoMoE framework. We compare these architectures against various computational metrics (e.g.,
latency, FLOPs, active MoE parameters), archi-
![1_image_0.png](1_image_0.png)
tectural configurations and task performance. For the most efficient configuration (last row in the table), AutoMoE reduces the number of decoder layers, compensating for the capacity with increased experts in the bottom layer, and places most of the experts in the encoder. Overall AutoMoE introduces the following components and contributions:
- *Heterogeneous design with adaptive computation* for MoEs with variable number, size and placement of experts in both encoders and decoders.
- Extends *Supernet training* and evolutionary search from prior work on dense Transformers to new search space of sparse MoE's. This combines all possible MoE sub-architectures in a single graph; jointly training them via weightsharing; and searching for optimal one with best possible performance on a downstream task satisfying a user-specified computational constraint.
- Experiments on NMT benchmarks demonstrate AutoMoE-designed MoE's to obtain 4× inference speedup on CPU and equal FLOPs reduction over manually designed Transformers, with parity in BLEU with dense Transformer and within 1 BLEU point of MoE SwitchTransformer. Further, it outperforms NAS methods in the dense search space (e.g., 1.3× and 2.4× FLOPs reduction and inference speedup over HAT (Wang et al., 2020) and Evolved Transformer (So et al., 2019)).
## 2 Background
Mixture-of-Experts: MoE's have a rich literature in machine learning dating back to the early 90s (Yuksel et al., 2012). They have received significant attention with works such as (Shazeer et al.,
2017), Switch Transformers (Fedus et al., 2022b),
GShard (Lepikhin et al., 2020), BASE (Lewis et al.,
2021), Hash (Roller et al., 2021), GLaM (Du et al.,
2022), Stochastic Experts (Zuo et al., 2022), Gating Dropout (Liu et al., 2022) and ST-MoE (Zoph et al., 2022). Some crucial differences in these works include choice of expert routing function, expert placement technique, stability/performance enhancement techniques and nature of the task (pretraining vs. fine-tuning). Some challenges in building sparse expert models include: (i) lack of diversity in expert design (expert layer selection, number of experts, expert size, etc.), (ii) training instability, (iii) poor out-of-distribution generalization, (iv)
cross-task adaptation of pre-trained models, (v)
communication bottleneck, (vi) high memory and
(vii) expert load balancing issue, to name a few. A comprehensive review of recent sparse expert models can be found at (Fedus et al., 2022a).
MoE design: Most works in MoE rely on ad-hoc manual choices for expert placement, number of experts and their sizes. Existing approaches mostly use manual design, where they add experts on (i) alternate layers (Fedus et al., 2022b; Kim et al., 2021;
Machine Translation #Experts in each layer Accuracy **Computational Footprint**
Design Approach Encoder Decoder BLEU Latency # Active Params FLOPs (G)
Manually designed (every layer) 4-4-4-4-4-4 4-4-4-4-4-4 27.87 861ms 56M 3.4 Manually designed (every other layer) 1-4-1-4-1-4 1-4-1-4-1-4 28.48 794ms 56M 3.4
AutoMoE 1-1-4-4-4-1 4-1-1-1 28.15 585ms 46M 2.9
Zuo et al., 2022; Du et al., 2022; Artetxe et al.,
2021), (ii) every four layers (Zoph et al., 2022), or (iii) final few layers (Rajbhandari et al., 2022).
While these MoE's support conditional computation, they generally do not support adaptive compute since same number of expert parameters apply to every input, largely given by their homogeneous design (e.g., all experts of same size). Further, MoE design is generally agnostic to computational constraints (e.g., latency, memory) of the hardware in which the MoE model has to be deployed.
Neural Architecture Search (NAS): Given a search space of architectures and efficiency constraints (e.g., model size, latency), NAS typically aims to identify the optimal architecture that maximizes the task performance, while satisfying the efficiency constraints. NAS has been recently used for natural language understanding tasks to build efficient BERT (Devlin et al., 2019) and GPT (Brown et al., 2020) based pre-trained language models (Xu et al., 2021; Yin et al., 2021; Xu et al., 2022a,b; Gao et al., 2022; Dong et al., 2021; So et al., 2021; Javaheripi et al., 2022) as well as for machine translation tasks (So et al., 2019; Wang et al., 2020).
Hardware aware transformers (HAT) (Wang et al.,
2020) is a state-of-the-art NAS framework with dense Transformers for MT that uses hardware latency as feedback for optimization.
However, all of the above NAS works consider a search space with densely activated Transformers and non-MoE architectures, They primarily search over typical Transformer architectural hyper-parameters like number of layers, attention heads and hidden size. In contrast, we propose the first NAS framework that searches for efficient sparsely activated Mixture-of-Expert modules in Transformers. Our heterogeneous AutoMoE framework addresses some longstanding design choices for MoE's like how many experts? which layers to place them? what should be their sizes? and so on.
## 3 Designing Heterogeneous Mixture-Of-Experts
We now present the components of AutoMoE framework (illustrated in Figure 1) for designing efficient MoE's under computational constraints.
## 3.1 Heterogeneous Moe Search Space
Existing MoE approaches restrict their design space by considering uniform distribution of size and number of experts placed in different Transformer layers. For instance, the standard MoE design (Fedus et al., 2022b) for an L-layer Transformer with M experts placed in alternate layers have only two possible configurations viz., {1-M-1-· · · }, {M-1-M- *· · ·}*. (a) Our design space allows variable number of experts in each layer resulting in ML possible configurations. (b) Furthermore, our design space also allows *variable expert size*, e.g., by modulating the width of the feedforward
(FFN) subnetworks for different experts. Considering N possible FFN dimensions for each expert results in NMLpossible configurations for designing the expert space. (c) Finally, given the autoregressive nature of tasks like neural machine translation, the inference cost is dominated by the decoder (Kasai et al., 2021). For instance, for token-based MoE, decoders take 200× the time per step compared to encoders at peak throughput (Kudugunta et al., 2021). Therefore, we further consider *variable number of decoder layers* along with the above choices for expert placement and expert capacity.
To the best of our knowledge, our work is the first to study such a flexible and exhaustive design space for MoE architectures.
In addition to heterogeneous experts, we allow flexible design for non-expert Transformer modules like the number of attention heads, hidden size and intermediate feedforward dimensions. This heterogeneous design of non-MoE, i.e., dense Transformer modules, has been explored in prior works such as HAT (Wang et al., 2020) for generation
| Attributes | AutoMoE | Transformer Base / Big |
|----------------------------------------|--------------------|--------------------------|
| Encoder-Embedding-Size | {512, 640} | 512 / 1024 |
| Decoder-Embedding-Size | {512, 640} | 512 / 1024 |
| #Encoder-Layers | {6} | 6 |
| #Decoder-Layers | {1, 2, 3, 4, 5, 6} | 6 |
| Encoder-QKV-Dim | {512} | 512 / 1024 |
| Decoder-QKV-Dim | {512} | 512 / 1024 |
| #Encoder-Self-Att-Heads (PL) | {4, 8} | 8 / 16 |
| #Decoder-Self-Att-Heads (PL) | {4, 8} | 8 / 16 |
| #Decoder-Cross-Att-Heads (PL) | {4, 8} | 8 / 16 |
| #Decoder-Arbitrary-Att (PL) | {-1, 1, 2} | -1 |
| Encoder-FFN-Intermediate-Size (PL, PE) | {1024, 2048, 3072} | 2048 / 4096 |
| Decoder-FFN-Intermediate-Size (PL, PE) | {1024, 2048, 3072} | 2048 / 4096 |
| #Encoder-Experts (PL) | {1, 2, · · · M} | - |
| #Decoder-Experts (PL) | {1, 2, · · · M} | - |
Encoder-Embedding-Size {512, 640} 512 / 1024
Decoder-Embedding-Size {512, 640} 512 / 1024 #Encoder-Layers {6} 6
#Decoder-Layers {1, 2, 3, 4, 5, 6} 6
Encoder-QKV-Dim {512} 512 / 1024 Decoder-QKV-Dim {512} 512 / 1024 #Encoder-Self-Att-Heads (PL) {4, 8} 8 / 16
#Decoder-Self-Att-Heads (PL) {4, 8} 8 / 16
#Decoder-Cross-Att-Heads (PL) {4, 8} 8 / 16 #Decoder-Arbitrary-Att (PL) {-1, 1, 2} -1 Encoder-FFN-Intermediate-Size (PL, PE) {1024, 2048, 3072} 2048 / 4096 Decoder-FFN-Intermediate-Size (PL, PE) {1024, 2048, 3072} 2048 / 4096
#Encoder-Experts (PL) {1, 2, *· · ·* M} -
#Decoder-Experts (PL) {1, 2, *· · ·* M} -
Table 2: Search space of AutoMoE compared to manually configured Transformer Base / Big. 'PL' and 'PE' refer to
per layer and per expert search dimensions. Decoder arbitrary attn. searches last k encoder layers to attend for each
decoder layer. FFN size varies across layers and experts. M denotes maximum experts per layer.
tasks like NMT, and AutoDistil (Xu et al., 2022a)
for understanding tasks like those in the GLUE
benchmark (Wang et al., 2018). Table 2 shows our search space. We demonstrate our heterogeneous MoE search to perform better than both manual and NAS-searched architectures in the dense space.
## 3.2 Supernet Training For Moe
AutoMoE leverages the idea of Supernet training from prior works (Cai et al., 2020; Xu et al., 2022a; Wang et al., 2020) in Neural Architecture Search that were developed for standard non-MoE architectures. We extend Supernet training to the search space for MoE's by incorporating experts, gating and routing protocols. Typically, a Supernet consists of thousands of subnetworks that are all jointly trained via weight-sharing. The Supernet for AutoMoE is the largest sparsely activated MoE
in the search space. It consists of the maximum number of experts (M) placed in every layer of the Transformer in both encoder and decoder. Each expert FFN has the maximum intermediate hidden size in the search space. Similar principles apply to the non-expert dense modules initialized with corresponding full dimension.
The Supernet is trained with the following steps:
(i) sample a candidate architecture randomly from the search space (Guo et al., 2020); (ii) train the sampled architecture by extracting the common portion of weights from different layers in the Supernet (i.e., by weight sharing) for one training step on the task; (iii) repeat steps (i) and (ii) until the training budget is exhausted. Once the Supernet training converges, we can obtain a quick accuracy estimate for a candidate architecture (i.e. subnetwork) by extracting its shared weights from the Supernet and evaluating on the validation set.
The key challenge here is to build weight sharing techniques for MoE components, which include:
(i) *router*: a neural network that is trained to route each token (of 'embedding size') in an incoming example to exactly one expert (out of M experts)
for top-1 routing; (ii) *FFN expert*: a standard Transformer FFN block that has unique weights and is learned independently. AutoMoE's expert layers follow the Switch Transformer (Fedus et al., 2022b)
specification. For subnetwork extraction from the Supernet, AutoMoE extracts front rows and front columns of the Supernet's router weight matrix, corresponding to the subnet design. For example, consider the Supernet's router to be designed for 4 experts and 640 embedding size with the shape of the router weight matrix as 4 × 640. Consider a sampled subnet during Supernet training to consist of 3 < 4 experts and 512 < 640 embedding size with the subnet's router matrix as 3×512. To populate this matrix, we extract the first 3 rows and first 512 columns from the Supernet's weight matrix (as illustrated in Figure 2 (a)). Such a weight sharing technique allows us to design hetegogeneous MoE
architectures with varying number of experts in each Transformer layer.
AutoMoE also extracts front rows and front columns from the weight matrices of each FFN
expert from the Supernet, corresponding to the subnet design. For the previous example, assume the intermediate FFN size of each expert in the Supernet to be 3072 (shape of weight matrix for first FFN layer is 3072 × 640 and second FFN layer is 640 × 3072). Assume the sampled subnet to be designed for 2 experts with intermediate FFN size of one expert to be 2048 while the other to be 1024.
For the first expert, the weight matrices of the subnet of shape 2048 × 512 (Input) and 512 × 2048
(Output) are extracted from the first 2048 rows, 512 columns (Input) and first 512 rows, 2048 columns
![4_image_1.png](4_image_1.png)
![4_image_0.png](4_image_0.png)
(a) RouterMax. Embedding Size (e.g., 640)
(Output) of the corresponding Supernet weights.
For the second expert, the weight matrices of shape 1024 × 512 (Input) and 512 × 1024 (Output) are extracted from the first 1024 rows, 512 columns
(Input) and first 512 rows, 1024 columns (Output)
of the corresponding Supernet weights. This example is illustrated in Figure 2 (b). The subnet extraction technique does not extract weights from the third and fourth experts of the Supernet as the subnet is designed to have only two experts (not shown in the figure). Such a weight sharing technique allows us to design architectures with varying intermediate FFN size for each expert. Additional techniques for improving expert capacity such as stacking FFNs, and techniques for improving Supernet performance with sandwich sampling (Yu et al., 2019), inplace knowledge distillation (Yu et al., 2019), gradient conflict reduction (Gong et al., 2022) are left for future work.
## 3.3 Searching For Efficient Moe Subnetwork With Computational Constraint
AutoMoE search is based on an evolutionary algorithm that takes the hardware computational constraint (e.g., CPU latency ≤ 600ms) as input and aims to identify the MoE subnetwork from the Supernet which achieves maximum accuracy for the task while satisfying the constraint. The algorithm works by sampling an initial set of MoE candidate architectures randomly from the Supernet; evolving the top architectures iteratively by mutation; followed by crossover; until the search iterations are exhausted. Candidate MoE architectures are easily ranked by the Supernet performance estimator based on the validation score for the task.
![4_image_2.png](4_image_2.png)
Latency estimate for each architecture is obtained by measuring the latency directly on the target device. The standard approach measures gold latency for forward propagation of a batch of examples for a large number (e.g., 300) of passes and then computes the truncated mean (after removing bottom and top 10% outlier latencies). This latency estimation can be costly given the large space of candidate architectures. To overcome this challenge, AutoMoE uses *partially gold latency*, which is obtained by forward propagation of a batch of examples for a small number (e.g., 100) of passes and then computing truncated mean. After the search is completed, the MoE architecture with the highest performance is selected as the optimal one.
## 3.4 Training Efficient Moe Sub-Transformer
Once the optimal MoE architecture is identified, we train the model weights for the final architecture to convergence for the same number of training steps as our baseline models for a fair comparison.
## 4 Experiments Datasets And Evaluation Metrics.
We evaluate AutoMoE on standard machine translation benchmarks: WMT'14 En-De, WMT'14 EnFr and WMT'19 En-De with dataset statistics in
| Dataset | Network | #Active Params (M) | Sparsity (%) | FLOPs (G) | BLEU | GPU hours | Latency (ms) |
|------------------------------|-----------------|----------------------|----------------|-------------|--------|-------------|----------------|
| WMT'14 En-De Transformer-Big | Dense | 176 | 0 | 10.6 (1×) | 28.4 | 184 | 2199 (1×) |
| SwitchTransformer-Big | Sparse | 176 | 36 | 10.6 (1×) | 28.8 | 236 | |
| Evolved Transformer | NAS over Dense | 47 | 0 | 2.9 (3.7×) | 28.2 | 2,192,000 | - |
| HAT | NAS over Dense | 56 | 0 | 3.5 (3×) | 28.2 | 264 | 669 (3.3×) |
| Random Search | NAS over Sparse | 42 | 21 | 2.2 (4.8×) | 27.3 | 126 | 416 (5.3×) |
| AutoMoE (6 Experts) | NAS over Sparse | 45 | 62 | 2.9 (3.7×) | 28.2 | 224 | 504 (4.4×) |
| WMT'14 En-Fr Transformer-Big | Dense | 176 | 0 | 10.6 (1×) | 41.2 | 240 | 2199 (1×) |
| SwitchTransformer-Big | Sparse | 176 | 36 | 10.6 (1×) | 42.3 | 234 | |
| Evolved Transformer | NAS over Dense | 175 | 0 | 10.8 (1×) | 41.3 | 2,192,000 | - |
| HAT | NAS over Dense | 57 | 0 | 3.6 (2.9×) | 41.5 | 248 | 723 (3×) |
| Random Search | NAS over Sparse | 42 | 21 | 2.2 (4.8×) | 40.3 | 130 | 416 (5.3×) |
| AutoMoE (6 Experts) | NAS over Sparse | 46 | 72 | 2.9 (3.7×) | 41.6 | 236 | 547 (4×) |
| AutoMoE (16 Experts) | NAS over Sparse | 135 | 65 | 3.0 (3.5×) | 41.9 | 672 (3.3×) | |
| WMT'19 En-De Transformer-Big | Dense | 176 | 0 | 10.6 (1×) | 46.1 | 184 | 2199 (1×) |
| SwitchTransformer-Big | Sparse | 176 | 36 | 10.6 (1×) | 47.0 | 223 | |
| HAT | NAS over Dense | 63 | 0 | 4.1 (2.6×) | 45.8 | 264 | 758 (2.9×) |
| Random Search | NAS over Sparse | 42 | 21 | 2.2 (4.8×) | 43.7 | 126 | 416 (5.3×) |
| AutoMoE (2 Experts) | NAS over Sparse | 45 | 41 | 2.8 (3.8×) | 45.5 | 248 | 558 (3.9×) |
| AutoMoE (16 Experts) | NAS over Sparse | 69 | 81 | 3.2 (3.3×) | 45.9 | 656 (3.3×) | |
Table 3. We use pre-processed datasets and evaluation setup from (Wang et al., 2020). We report BLEU score (Papineni et al., 2002) as a performance metric with beam of size 5 and a length penalty of 0.6 (for WMT).
Baselines. We compare AutoMoE against both manually designed and NAS-searched architectures.
For **manual baselines**, we consider: (a) densely activated Transformers (Vaswani et al., 2017) with no experts; (b) sparsely activated MoE with homogeneous experts (i.e. same number and FFN size)
placed in every other layer (Fedus et al., 2022b; Kim et al., 2021; Zuo et al., 2022; Du et al., 2022; Artetxe et al., 2021).
For **NAS baselines**, we consider (c) HAT (Wang et al., 2020), which is a Supernet-based state-of-theart NAS framework for identifying efficient dense sub-Transformers for neural machine translation
(same task setting as ours); and (d) Evolved Transformer (So et al., 2019) which is one of the earlier works on finding efficient dense sub-Transformers with evolution-based architecture search. *Note that* both the NAS baselines apply only to dense nonMoE transformers, and AutoMoE is the first work to leverage NAS to identify efficient sparse MoE subtransformers. Finally, we consider (e) AutoMoE
with Random Search (typically treated as a strong baseline for NAS) that samples an MoE subnetwork
(given latency constraints) randomly from AutoMoE
search space and trains it till convergence.
Training configurations and search space. All the baselines and AutoMoE including the Supernet and final model are trained with the same setting for fair comparison. All the models are trained for 40K steps, with a warmup of 10K steps from 10−7to 10−3and use cosine annealing to 10−7 for the rest of the steps. All models are trained using fairseq toolkit (Ott et al., 2019) with an effective batch size of 524K tokens on 16 V100 GPUs.
All the NAS baselines have the same search space for dense Transformer modules (e.g., number of decoder layers, q-k-v dimension, attention heads, etc.) with AutoMoE further incorporating MoE relevant aspects (e.g., experts, gating, routing, etc.)
in the search space. The number of encoder layers is kept fixed for all the NAS baselines including AutoMoE since the latency is primarily determined by the decoders for autoregressive generation (as we discuss in Section 5.2).
Evolutionary search setup. For performance estimation, we monitor the validation loss of subnets on the NMT task. We compute latency by measuring the time taken to perform translation from a source sentence to a target sentence with same desired input/output length (30 for WMT) and original beam settings (see Section 4) on target device
![6_image_0.png](6_image_0.png)
(Intel Xeon CPU). We measure latency 300 times for gold (to report final metrics) and 100 times for partially gold (during evolutionary search) respectively; discard top and bottom 10% (outlier latency)
and compute mean of the rest. Hyper-parameter settings for evolutionary search include: 15 as iterations, 125 as population size, 25 as parents' size, 50 as mutation population size with mutation probability of 0.3 and 50 as crossover population size.
Unless otherwise stated, latency constraint for all experiments is set to 600ms.
## 5 Results 5.1 Automoe Vs. Baseline Performance
Table 4 presents a comparison of AutoMoE with baselines on several computational metrics and task performance. We report the number of parameters without embedding weights, and FLOPs without the last decoding layer for all the models, consistent with (Wang et al., 2020) evaluation.
AutoMoE-generated sparse MoE sub-Transformers obtain 4× reduction in FLOPs over both manually designed (densely activated) Transformer-Big, and
(sparsely activated) MoE SwitchTransformer-Big with experts in every layer, and equivalent inference speedups on CPU. Compared to NAS baselines like Evolved Transformer (So et al., 2019)
and HAT (Wang et al., 2020) that generate densely activated sub-Transformers, AutoMoE improves on FLOPs and latency by 2.4× and 1.3× respectively with parity in BLEU score on aggregate. Notably, Supernet-based AutoMoE and HAT have massively reduced amortized training cost (GPU hours) compared to Evolved Transformer with progressive evolutionary search. AutoMoE with Random Search, a strong NAS baseline, obtains the best speedup but with significant performance regression.
Compared to all other models (both dense and sparse), we observe AutoMoE to generate networks with high sparsity resulting in massively reduced active parameters and FLOPs. For the NAS models, we train the top-2 sub-Transformers in the Pareto and report the one with the best trade-off in BLEU
vs. FLOPs on the validation set. Maximum experts for the best performance vary for different tasks, with 6 experts for WMT'14 En-De, 16 experts for WMT'14 En-Fr and WMT'19 En-De - given the latter two datasets are 10× larger than the former.
## 5.2 Analysis
Decoder layers vs. FLOPs. Figure 3 (a) shows the average FLOPs for several AutoMoE architectures with different decoder layers as obtained during our search (varying from 3 to 6) from the Pareto, and baseline models. Notice that the FLOPs increase with increase in decoder layers, given the auto-regressive nature of NMT tasks which require generating tokens sequentially. In contrast to manually designed Transformers with 6 decoder layers
(both dense and sparsely activated MoE variants),
AutoMoE- and HAT-searched architectures reduce the number of decoder layers with a resulting decrease in both FLOPs and latency. This is also evident in Figure 3 (e) which shows that decoder latency dominates the total inference latency for all
| Model | Encoder | Decoder | | |
|-----------------------------------------|----------------------|----------------------------------------------------------------------------------------------------------------------------------------|--------------------|-----------------------------------------------------|
| Dataset | #Experts per layer | Expert FFN Inter Size | #Experts per layer | Expert FFN Inter Size |
| Std-expert WMT'14 En-De | 5-1-1-1-2-1 | 3072-3072-3072-3072-2048-3072 | 1-1-1-1 | 3072-3072-3072-3072 |
| WMT'14 En-Fr | 1-4-2-6-5-5 | 3072-3072-3072-3072-3072-3072 | 2-1-1-3 | 3072-3072-3072-3072 |
| WMT'19 En-De | 1-1-2-1-2-1 | 3072-3072-3072-3072-3072-2048 | 1-1-1-2 | 3072-3072-3072-3072 |
| Fract-expert WMT'14 En-De | 3-2-3-4-1-3 | [2048-3072-2048]-[3072-1024]-[3072-3072- | 3-1-1-1 | [3072-1024-2048]-3072-3072- |
| 1024]-[3072-1024-3072-2048]-3072-[3072- | 3072 | | | |
| 1024-3072] | | | | |
| WMT'14 En-Fr | 6-2-3-4-4-5 | [2048-1024-2048-1024-1024-3072]-[2048- 2048]-[3072-3072-2048]-[3072-3072-2048- 3072]-[3072-1024-1024-2048]-[2048-3072- 3072-2048-2048] | 2-1-4-2 | [3072-3072]-3072-[3072- 3072-3072-2048]-[3072-2048] |
| WMT'19 En-De | 2-3-1-2-6-1 | [3072-3072]-[3072-3072-3072]-3072-[3072- | 2-4-1-1 | [3072-3072]-[3072-1024- |
| 2048]-[3072-1024-2048-3072-1024-2048]- | 2048-3072]-3072-3072 | | | |
| 3072 | | | | |
Table 5: AutoMoE-generated Pareto-optimal architectures for different datasets. FFN intermediate sizes for fractional experts (i.e. varying expert sizes within each layer) are enclosed within square brackets.
## The Models By More Than 90%.
Expert distribution in encoder vs. decoder. Figure 3 (b) plots the number of encoder experts as ratio of total experts for AutoMoE-generated subTransformers. We observe that AutoMoE assigns significantly larger number of experts to encoder as compared to the decoder. As a result, encoders have much higher capacity (i.e., encoder parameters as a proportion of overall parameters) than decoders. This correlates with the earlier observation that models with higher encoder layers compared to decoder layers enjoy better latency-performance trade-off (Kasai et al., 2021). Our findings from AutoMoE designed architectures indicate that the number of layers and experts are two knobs that jointly help in modulating encoder capacity and decoder latency to design efficient MoE.
## Expert Distribution In Different Layers. Figures 3
(c) and (d) show the percentage of experts allocated to different layers for encoders and decoders
- averaged over several sampled architectures from AutoMoE Supernet. Notice that the middle encoder layers (3 rd, 5 th) are allocated the maximum number of experts, while the first layer receives the least.
The trend reverses for decoder, with the first layer receiving most experts with gradual reduction in expert allocation. This is also consistent with keeping decoders light by dropping layers to reduce latency; while compensating for the reduced capacity with increased experts in the first few layers.
Latency vs. FLOPs as constraint for search. Ta-1We use same hyper-parameters for all models with no tuning (provided in code). Given 40K training steps for each model and no tuning, MoE numbers may not be comparable to SOTA numbers which typically train for more steps. HAT and Evol. Transformer numbers are reported from (Wang et al.,
2020). We follow their evaluation and reporting protocol.
| Search Constraint | BLEU | FLOPs (G) | Latency (ms) |
|----------------------|--------|-------------|----------------|
| Latency ≤ 200ms HAT | 41.45 | 3.6 | 212 |
| AutoMoE (2 Experts) | 41.23 | 2.9 | 176 |
| AutoMoE (4 Experts) | 41.22 | 3.0 | 198 |
| FLOPs ≤ 3 GFLOPs HAT | 40.89 | 3.0 | 158 |
| AutoMoE (2 Experts) | 41.09 | 3.0 | 216 |
| AutoMoE (4 Experts) | 41.10 | 3.0 | 229 |
Table 6: Impact of latency and FLOPs constraints on WMT'14 En-Fr dataset. Latency is computed on 1 NVIDIA V100 GPU.
ble 6 presents the impact of latency and FLOPs as computational constraints on the performanceefficiency trade-off. Constraining FLOPs results in models that fully exhaust the FLOPs budget; while leading to higher latency. On the other hand, constraining latency tends to under-utilize the budget leading to relatively superior FLOPs and latency, providing a stricter control.
Pareto-optimal AutoMoE **generated MoE architectures.** Table 5 shows sparsely activated MoE
architectures designed by two variants of AutoMoE
('std-expert': expert FFN size same in each layer and variable across; 'fract-expert': fully heterogeneous expert size) for different datasets with the best trade-off in BLEU vs. latency. On aggregate 71% of the experts are allocated to the encoder compared to the decoder. Meanwhile, 70% of the expert layers in 'fract-expert' architectures have 2 or more experts, out of which more than 75%
of the expert layers have varying capacities (i.e.,
experts with different FFN intermediate size). Figures 4, 5, 6 in Appendix show full architecture
(embedding size, layers, heads, experts, placement, sizes, etc.) of AutoMoE subnets on WMT14 En-De,
| Search Space Variation | BLEU | FLOPs |
|-----------------------------------------------------------------|-----------------------------------------|---------|
| HAT | 28.2 | 3.5G |
| AutoMoE (2 Experts) w/ fixed encoder layers | 28.2 | 2.9G |
| Varying number of encoder layers HAT w/ #Encoder-Layers ∈ {1-6} | 28.1 | 3.4G |
| AutoMoE (2 Experts) w/ #Encoder-Layers ∈ {1-6} | 28.3 | 3.7G |
| AutoMoE (2 Experts) w/ manually designed homogeneous experts 1-2-1-2-1-2 | 28.3 | 3.5G |
| 1-1-1-2-2-2 | 28.3 | 3.8G |
| 2-2-2-1-1-1 | 28.3 | 3.1G |
| AutoMoE w/ Identity Expert FFN size ∈ {0, 3072} | 28.1 | 2.7G |
| Table 7: | Variations in AutoMoE's search space on | |
## Wmt14 En-Fr And Wmt19 En-De Respectively.
MoE Search space variations. Table 7 presents the impact of search space choices on MoE efficiency and performance trade-off. The first variation is to make '\#Encoder Layers' an elastic search dimension. Note that both HAT and AutoMoE consider the number of encoder layers to be fixed (refer to Table 2). We observe that varying encoder layers has a relatively poor trade-off on model performance vs efficiency as compared to varying decoder layers, re-inforcing our prior observations on the importance of encoder capacity and depth.
In the second variation (see third major row), we fix the expert architecture (with 2 experts manually placed uniformly) in the search space and only search for standard Transformer hyper-parameters.
Observe that AutoMoE-designed models have better FLOPs than such manually designed ones.
The last variation introduces identity or dummy experts (i.e., expert with 0 intermediate FFN size, equivalent to identity operation). This explores the idea that we can *skip* the computation for some of the tokens based on context rather than always forcing them through an FFN. We observe identity experts to marginally hurt the performance but significantly reduce FLOPs (see last major row).
## 6 Conclusion
AutoMoE is the first framework to design heterogeneous MoE's under computational constraints. It supports adaptive computation i.e. variable compute for different inputs with variable-size experts.
It leverages NAS to explore a heterogeneous search space with variable number of experts, sizes, and placement choices; alongside other standard Transformer architectural hyper-parameters. AutoMoE
generated MoE subnetworks reduce FLOPs and latency over both manually designed and NASsearched architectures on benchmark MT tasks.
## 7 Limitations
Given our focus on finding efficient MoE models under computational constraints, AutoMoE search space and evaluation has been restricted in scale to big-sized Transformer models for benchmark MT
tasks. A natural extension of this work is to explore the limits of MoE models like SwitchTransformers (Fedus et al., 2022b) and GShard (Lepikhin et al., 2020) that are significantly larger containing billions to trillions of parameters; as well as designing sparse and transferable efficient expert models (Zoph et al., 2022) for diverse types of tasks like reasoning, summarization and understanding.
The limitations of this work are as follows:
1. Sandwich sampling (Yu et al., 2019), inplace knowledge distillation (Yu et al., 2019), and gradient conflict reduction (Gong et al., 2022) are popular techniques to improve the training procedure of supernet. It would be interesting to study the impact of these techniques to improve AutoMoE's supernet.
2. AutoMoE uses the hidden dimension of intermediate feedforward network (FFN) to modulate the capacity of each expert. It would be interesting to study other techniques to modulate expert capacity such as stacking variable number of hidden layers in FFN.
3. The backbone of AutoMoE's supernet uses Switch Transformer, which adds FFN based expert layers and routes each token to exactly one expert (top-1 routing). It would be interesting to: (i) search for the number of tokens to route, and (ii) search for the Transformer component (e.g., FFN, self-attention projection layers, LayerNorm) to add expert layers.
4. AutoMoE's search space contains classical Transformer components such as multi-head attention and FFN layers. It would be interesting to add components that are efficient by design such as convolutional layer, FLASH (Hua et al., 2022), and g-MLP (Liu et al., 2021).
## Acknowledgements
MAM acknowledges support from Canada Research Chairs (CRC), the Natural Sciences and Engineering Research Council of Canada (NSERC;
RGPIN-2018-04267), Canadian Foundation for Innovation (CFI; 37771), and Digital Research Alliance of Canada.2 Lakshmanan's research was supported in part by a grant from NSERC (Canada).
## References
Mikel Artetxe, Shruti Bhosale, Naman Goyal, Todor Mihaylov, Myle Ott, Sam Shleifer, Xi Victoria Lin, Jingfei Du, Srinivasan Iyer, Ramakanth Pasunuru, Giri Anantharaman, Xian Li, Shuohui Chen, Halil Akin, Mandeep Baines, Louis Martin, Xing Zhou, Punit Singh Koura, Brian O'Horo, Jeff Wang, Luke Zettlemoyer, Mona T. Diab, Zornitsa Kozareva, and Ves Stoyanov. 2021. Efficient large scale language modeling with mixtures of experts. *CoRR*,
abs/2112.10684.
Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel Ziegler, Jeffrey Wu, Clemens Winter, Chris Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020.
Language models are few-shot learners. In *Advances in Neural Information Processing Systems*,
volume 33, pages 1877–1901. Curran Associates, Inc.
Han Cai, Chuang Gan, Tianzhe Wang, Zhekai Zhang, and Song Han. 2020. Once for all: Train one network and specialize it for efficient deployment. In *International Conference on Learning Representations*.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In *Proceedings of the 2019 Conference of* the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics.
Chenhe Dong, Guangrun Wang, Hang Xu, Jiefeng Peng, Xiaozhe Ren, and Xiaodan Liang. 2021. EfficientBERT: Progressively searching multilayer perceptron via warm-up knowledge distillation. In *Findings of the Association for Computational Linguistics: EMNLP 2021*, pages 1424–1437, Punta Cana, Dominican Republic. Association for Computational Linguistics.
Nan Du, Yanping Huang, Andrew M Dai, Simon Tong, Dmitry Lepikhin, Yuanzhong Xu, Maxim Krikun, Yanqi Zhou, Adams Wei Yu, Orhan Firat, Barret Zoph, Liam Fedus, Maarten P Bosma, Zongwei Zhou, Tao Wang, Emma Wang, Kellie Webster, Marie Pellat, Kevin Robinson, Kathleen Meier-Hellstern, Toju Duke, Lucas Dixon, Kun Zhang, Quoc Le, Yonghui 2https://alliancecan.ca
Wu, Zhifeng Chen, and Claire Cui. 2022. GLaM:
Efficient scaling of language models with mixtureof-experts. In *Proceedings of the 39th International* Conference on Machine Learning, volume 162 of Proceedings of Machine Learning Research, pages 5547–5569. PMLR.
William Fedus, Jeff Dean, and Barret Zoph. 2022a. A
review of sparse expert models in deep learning.
William Fedus, Barret Zoph, and Noam Shazeer. 2022b.
Switch transformers: Scaling to trillion parameter models with simple and efficient sparsity. Journal of Machine Learning Research, 23(120):1–39.
Jiahui Gao, Hang Xu, Han Shi, Xiaozhe Ren, Philip L. H. Yu, Xiaodan Liang, Xin Jiang, and Zhenguo Li. 2022. Autobert-zero: Evolving BERT backbone from scratch. In *Thirty-Sixth AAAI Conference on* Artificial Intelligence, AAAI 2022, Thirty-Fourth Conference on Innovative Applications of Artificial Intelligence, IAAI 2022, The Twelveth Symposium on Educational Advances in Artificial Intelligence, EAAI
2022 Virtual Event, February 22 - March 1, 2022, pages 10663–10671. AAAI Press.
Chengyue Gong, Dilin Wang, Meng Li, Xinlei Chen, Zhicheng Yan, Yuandong Tian, qiang liu, and Vikas Chandra. 2022. NASVit: Neural architecture search for efficient vision transformers with gradient conflict aware supernet training. In *International Conference* on Learning Representations.
Zichao Guo, Xiangyu Zhang, Haoyuan Mu, Wen Heng, Zechun Liu, Yichen Wei, and Jian Sun. 2020. Single path one-shot neural architecture search with uniform sampling. In *Computer Vision - ECCV 2020 -*
16th European Conference, Glasgow, UK, August 2328, 2020, Proceedings, Part XVI, volume 12361 of Lecture Notes in Computer Science, pages 544–560.
Springer.
Weizhe Hua, Zihang Dai, Hanxiao Liu, and Quoc Le.
2022. Transformer quality in linear time. In *Proceedings of the 39th International Conference on Machine* Learning, volume 162 of *Proceedings of Machine* Learning Research, pages 9099–9117. PMLR.
Mojan Javaheripi, Shital Shah, Subhabrata Mukherjee, Tomasz L. Religa, Caio C. T. Mendes, Gustavo H.
de Rosa, Sebastien Bubeck, Farinaz Koushanfar, and Debadeepta Dey. 2022. Litetransformersearch:
Training-free on-device search for efficient autoregressive language models.
Jungo Kasai, Nikolaos Pappas, Hao Peng, James Cross, and Noah Smith. 2021. Deep encoder, shallow decoder: Reevaluating non-autoregressive machine translation. In International Conference on Learning Representations.
Young Jin Kim, Ammar Ahmad Awan, Alexandre Muzio, Andrés Felipe Cruz-Salinas, Liyang Lu, Amr Hendy, Samyam Rajbhandari, Yuxiong He, and
Hany Hassan Awadalla. 2021. Scalable and efficient moe training for multitask multilingual models.
CoRR, abs/2109.10465.
Sneha Kudugunta, Yanping Huang, Ankur Bapna, Maxim Krikun, Dmitry Lepikhin, Minh-Thang Luong, and Orhan Firat. 2021. Beyond distillation:
Task-level mixture-of-experts for efficient inference.
In Findings of the Association for Computational Linguistics: EMNLP 2021, pages 3577–3599, Punta Cana, Dominican Republic. Association for Computational Linguistics.
Dmitry Lepikhin, HyoukJoong Lee, Yuanzhong Xu, Dehao Chen, Orhan Firat, Yanping Huang, Maxim Krikun, Noam Shazeer, and Zhifeng Chen. 2020.
Gshard: Scaling giant models with conditional computation and automatic sharding. *CoRR*,
abs/2006.16668.
Mike Lewis, Shruti Bhosale, Tim Dettmers, Naman Goyal, and Luke Zettlemoyer. 2021. Base layers:
Simplifying training of large, sparse models. In Proceedings of the 38th International Conference on Machine Learning, volume 139 of *Proceedings* of Machine Learning Research, pages 6265–6274.
PMLR.
Hanxiao Liu, Zihang Dai, David So, and Quoc V Le.
2021. Pay attention to mlps. In Advances in Neural Information Processing Systems, volume 34, pages 9204–9215. Curran Associates, Inc.
Rui Liu, Young Jin Kim, Alexandre Muzio, and Hany Hassan. 2022. Gating dropout: Communicationefficient regularization for sparsely activated transformers. In *Proceedings of the 39th International* Conference on Machine Learning, volume 162 of Proceedings of Machine Learning Research, pages 13782–13792. PMLR.
Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, and Michael Auli. 2019. fairseq: A fast, extensible toolkit for sequence modeling. In *Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics (Demonstrations)*,
pages 48–53, Minneapolis, Minnesota. Association for Computational Linguistics.
Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In *Proceedings of the* 40th Annual Meeting of the Association for Computational Linguistics, pages 311–318, Philadelphia, Pennsylvania, USA. Association for Computational Linguistics.
Samyam Rajbhandari, Conglong Li, Zhewei Yao, Minjia Zhang, Reza Yazdani Aminabadi, Ammar Ahmad Awan, Jeff Rasley, and Yuxiong He. 2022.
DeepSpeed-MoE: Advancing mixture-of-experts inference and training to power next-generation AI
scale. In Proceedings of the 39th International Conference on Machine Learning, volume 162 of
Proceedings of Machine Learning Research, pages 18332–18346. PMLR.
Stephen Roller, Sainbayar Sukhbaatar, Arthur Szlam, and Jason E Weston. 2021. Hash layers for large sparse models. In Advances in Neural Information Processing Systems.
Tal Schuster, Adam Fisch, Jai Gupta, Mostafa Dehghani, Dara Bahri, Vinh Q. Tran, Yi Tay, and Donald Metzler. 2022. Confident adaptive language modeling.
Noam Shazeer, *Azalia Mirhoseini, *Krzysztof Maziarz, Andy Davis, Quoc Le, Geoffrey Hinton, and Jeff Dean. 2017. Outrageously large neural networks: The sparsely-gated mixture-of-experts layer.
In *International Conference on Learning Representations*.
David So, Quoc Le, and Chen Liang. 2019. The evolved transformer. In Proceedings of the 36th International Conference on Machine Learning, volume 97 of *Proceedings of Machine Learning Research*, pages 5877–
5886. PMLR.
David So, Wojciech Manke, Hanxiao Liu, Zihang Dai, ´
Noam Shazeer, and Quoc V Le. 2021. Searching for efficient transformers for language modeling. In Advances in Neural Information Processing Systems, volume 34, pages 6010–6022. Curran Associates, Inc.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Ł ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In *Advances in Neural Information Processing Systems*, volume 30. Curran Associates, Inc.
Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel Bowman. 2018. GLUE:
A multi-task benchmark and analysis platform for natural language understanding. In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 353–355, Brussels, Belgium. Association for Computational Linguistics.
Hanrui Wang, Zhanghao Wu, Zhijian Liu, Han Cai, Ligeng Zhu, Chuang Gan, and Song Han. 2020. HAT:
Hardware-aware transformers for efficient natural language processing. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7675–7688, Online. Association for Computational Linguistics.
Dongkuan Xu, Subhabrata (Subho) Mukherjee, Xiaodong Liu, Debadeepta Dey, Wenhui Wang, Xiang Zhang, Ahmed H. Awadallah, and Jianfeng Gao.
2022a. Autodistil: Few-shot task-agnostic neural architecture search for distilling large language models.
ArXiv.
Jin Xu, Xu Tan, Renqian Luo, Kaitao Song, Jian Li, Tao Qin, and Tie-Yan Liu. 2021. Nas-bert: Task-agnostic and adaptive-size bert compression with neural architecture search. In *Proceedings of the 27th ACM*
SIGKDD Conference on Knowledge Discovery &
Data Mining, KDD '21, page 1933–1943, New York, NY, USA. Association for Computing Machinery.
Jin Xu, Xu Tan, Kaitao Song, Renqian Luo, Yichong Leng, Tao Qin, Tie-Yan Liu, and Jian Li. 2022b. Analyzing and mitigating interference in neural architecture search. In Proceedings of the 39th International Conference on Machine Learning, volume 162 of Proceedings of Machine Learning Research, pages 24646–24662. PMLR.
Yichun Yin, Cheng Chen, Lifeng Shang, Xin Jiang, Xiao Chen, and Qun Liu. 2021. AutoTinyBERT: Automatic hyper-parameter optimization for efficient pre-trained language models. In *Proceedings of the* 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 5146–5157, Online. Association for Computational Linguistics.
Jiahui Yu, Linjie Yang, Ning Xu, Jianchao Yang, and Thomas Huang. 2019. Slimmable neural networks.
In *International Conference on Learning Representations*.
Seniha Esen Yuksel, Joseph N. Wilson, and Paul D.
Gader. 2012. Twenty years of mixture of experts.
IEEE Transactions on Neural Networks and Learning Systems, 23(8):1177–1193.
Barret Zoph, Irwan Bello, Sameer Kumar, Nan Du, Yanping Huang, Jeff Dean, Noam Shazeer, and William Fedus. 2022. St-moe: Designing stable and transferable sparse expert models.
Simiao Zuo, Xiaodong Liu, Jian Jiao, Young Jin Kim, Hany Hassan, Ruofei Zhang, Jianfeng Gao, and Tuo Zhao. 2022. Taming sparsely activated transformer with stochastic experts. In International Conference on Learning Representations.
## A Appendix A.1 Full Architecture Design
Figure 4, 5 and 6 present the full architecture design of pareto-efficient architectures generated by AutoMoE.
## A.2 Evolutionary Search - Stability
We study the initialization effects on the stability of the pareto front outputted by the evolutionary search for HAT. Table 8 displays sampled (direct)
BLEU and latency of the models in the pareto front for different seeds on the WMT'14 En-Fr task. The differences in the latency and BLEU across seeds are mostly marginal. This result highlights that the pareto front outputted by the evolutionary search is largely stable for HAT.
![11_image_0.png](11_image_0.png)
| Supernet / Pareto Front | Model 1 | Model 2 | Model 3 | | | | |
|---------------------------|-----------|-----------|-----------|--------|---------|--------|-------|
| Seed | Latency | BLEU | Latency | BLEU | Latency | BLEU | |
| HAT (SPOS) | 1 | 96.39 | 38.94 | 176.44 | 39.26 | 187.53 | 39.16 |
| HAT (SPOS) | 2 | 98.91 | 38.96 | 159.87 | 39.20 | 192.11 | 39.09 |
| HAT (SPOS) | 3 | 100.15 | 38.96 | 158.67 | 39.24 | 189.53 | 39.16 |
Table 8: Stability of the evolutionary search w.r.t. different seeds on the WMT'14 En-Fr task. Search quality is measured in terms of latency and sampled (direct) supernet performance (BLEU) of the models in the pareto front.
## A.3 Evolutionary Search - Algorithm
We present the pseudo code of the evolutionary search algorithm proposed by HAT in Algorithm 1.
This algorithm is also adopted by AutoMoE.
Algorithm 1 Evolutionary search algorithm for Neural architecture search.
Input: supernet, latency-predictor, num-iterations, num-population, num-parents, num-mutations, num-crossover, mutate-prob, latency-constraint Output: best-architecture 1: *popu* ← num-population random samples from the search space 2: for *iter* ← 1 to num-iterations do 3: cur-parents ← top 'num-parents' architectures from *popu* by supernet's validation loss 4: cur-mutate-popu = {}
5: for mi ← 1 to num-mutations do 6: cur-mutate-gene ← mutate a random example from *popu* with mutation probability mutate-prob 7: if cur-mutate-gene satisfies latency-constraint via latency-predictor **then**
8: cur-mutate-popu = cur-mutate-popu ∪ cur-mutate-gene 9: cur-crossover-popu = {}
10: for ci ← 1 to num-crossover do 11: cur-crossover-gene ← crossover two random examples from *popu* 12: if cur-crossover-gene satisfies latency-constraint via latency-predictor **then**
13: cur-crossover-popu = cur-crossover-popu ∪ cur-crossover-gene 14: *popu* = cur-parents ∪ cur-mutate-popu ∪ cur-crossover-popu 15: return top architecture from *popu* by supernet's validation loss
![14_image_0.png](14_image_0.png)
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
✓ A1. Did you describe the limitations of your work?
7
A2. Did you discuss any potential risks of your work?
Not applicable. Left blank.
✓ A3. Do the abstract and introduction summarize the paper's main claims?
1
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✓ **Did You Use Or Create Scientific Artifacts?** Abstract
✓ B1. Did you cite the creators of artifacts you used?
Abstract
B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
Not applicable. Left blank.
B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
Not applicable. Left blank.
B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
Not applicable. Left blank.
B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
Not applicable. Left blank.
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
4
## C ✓ **Did You Run Computational Experiments?** 5
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
5 The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance.
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
5
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
5
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
4
## D ✗ **Did You Use Human Annotators (E.G., Crowdworkers) Or Research With Human Participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
No response.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? No response.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
No response.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
No response.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
No response. |
hu-etal-2023-language | Language Agnostic Multilingual Information Retrieval with Contrastive Learning | https://aclanthology.org/2023.findings-acl.581 | Multilingual information retrieval (IR) is challenging since annotated training data is costly to obtain in many languages. We present an effective method to train multilingual IR systems when only English IR training data and some parallel corpora between English and other languages are available. We leverage parallel and non-parallel corpora to improve the pretrained multilingual language models{'} cross-lingual transfer ability. We design a semantic contrastive loss to align representations of parallel sentences that share the same semantics in different languages, and a new language contrastive loss to leverage parallel sentence pairs to remove language-specific information in sentence representations from non-parallel corpora. When trained on English IR data with these losses and evaluated zero-shot on non-English data, our model demonstrates significant improvement to prior work on retrieval performance, while it requires much less computational effort. We also demonstrate the value of our model for a practical setting when a parallel corpus is only available for a few languages, but a lack of parallel corpora resources persists for many other low-resource languages. Our model can work well even with a small number of parallel sentences, and be used as an add-on module to any backbones and other tasks. | # Language Agnostic Multilingual Information Retrieval With Contrastive Learning
Xiyang Hu1, Xinchi Chen2∗, Peng Qi2∗, Deguang Kong2**, Kunlun Liu**2, William Yang Wang2**, Zhiheng Huang**2 1Carnegie Mellon University 2AWS AI Labs [email protected], {xcc,pengqi,kongdegu,kll}@amazon.com [email protected], [email protected]
## Abstract
Multilingual information retrieval (IR) is challenging since annotated training data is costly to obtain in many languages. We present an effective method to train multilingual IR systems when only English IR training data and some parallel corpora between English and other languages are available. We leverage parallel and non-parallel corpora to improve the pretrained multilingual language models' cross-lingual transfer ability. We design a *semantic contrastive loss* to align representations of parallel sentences that share the same semantics in different languages, and a new *language* contrastive loss to leverage parallel sentence pairs to remove language-specific information in sentence representations from non-parallel corpora. When trained on English IR data with these losses and evaluated zero-shot on nonEnglish data, our model demonstrates significant improvement to prior work on retrieval performance, while it requires much less computational effort. We also demonstrate the value of our model for a practical setting when a parallel corpus is only available for a few languages, but a lack of parallel corpora resources persists for many other low-resource languages. Our model can work well even with a small number of parallel sentences, and be used as an add-on module to any backbones and other tasks.
## 1 Introduction
Information retrieval (IR) is an important natural language processing task that helps users efficiently gather information from a large corpus (some representative downstream tasks include question answering, summarization, search, recommendation, etc.), but developing effective IR systems for all languages is challenging due to the cost of, and therefore lack of, annotated training data in many languages. While this problem is not unique to IR
Work done during Xiyang's internship at AWS AI.
* denotes equal contribution.
Code is at: https://github.com/xiyanghu/multilingualIR.
![0_image_0.png](0_image_0.png)
Figure 1: (a) The *semantic contrastive loss* encourages the embeddings of parallel pairs, i.e. sentences that have the same semantics but from different languages, to be close to each other, and away from the rest negative samples - sentences with different semantics. (b) The language contrastive loss incorporates the non-parallel corpora in addition to the parallel ones. It encourages the distances from a sentence representation, which can be a sample from both the parallel corpora and the nonparallel corpora, to the two embeddings of a paralleled pair to be the same.
research (Joshi et al., 2020), constructing IR data is often more costly due to the need to either translate a large text corpus or gather relevancy annotations, or both, which makes it difficult to generalize IR
models to lower-resource languages.
One solution to this is to leverage the pretrained multilingual language models to encode queries and corpora for multilingual IR tasks (Zhang et al.,
2021; Sun and Duh, 2020). One series of work on multilingual representation learning is based on training a masked language model, some with the next sentence prediction task, on monolingual corpora of many languages, such as mBERT and XLM-R (Conneau et al., 2020). They generally do not explicitly learn the alignment across different languages and do not perform effectively in empirical IR experiments. Other works directly leverage multilingual parallel corpora or translation pairs to explicitly align the sentences in two languages, such as InfoXLM (Chi et al., 2021) and LaBSE
(Feng et al., 2022).
In this work, we propose to use the *semantic contrastive loss* and the *language contrastive loss* to 9133 jointly train with the information retrieval objective, for learning cross-lingual representations that encourage efficient lingual transfer ability on retrieval tasks. Our semantic contrastive loss aims to align the embeddings of sentences that have the same semantics. It is similar to the regular InfoNCE
(Oord et al., 2018) loss, which forces the representations of parallel sentence pairs in two languages to be close to each other, and away from other negative samples. Our language contrastive loss aims to leverage the non-parallel corpora for languages without any parallel data, which are ignored by the semantic contrastive loss. It addresses the practical scenario wherein parallel corpora are easily accessible for a few languages, but the lack of such resources persists for many low-resource languages. The language contrastive loss encourages the distances from a sentence representation to the two embeddings of a paralleled pair to be the same.
Figure 1 illustrates how the two losses improve language alignment. In experiments, we evaluate the zero-shot cross-lingual transfer ability of our model on monolingual information retrieval tasks for 10 different languages. Experimental results show that our proposed method obtains significant gains, and it can be used as an add-on module to any backbones. We also demonstrate that our method is much more computationally efficient than prior work. Our method works well with only a small number of parallel sentence pairs and works well on languages without any parallel corpora.
## 2 Background: Multilingual Dpr
Dense Passage Retriever (DPR) (Karpukhin et al.,
2020) uses a dual-encoder structure to encode the queries and passages separately for information retrieval. To generalize to multilingual scenarios, we replace DPR's original BERT encoders with a multilingual language model XLM-R (Conneau et al., 2020) to transfer English training knowledge to other languages.
Concretely, given a batch of N query-passage pairs (pi
, qi
), we consider all other passages pj, j ̸= i in the batch irrelevant (negative) passages, and optimize the retrieval loss function as the negative log-likelihood of the gold passage:
$$\mathcal{L}_{\mathrm{IR}}=-\frac{1}{N}\sum_{i=1}^{N}$$ $$\log\frac{\exp\left(\mathrm{sim}\left(\mathbf{q}_{i},\mathbf{p}_{i}\right)\right)}{\exp\left(\mathrm{sim}\left(\mathbf{q}_{i},\mathbf{p}_{i}\right)\right)+\sum_{j=1,j\neq i}^{N}\exp\left(\mathrm{sim}\left(\mathbf{q}_{i},\mathbf{p}_{j}\right)\right)}\tag{1}$$
where the similarity of two vectors is defined as sim(u, v) = u⊤v
∥u∥∥v∥
.
## 3 **Contrastive Learning For Cross-Lingual** Generalization
The multilingual dense passage retriever only uses English corpora for training. To improve the model's generalization ability to other languages, we leverage two contrastive losses, *semantic contrastive loss* and *language contrastive loss*. Figure 2 shows our model framework.
Specifically, the *semantic contrastive loss* (Chen et al., 2020a) pushes the embedding vectors of a pair of parallel sentences close to each other, and at the same time away from other in-batch samples that have different semantics. The *language contrastive loss* focuses on the scenario when there is no parallel corpora for some languages, which encourages the distance from a sentence embedding to paralleled embedding pairs to be the same.
## 3.1 Semantic Contrastive Loss
To learn a language-agnostic IR model, we wish to encode the sentences with the same semantics but from different languages to have the same embeddings. For each parallel corpora batch, we do not limit our sample to just one specific language pair. We randomly sample different language pairs for a batch. For example, a sampled batch could contain multiple language pairs of En-Ar, En-Ru, En-Zh, etc. This strategy can increase the difficulty of our contrastive learning and make the training more stable.
Concretely, we randomly sample a mini-batch of 2N data points (N here does not have to be the same value as the N in Section 2). The batch contains N pairs of parallel sentences from multiple different languages. Given a positive pair zi and zj , the embedding vectors of a pair of parallel sentences (*i, j*) from two languages, the rest 2(N − 1) samples are used as negative samples.
The semantic contrastive loss for a batch is:
$$\mathcal{L}_{\text{semaCL}}=-\ \frac{1}{2N}\sum_{\left(i,j\right)}$$ $$\left[\log\frac{\exp\left(\text{sim}\left(\boldsymbol{z}_{i},\boldsymbol{z}_{j}\right)/\tau\right)}{\sum_{k=1,k\neq i}^{2N}\exp\left(\text{sim}\left(\boldsymbol{z}_{i},\boldsymbol{z}_{k}\right)/\tau\right)}+\right.\tag{2}$$ $$\left.\log\frac{\exp\left(\text{sim}\left(\boldsymbol{z}_{j},\boldsymbol{z}_{i}\right)/\tau\right)}{\sum_{k=1,k\neq j}^{2N}\exp\left(\text{sim}\left(\boldsymbol{z}_{j},\boldsymbol{z}_{k}\right)/\tau\right)}\right]$$
where $\tau$ is a temperature hyperparameter.
![2_image_0.png](2_image_0.png)
## 3.2 Language Contrastive Loss
When training multilingual IR systems, we might not always have parallel corpora for all languages of interest. In a realistic scenario, we have easy access to a few high-resource languages' parallel corpora, but no such availability for many lowresource languages. We propose a language contrastive loss to generalize the model's ability to the languages which do not have any parallel corpora.
For a batch B consisting of both parallel corpora P
and non-parallel corpora Q, we denote zi and zj as the embeddings of a pair of parallel sentences
(*i, j*) from two languages. We wish the cosine similarity from any other sentence embedding zk to the two embeddings of a parallel pair to be the same.
Therefore, we minimize the following loss.
$$\mathcal{L}_{\text{langCL}}=-\ \frac{1}{N(N-2)}\sum_{(i,j)\in\mathbb{P}\ k\in(\mathbb{P}\cup\mathbb{O})\setminus\{i,j\}}\tag{3}$$ $$\left[\log\frac{\exp(\text{sim}\left(\boldsymbol{z}_{i},\boldsymbol{z}_{k}\right))}{\exp(\text{sim}\left(\boldsymbol{z}_{i},\boldsymbol{z}_{k}\right))+\exp(\text{sim}\left(\boldsymbol{z}_{j},\boldsymbol{z}_{k}\right))}+\right.$$ $$\left.\log\frac{\exp(\text{sim}\left(\boldsymbol{z}_{j},\boldsymbol{z}_{k}\right))}{\exp(\text{sim}\left(\boldsymbol{z}_{i},\boldsymbol{z}_{k}\right))+\exp(\text{sim}\left(\boldsymbol{z}_{j},\boldsymbol{z}_{k}\right))}\right]$$
The optimum can be reached when sim (zi, zk) =
sim (zj , zk) for all *i, j, k*. Note that the parallel corpus involved is not the target language's parallel corpus. For example, in Formula 3, i and j are two languages that are parallel with each other, and k is a third language (target language) that does not have any parallel corpus with other languages.
## 3.3 **Semantic Vs Language Contrastive Losses**
While both the semantic contrastive loss and language contrastive loss can serve to align the representations of parallel sentences and remove language bias, they achieve this goal differently, one via contrasting against in-batch negative samples, the other using in-batch parallel examples to constrain the target language embeddings. Moreover, a key property of the language contrastive loss is that as long as there is some parallel corpus, we can use this loss function to remove the language bias from representations of sentences where no parallel data exists, which makes it more broadly applicable.
## 4 Training
The two contrastive losses are applied to the passage encoder only. Experiments show that applying them to both the passage encoder and the query encoder would result in unstable optimization, where we see weird jumps in the training loss curves.
The joint loss with the information retrieval loss, the semantic contrastive loss, and the language contrastive loss is
$$\mathcal{L}=\mathcal{L}_{\rm IR}+w_{s}\mathcal{L}_{\rm semaCL}+w_{l}\mathcal{L}_{\rm langCL},\tag{4}$$
where ws and wl are hyperparameters for the semantic contrastive loss and the language contrastive loss weights which need to be tuned adaptively in different tasks.
We train our model using 8 Nvidia Tesla V100 32GB GPUs. We use a batch size of 48. We use the AdamW optimizer with β1 = 0.9, β2 = 0.999 and a learning rate of 10−5. For the three losses LIR,LsemaCL,LlangCL, we sequentially calculate the loss and the gradients. We use ws = 0.01 and wl = 0.001. The hyperparameters are determined through a simple grid search.
## 5 Experiments 5.1 Datasets
Our IR experiments involve two types of datasets:
IR datasets and the parallel corpora.
5.1.1 Information Retrieval In our experiments, we only use English information retrieval corpora (**Natural Questions**), and we evaluate the model's zero-shot transfer ability on other target languages (**Mr.TyDi**).
- **Natural Questions** (Kwiatkowski et al., 2019)
is an English QA dataset. Following Zhang et al. (2021), we use NQ dataset to train IR.
- **Mr.TyDi** is a multilingual dataset for monolingual retrieval (Zhang et al., 2021), which is constructed from a question answering dataset TyDi (Clark et al., 2020). It contains eleven typologically diverse languages, i.e., Arabic
(Ar), Bengali (Bn), English (En), Finnish (Fi),
Indonesian (Id), Japanese (Ja), Korean (Ko),
Russian (Ru), Swahili (Sw), Telugu (Te), Thai
(Th). We do not use Mr.TyDi for IR training.
## 5.1.2 Parallel Corpora
WikiMatrix parallel corpora contains extracted parallel sentences from the Wikipedia articles in 85 different languages (Schwenk et al., 2021). For those languages involved in the Mr.Tydi dataset, the number of parallel pairs between them and the English of the WikiMatrix dataset ranges from 51,000 and 1,019,000. During training, we sample the same number of parallel pairs (50K) for them.
## 5.2 Baseline Models
We apply our contrastive loss functions on three multilingual pretrained language models:
- **XLM-R** (Conneau et al., 2020) is a pretrained transformer-based multilingual language model. It is trained on a corpus from 100 languages only with the Masked language Model (MLM) objective in a Roberta way.
- **InfoXLM** (Chi et al., 2021) uses 42GB parallel corpora to pre-train XLM-R by maximizing mutual information between multilingualmulti-granularity texts.
- **LaBSE** (Feng et al., 2022) pre-trains BERT
with Masked Language Model and Translation Language Model on the monolingual data
| InfoXLM | LaBSE | Our Model | |
|------------------|---------|-------------|--------|
| Batch Size | 2,048 | 8,192 | 48 |
| Training Steps | 200K | 1.8M | 24.54K |
| Training Compute | 347x | 12,518x | 1x |
Table 1: A comparison of our model and baseline models' pre-training for lingual adaptation. Ours actually uses a "co-training" mode rather than "pre-training", so our training steps are the same as the main task.
and bilingual translation pairs. They train the model by 1.8M steps using a batch size of 8192.
In Table 1, we compare the computational efforts needed by each model to improve the language transfer ability. Both InfoXLM and LaBSE require a large-scale pre-training which needs a larger batch size and a larger number of training steps than ours. Our model only requires "co-training" on the parallel corpora along with the main task. In Table 1, we list our model's training steps on the information retrieval task. This comparison indicates that for the retrieval task, our model does not need the costly pre-training as InfoXLM and LaBSE.
## 5.3 Information Retrieval - All Languages Have Parallel Corpora With English
For the information retrieval training, we follow the previous literature (Zhang et al., 2021; Wu et al., 2022) to use an English QA dataset - the Natural Questions dataset (Kwiatkowski et al., 2019) for both training and validation.
We evaluate our model performance on the Mr.TyDi dataset (Zhang et al., 2021) for monolingual query passage retrieval in eleven languages.
We follow Zhang et al. (2021) to use MRR@100 and Recall@100 as metrics.
In this section, we experimented with the setting when we have parallel corpora from English to all other target languages. We tested three different variants of our model using the XLM-R as the backbone:
1. we only include the semantic contrastive loss for the parallel corpora: LIR + wsLsemaCL;
2. we only include the language contrastive loss for the parallel corpora: LIR + wlLlangCL;
3. we use both the semantic contrastive loss and the language contrastive loss: LIR +
wsLsemaCL + wlLlangCL.
Table 2 shows the results of our model and the baseline XLM-R model. We also report the results
| Model | Ar | Bn | En | Fi | Id | Ja | Ko | Ru | Sw | Te | Th | Avg |
|-------------------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|
| XLM-R | 0.335 | 0.345 | 0.275 | 0.302 | 0.368 | 0.274 | 0.275 | 0.287 | 0.231 | 0.298 | 0.403 | 0.308 |
| + semaCL | 0.399 | 0.465 | 0.332 | 0.355 | 0.445 | 0.360 | 0.338 | 0.345 | 0.281 | 0.550 | 0.482 | 0.396 |
| + langCL | 0.402 | 0.437 | 0.338 | 0.335 | 0.425 | 0.339 | 0.320 | 0.329 | 0.265 | 0.600 | 0.453 | 0.386 |
| + semaCL + langCL | 0.404 | 0.465 | 0.338 | 0.346 | 0.430 | 0.333 | 0.320 | 0.341 | 0.266 | 0.516 | 0.477 | 0.385 |
![4_image_0.png](4_image_0.png)
| Model | Ar | Bn | En | Fi | Id | Ja | Ko | Ru | Sw | Te | Th | Avg |
|--------------------------------------------------------------------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|
| Results reported by Wu et al. (2022) XLM-R 0.365 0.374 | 0.275 | 0.318 | 0.395 | 0.299 | 0.304 | 0.306 | 0.274 | 0.346 | 0.401 | 0.333 | | |
| InfoXLM | 0.373 | 0.354 | 0.325 | 0.300 | 0.380 | 0.310 | 0.299 | 0.313 | 0.351 | 0.311 | 0.400 | 0.338 |
| LABSE | 0.372 | 0.504 | 0.314 | 0.309 | 0.376 | 0.271 | 0.309 | 0.325 | 0.394 | 0.465 | 0.374 | 0.365 |
| CCP | 0.426 | 0.457 | 0.359 | 0.372 | 0.462 | 0.377 | 0.346 | 0.360 | 0.392 | 0.470 | 0.489 | 0.410 |
| Results reported by Zhang et al. (2021) BM25 (default) 0.368 0.418 | 0.140 | 0.284 | 0.376 | 0.211 | 0.285 | 0.313 | 0.389 | 0.343 | 0.401 | 0.321 | | |
| BM25 (tuned) | 0.367 | 0.413 | 0.151 | 0.288 | 0.382 | 0.217 | 0.281 | 0.329 | 0.396 | 0.424 | 0.417 | 0.333 |
| Our implementation XLM-R | 0.335 | 0.345 | 0.275 | 0.302 | 0.368 | 0.274 | 0.275 | 0.287 | 0.231 | 0.298 | 0.403 | 0.308 |
| + semaCL | 0.399 | 0.465 | 0.332 | 0.355 | 0.445 | 0.360 | 0.338 | 0.345 | 0.281 | 0.550 | 0.482 | 0.396 |
| InfoXLM | 0.371 | 0.337 | 0.284 | 0.272 | 0.343 | 0.311 | 0.271 | 0.298 | 0.338 | 0.306 | 0.385 | 0.320 |
| + semaCL | 0.375 | 0.413 | 0.331 | 0.314 | 0.406 | 0.321 | 0.292 | 0.318 | 0.299 | 0.304 | 0.427 | 0.345 |
| LaBSE | 0.321 | 0.419 | 0.240 | 0.283 | 0.347 | 0.224 | 0.290 | 0.296 | 0.428 | 0.387 | 0.322 | 0.323 |
| + semaCL | 0.333 | 0.485 | 0.300 | 0.313 | 0.395 | 0.216 | 0.265 | 0.329 | 0.374 | 0.330 | 0.308 | 0.332 |
Table 3: MRR@100 on the monolingual information retrieval task of Mr.TyDi dataset.
of Wu et al. (2022), which propose a model called contrastive context prediction (CCP) to learn multilingual representations by leveraging sentencelevel contextual relations as self-supervision signals. For our analysis, we mainly focus on MRR,
since MRR is more aligned with our retrieval loss function, which aims to rank relevant passages at higher orders. We also report Recall@100 in Table 7 in Appendix A. We find that overall our model performs significantly better than the basic XLM-R.
For our different model variants, we find that: (1)
using only the semantic contrastive loss for the parallel corpora would achieve the best average performance; (2) using only the language contrastive loss for the parallel corpora also achieves a significant performance improvement, which is lower than but close to using only the semantic contrastive loss; (3)
using both semantic contrastive loss and language contrastive loss would only contribute to a few languages like Ar, but does not improve the overall performance. Our assumption is that the semantic contrastive loss has already efficiently removed the language embedding shifts by leveraging the parallel pairs, so it is not helpful to use additional language contrastive loss when we have parallel corpora for all the languages. In Section 5.4, we experiment with a more practical scenario when we only have parallel corpora for some of the target languages but non-parallel corpora for the rest.
And we find our language contrastive loss brings significant performance gains in that case.
We then further compare the performance of our best model - XLM-R + semantic contrastive loss, with those of other strong baselines, i.e. InfoXLM and LaBSE. We also examine if the semantic contrastive loss can be used as an add-on module to InfoXLM and LaBSE to further boost their performance. Table 3 shows the MRR@100 results of XLM-R, InfoXLM, LaBSE themselves
- all of them are trained with the IR loss, and the results trained jointly with the semantic contrastive loss. We find that our best model - XLMR with only semantic contrastive loss - significantly outperforms these strong baselines. Note that both InfoXLM and LaBSE involve a largescale pre-training to improve the lingual transfer ability, which is not required in our method. Our model only requires joint training with the contrastive loss, which needs much less computational effort as in Table 1. We also find that the semantic contrastive loss can be used as an add-on module to effectively boost the performance of InfoXLM and LaBSE. But such an add-on module's improvements on InfoXLM and LaBSE are not as large as that on XLM-R. We speculate that this phenomenon could be attributed to that InfoXLM
and LaBSE have already been pre-trained on other datasets, which have some distribution shifts away from the WikiMatrix dataset we used for the semantic contrastive loss add-on module. We also report the Recall@100 results in Table 8 of Appendix A. In addition to the above results output by our own runs, we also list the results reported by Wu et al.
(2022) in Table 3 as a reference. The difference in
![5_image_2.png](5_image_2.png)
![5_image_1.png](5_image_1.png)
the baseline model performances may be due to the randomness during model training. We also present the performance of the traditional BM25 method.
The average MRR@100 of BM25 is significantly lower than that of our method.
## 5.3.1 Effect Of The Size Of Parallel Dataset
We further investigate the effect of the size of the parallel dataset on the multilingual retrieval performance. We train our model by varying the parallel dataset size using the XLM-R with only semantic contrastive loss. Figure 3 shows the results. We find that: (1) using parallel corpora can significantly boost the retrieval performance, compared with the dashed horizontal line when we do not have parallel corpora at all (the basic XLM-R); (2)
even when we only have a small parallel corpus of 500 pairs for each language, we can already achieve a good performance MRR@100=0.38. When we gradually increase the parallel corpora to 50,000, the MRR@100 grows gradually to 0.396. But the increase is not very large. This suggests that our model framework can work well even with a small parallel corpora dataset. This makes our method promising for those low-resource languages which lack parallel corpora with English.
## 5.3.2 Effect Of Language Pair Connection
In order to understand how different language pair connections affect performance, we conduct experiments using different language pairs on En, Fi, Ja, Ko. We experimented with six different settings:
1. **Basic setting:** Train XLM-R without using any parallel corpora, which is the same as the first row in Table 2; 2. **Setting 1:** Train XLM-R with parallel corpora between English and all other languages, i.e.
![5_image_0.png](5_image_0.png)
Table 4: MRR@100 on different language pair connections.
En-Fi, En-Ja, En-Ko;
3. **Setting 2:** Train XLM-R with parallel corpora between English and Korean, and between Korean and the rest languages, i.e. En-Ko, Ko-Fi, Ko-Ja; 4. **Setting 3:** Train XLM-R with parallel corpora between English and Korean, and between Japanese to Finnish, i.e. En-Ko, Ja-Fi; 5. **Setting 4:** Train XLM-R with parallel corpora between English and Korean, i.e. En-Ko; 6. **Setting 5:** Train XLM-R with parallel corpora between Japanese to Finnish, i.e. Ja-Fi.
Table 4 shows the MRR@100 results. We find that all settings 1 to 5 significantly surpass the basic setting. This echoes our previous finding that it is helpful to leverage parallel corpora. Among settings 1 to 5, the differences are not large - the minimum MRR of them is 0.325, and the maximum one is 0.343. This suggests that the connection among language pairs is not a deterministic factor for our method. We also report the Recall@100 in Table 9 of Appendix A.
## 5.4 Information Retrieval - Some Languages Do Not Have Parallel Data
In this section, we investigate the scenario when we have parallel corpora only for some of the target languages, but not for the rest languages. This scenario emphasizes a realistic constraint that we lack parallel corpora for many low-resource languages.
To test it, we leave Ru, Sw, Te, Th as languages that do not have parallel corpora, and keep these parallel corpora for all other languages, i.e. Ar, Bn, Fi, Id, Ja, Ko. We experimented with three different settings:
1. **XLM-R + Semantic CL:** we only use the semantic contrastive loss on languages which have parallel corpora (Ar, Bn, Fi, Id, Ja, Ko):
LIR + wsLsemaCL;
2. **XLM-R + Semantic CL + Language CL**
(WikiMatrix): we use the semantic con-
![6_image_0.png](6_image_0.png)
Note: Avg for languages with (∥) and without (∦) parallel data.
Table 5: Experiment results when Ru, Sw, Te, Th do NOT have parallel data (MRR@100).
trastive loss on languages which have parallel corpora (Ar, Bn, Fi, Id, Ja, Ko), and the language contrastive loss on these parallel corpora (Ar, Bn, Fi, Id, Ja, Ko) along with the non-parallel WikiMatrix corpora:
LIR + wsLsemaCL + wlLlangCL.
3. **XLM-R + Semantic CL + Language CL**
(Mr.TyDi): we use the semantic contrastive loss on languages which have parallel corpora
(Ar, Bn, Fi, Id, Ja, Ko), and the language contrastive loss on these parallel corpora
(Ar, Bn, Fi, Id, Ja, Ko) along with the nonparallel Mr.TyDi corpora: LIR + wsLsemaCL +
wlLlangCL;
Table 5 shows the MRR@100 results of our experiments. The language contrastive loss can effectively leverage the non-parallel corpora to improve the information retrieval performance. For the **XLM-R + Semantic CL + Language CL**
(Mr.TyDi) setting, the language contrastive loss boosts the average MRR@100 from 0.358 to 0.385.
We also calculate the average performance on the languages with parallel corpora (Ar, Bn, Fi, Id, Ja, Ko), and the languages without parallel corpora
(Ru, Sw, Te, Th). The **Avg (withParallel)** column and the **Avg (noParallel)** column in Table 5 are their corresponding results. We find that the language contrastive loss can improve the performance on both types of languages. For languages with parallel corpora (Ar, Bn, Fi, Id, Ja, Ko), the MRR@100 increases from 0.365 to 0.391; for languages without parallel corpora (Ru, Sw, Te, Th),
the MRR@100 increase from 0.360 to 0.389. This result suggests our model can be effectively deployed in situations when we have no parallel corpora for low-resource languages. Appendix A Table 10 reports the Recall@100 results.
Since using the Mr.TyDi corpora brings in the target domain information, we also examine the XLM-R + Semantic CL + Language CL (WikiMatrix) setting. This setting uses the WikiMatrix non-parallel corpora for Ru, Sw, Te, Th —
it does not introduce the target domain informa-
![6_image_1.png](6_image_1.png)
tion, and reflects the clean gain from the language contrastive loss. We find that using the WikiMatrix non-parallel corpora achieves a little lower but close performance than the one using the Mr.TyDi corpora. This suggests that the introduction of the target domain information is very minor in improving IR performance.
## 5.4.1 **Effect Of The Size Of Non-Parallel Dataset**
We further investigate the effect of the size of the non-parallel dataset on the multilingual retrieval performance. We train our model by varying the non-parallel dataset size using XLM-R with both the semantic contrastive loss and the language contrastive loss. We keep the size of the parallel dataset fixed at 50,000. Figure 4 shows the results. The dashed horizontal line is the one using only parallel corpora, i.e. the first row in Table 5. We find that: (1) using non-parallel corpora can significantly boost the retrieval performance, compared with the most left point when we do not use the nonparallel corpora at all; (2) when the non-parallel corpora dataset size increases from 0 to 10,000, the MRR@100 improves quickly; (3) when the non-parallel dataset size increases from 10,000 to 50,000, the MRR@100 has minor changes, but its variance decreases.
## 5.5 Bucc: Bitext Retrieval
The information retrieval task above is not common to see in multilingual NLP papers. A closely related task they often work on is the BUCC1task
(Zweigenbaum et al., 2018). The BUCC task has been tested in the LaBSE benchmark model we used in the previous section (Feng et al., 2022),
and in many other multilingual NLP works, such as Artetxe and Schwenk 2019; Yang et al. 2019; Schwenk 2018; Reimers and Gurevych 2020, etc.
Therefore, following these works, we also investigate our model's performance on the BUCC task.
For the BUCC bitext mining task, we follow previous work (Artetxe and Schwenk, 2019; Reimers and Gurevych, 2020) to first encode texts, and then use the equation below to calculate the score of two sentence embeddings u, v:
score(u, v) = sim(u, v)
$\frac{\text{Sum}(\mathbf{u},\mathbf{v})}{\sum_{\mathbf{x}\in\text{NN}_k}(\mathbf{u})\ \frac{\text{sim}(\mathbf{u},\mathbf{x})}{2k}\ +\ \sum_{\mathbf{x}\in\text{NN}_k}(\mathbf{v})\ \frac{\text{sim}(\mathbf{v},\mathbf{x})}{2k}}$ 2.
(5)
where NNk(u) denotes u's k nearest neighbors
in another language. The training set is used to
find a threshold value of the score, for which pairs
with scores above this threshold are predicted as
parallel sentences. We use F1 to measure the model performance on BUCC.
Table 6 shows F1 score of our model based on
the XLM-R, InfoXLM, and LaBSE. We first examine the vanilla XLM-R, InfoXLM, and LaBSE
as the text encoder. LaBSE and InfoXLM outperform XLM-R a lot due to their large-scale pretraining on improving lingual adaptation using parallel datasets. When we add our semantic contrastive loss to XLM-R, we get a large improvement
across all four languages. We find that our model
(XLM-R + Semantic CL) outperforms XLM-R, but
underperforms InfoXLM and LaBSE. We attribute
LaBSE's great performance to its much larger pretraining than ours, and LaBSE's training involves
Translation Language Model (Conneau and Lample, 2019) with translation corpora. This is exactly
the same type of corpora as BUCC's translation
parallel pairs. When we add our semantic contrastive loss to InfoXLM, we obtain performance
gain for all languages. The gain is smaller than
that of XLM-R because InfoXLM has already been
trained on parallel corpora. When we add our semantic contrastive loss module to LaBSE, we obtained a small increase in the average performance
| Model | De | Fr | Ru | Zh | Avg |
|---------------------------|-------|-------|-------|-------|-------|
| XLM-R | 17.90 | 12.12 | 21.84 | 15.06 | 16.73 |
| + semaCL | 72.86 | 69.44 | 73.10 | 66.27 | 70.42 |
| InfoXLM | 63.32 | 54.36 | 69.92 | 66.19 | 63.45 |
| + semaCL | 80.40 | 75.48 | 78.20 | 75.04 | 77.28 |
| LaBSE (Feng et al., 2022) | 92.50 | 88.70 | 88.90 | 88.90 | 89.75 |
| LaBSE | 93.03 | 89.75 | 89.75 | 85.93 | 89.62 |
| + semaCL | 92.95 | 89.50 | 89.82 | 89.77 | 90.51 |
- the performance on Zh has a significant increase.
One important insight we get from comparing Table 6 and Table 3 is that a model's better performance in NLP tasks like BUCC does not necessarily mean better performance in information retrieval. Most existing multilingual NLP papers only examine the BUCC bi-text retrieval task, and we highlight the inconsistency between models' performances on the two types of retrieval tasks.
## 6 Related Work
Dense monolingual / multilingual information retrieval study recently attracts great attention, which mainly benefits from (1) supervised finetuning based on large pre-trained language models and
(2) self-supervised contrastive learning.
Dense Passage Retrieval (Karpukhin et al., 2020)
is the framework first proposed for monolingual **superivsed finetuning** on information retrieval. It uses a BERT-based dual-encoder structure to encode the query and the candidate passages into embeddings. Similar to monolingual IR, supervised finetuning can also be applied to multilingual pretrained language models (LMs) for multilingual IR. Commonly uesd multilingual pretrained LMs include multilingual BERT (mBERT, Devlin et al.,
2019) and XLM-R (Conneau et al., 2020), both of which are trained on large corpora representing about 100 languages primarily with the masked language modeling task. These models do not use any explicit objective to improve the alignment between language sentences. Recent efforts in NLP field have provided easy access to parallel corpora, e.g.
Schwenk et al. (2021). Many multilingual language models use additional parallel data to improve lingual transfer ability. InfoXLM (Chi et al., 2021)
uses parallel corpora to pre-train XLM-R by maximizing mutual information between multilingual multi-granularity texts. LaBSE (Feng et al., 2022)
pre-trains BERT with Masked Language Model and Translation Language Model on the monolingual data and bilingual translation pairs.
Self-supervised contrastive learning is another way used to improve the cross-lingual alignment.
Contrastive learning maximizes the agreement between positive samples, and minimizes the similarity of positive and negative ones (He et al., 2020; Chen et al., 2020a,b,c). For language representation learning, Clark et al. (2019) apply contrastive learning to train a discriminative model to learn language representations. For multilingual representation learning, contrastive learning has been used to improve cross-lingual transfer ability by using additional parallel data (Hu et al., 2021) or by leveraging other self-supervision signals (Wu et al., 2022).
## 7 Conclusion
In this paper, we present a model framework for multilingual information retrieval by improving lingual adaptation through contrastive learning. Our experiments demonstrate the effectiveness of our methods in learning better cross-lingual representations for information retrieval tasks. The two contrastive losses can be used as an add-on module to any backbones and many other tasks besides information retrieval.
## 8 Limitations
In this work, we did not conduct a detailed analysis of how language-specific characteristics contribute to our model's cross-lingual generalization capabilities. Future work may address this question through extensive matrix experiments - traverse the training on each possible language pair combination and evaluate on all languages.
## References
Mikel Artetxe and Holger Schwenk. 2019. Marginbased parallel corpus mining with multilingual sentence embeddings. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3197–3203, Florence, Italy. Association for Computational Linguistics.
Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton. 2020a. A simple framework for contrastive learning of visual representations.
Ting Chen, Simon Kornblith, Kevin Swersky, Mohammad Norouzi, and Geoffrey E Hinton. 2020b. Big self-supervised models are strong semi-supervised learners. In *Advances in Neural Information Processing Systems*, volume 33, pages 22243–22255.
Xinlei Chen, Haoqi Fan, Ross Girshick, and Kaiming He. 2020c. Improved baselines with momentum contrastive learning. *arXiv preprint arXiv:2003.04297*.
Zewen Chi, Li Dong, Furu Wei, Nan Yang, Saksham Singhal, Wenhui Wang, Xia Song, Xian-Ling Mao, Heyan Huang, and Ming Zhou. 2021. InfoXLM: An information-theoretic framework for cross-lingual language model pre-training. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 3576–3588, Online. Association for Computational Linguistics.
Jonathan H. Clark, Eunsol Choi, Michael Collins, Dan Garrette, Tom Kwiatkowski, Vitaly Nikolaev, and Jennimaria Palomaki. 2020. TyDi QA: A benchmark for information-seeking question answering in typologically diverse languages. *Transactions of the Association for Computational Linguistics*, 8:454–470.
Kevin Clark, Minh-Thang Luong, Quoc V Le, and Christopher D Manning. 2019. Electra: Pre-training text encoders as discriminators rather than generators.
In *International Conference on Learning Representations*.
Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer, and Veselin Stoyanov. 2020. Unsupervised cross-lingual representation learning at scale. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*, pages 8440–
8451, Online. Association for Computational Linguistics.
Alexis Conneau and Guillaume Lample. 2019. Crosslingual language model pretraining. *Advances in* neural information processing systems, 32.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics.
Fangxiaoyu Feng, Yinfei Yang, Daniel Cer, Naveen Arivazhagan, and Wei Wang. 2022. Language-agnostic BERT sentence embedding. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 878–891, Dublin, Ireland. Association for Computational Linguistics.
Kaiming He, Haoqi Fan, Yuxin Wu, Saining Xie, and Ross Girshick. 2020. Momentum contrast for unsupervised visual representation learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 9729–9738.
Junjie Hu, Melvin Johnson, Orhan Firat, Aditya Siddhant, and Graham Neubig. 2021. Explicit alignment objectives for multilingual bidirectional encoders. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 3633–3643, Online. Association for Computational Linguistics.
Pratik Joshi, Sebastin Santy, Amar Budhiraja, Kalika Bali, and Monojit Choudhury. 2020. The state and fate of linguistic diversity and inclusion in the NLP
world. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6282–6293, Online. Association for Computational Linguistics.
Vladimir Karpukhin, Barlas Oguz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih. 2020. Dense passage retrieval for opendomain question answering. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 6769–6781, Online. Association for Computational Linguistics.
Tom Kwiatkowski, Jennimaria Palomaki, Olivia Redfield, Michael Collins, Ankur Parikh, Chris Alberti, Danielle Epstein, Illia Polosukhin, Jacob Devlin, Kenton Lee, Kristina Toutanova, Llion Jones, Matthew Kelcey, Ming-Wei Chang, Andrew M. Dai, Jakob Uszkoreit, Quoc Le, and Slav Petrov. 2019. Natural questions: A benchmark for question answering research. *Transactions of the Association for Computational Linguistics*, 7:452–466.
Aaron van den Oord, Yazhe Li, and Oriol Vinyals. 2018.
Representation learning with contrastive predictive coding. *arXiv preprint arXiv:1807.03748*.
Nils Reimers and Iryna Gurevych. 2020. Making monolingual sentence embeddings multilingual using knowledge distillation. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 4512–4525, Online. Association for Computational Linguistics.
Holger Schwenk. 2018. Filtering and mining parallel data in a joint multilingual space. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers),
pages 228–234, Melbourne, Australia. Association for Computational Linguistics.
Holger Schwenk, Vishrav Chaudhary, Shuo Sun, Hongyu Gong, and Francisco Guzmán. 2021. WikiMatrix: Mining 135M parallel sentences in 1620 language pairs from Wikipedia. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 1351–1361, Online. Association for Computational Linguistics.
Shuo Sun and Kevin Duh. 2020. CLIRMatrix: A massively large collection of bilingual and multilingual datasets for cross-lingual information retrieval. In
Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP),
pages 4160–4170, Online. Association for Computational Linguistics.
Ning Wu, Yaobo Liang, Houxing Ren, Linjun Shou, Nan Duan, Ming Gong, and Daxin Jiang. 2022. Unsupervised context aware sentence representation pretraining for multi-lingual dense retrieval. arXiv preprint arXiv:2206.03281.
Yinfei Yang, Gustavo Hernandez Abrego, Steve Yuan, Mandy Guo, Qinlan Shen, Daniel Cer, Yun-hsuan Sung, Brian Strope, and Ray Kurzweil. 2019. Improving multilingual sentence embedding using bidirectional dual encoder with additive margin softmax. In Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence, IJCAI-19, pages 5370–5378. International Joint Conferences on Artificial Intelligence Organization.
Xinyu Zhang, Xueguang Ma, Peng Shi, and Jimmy Lin.
2021. Mr. TyDi: A multi-lingual benchmark for dense retrieval. In *Proceedings of the 1st Workshop* on Multilingual Representation Learning, pages 127–
137, Punta Cana, Dominican Republic. Association for Computational Linguistics.
Pierre Zweigenbaum, Serge Sharoff, and Reinhard Rapp.
2018. Overview of the third bucc shared task: Spotting parallel sentences in comparable corpora. In Proceedings of 11th workshop on building and using comparable corpora, pages 39–42.
## A Appendix
Table 7 and Table 8 show the Recall@100 results of experiments in Section 5.3. Table 9 shows the Recall@100 results of experiments in Section 5.3.2.
Table 10 shows the Recall@100 results of experiments in Section 5.4.
Model Ar Bn En Fi Id Ja Ko Ru Sw Te Th Avg
XLM-R 0.782 0.797 0.754 0.755 0.840 0.741 0.691 0.741 0.614 0.820 0.852 0.762
+ semaCL 0.799 0.851 0.777 0.773 **0.867** 0.779 0.730 **0.763** 0.597 0.862 **0.886** 0.789
+ langCL 0.799 0.820 0.782 0.769 0.858 0.769 0.723 0.750 **0.629 0.892** 0.871 0.788 + semaCL + langCL 0.806 0.864 0.798 0.784 0.858 **0.780 0.736** 0.743 0.626 0.867 0.877 **0.794**
Table 7: Recall@100 on the monolingual information retrieval task of Mr.TyDi dataset.
Model Ar Bn En Fi Id Ja Ko Ru Sw Te Th Avg Results reported by Wu et al. *(2022)*
XLM-R 0.813 0.842 0.776 0.782 0.886 0.785 0.727 0.774 0.633 0.875 0.882 0.798 InfoXLM 0.806 0.860 0.804 0.749 0.869 0.788 0.717 0.767 0.724 0.867 0.874 0.802
LABSE 0.762 0.910 0.783 0.760 0.852 0.669 0.644 0.744 0.750 0.889 0.834 0.782
CCP 0.820 0.883 0.801 0.787 0.875 0.800 0.732 0.772 0.751 0.888 0.889 0.818
Results reported by Zhang et al. *(2021)*
BM25 (default) 0.793 0.869 0.537 0.719 0.843 0.645 0.619 0.648 0.764 0.758 0.853 0.732
BM25 (tuned) 0.800 0.874 0.551 0.725 0.846 0.656 0.797 0.660 0.764 0.813 0.853 0.758
Our implementation
XLM-R 0.782 0.797 0.754 0.755 0.840 0.741 0.691 0.741 0.614 0.820 0.852 0.762
+ semaCL 0.799 0.851 0.777 **0.773 0.867** 0.779 **0.730** 0.763 0.597 0.862 **0.886** 0.789
InfoXLM 0.797 **0.900** 0.785 0.725 0.843 0.790 0.717 0.753 0.711 **0.873** 0.875 **0.797**
+ semaCL 0.790 0.842 **0.791** 0.731 0.829 **0.805** 0.708 0.753 0.646 0.800 0.866 0.778
LaBSE 0.769 0.887 0.760 0.773 0.854 0.652 0.649 **0.764 0.832** 0.862 0.824 0.784
+ semaCL 0.725 0.865 0.762 0.770 0.845 0.575 0.604 0.734 0.739 0.724 0.697 0.731
Table 8: Recall@100 on the monolingual information retrieval task of Mr.TyDi dataset.
En Fi Ja Ko Avg
Table 9: Recall@100 on different language pair connections.
| Basic Setting | 0.754 | 0.755 | 0.741 | 0.691 | 0.735 |
|-----------------|---------|---------|---------|---------|---------|
| Setting 1 | 0.776 | 0.770 | 0.777 | 0.706 | 0.757 |
| Setting 2 | 0.785 | 0.778 | 0.781 | 0.710 | 0.763 |
| Setting 3 | 0.767 | 0.762 | 0.766 | 0.723 | 0.754 |
| Setting 4 | 0.779 | 0.785 | 0.764 | 0.722 | 0.762 |
| Setting 5 | 0.765 | 0.759 | 0.722 | 0.703 | 0.737 |
| Model | Ar | Bn | En | Fi | Id | Ja | Ko | Avg∥ | Ru | Sw | Te | Th | Avg∦ | Avg |
|-----------------------------------------------------------------|-------------------------------------------------|-------------------------------|-------|------|------|------|------|--------|------|------|------|------|--------|-------|
| XLM-R | 0.782 0.797 0.754 0.755 0.840 0.741 0.691 0.766 | 0.741 0.614 0.820 0.852 0.757 | 0.762 | | | | | | | | | | | |
| + semaCL | 0.759 0.824 0.752 0.738 0.826 0.745 0.715 0.767 | 0.724 0.598 0.851 0.859 0.758 | 0.762 | | | | | | | | | | | |
| + langCL (WikiMatrix) | 0.778 0.797 0.782 0.757 0.840 0.752 0.726 0.776 | 0.734 0.550 0.861 0.854 0.749 | 0.766 | | | | | | | | | | | |
| + langCL (Mr.TyDi) | 0.767 0.842 0.739 0.749 0.819 0.761 0.713 0.770 | 0.724 0.605 0.773 0.857 0.739 | 0.759 | | | | | | | | | | | |
| + semaCL + langCL (WikiMatrix) | 0.784 0.856 0.776 0.774 0.866 0.759 0.738 0.796 | 0.761 0.592 0.828 0.887 0.767 | 0.783 | | | | | | | | | | | |
| + semaCL + langCL (Mr.TyDi) | 0.792 0.806 0.768 0.782 0.871 0.782 0.737 0.795 | 0.755 0.619 0.850 0.885 0.777 | 0.786 | | | | | | | | | | | |
| Note: Avg for languages with (∥) and without (∦) parallel data. | | | | | | | | | | | | | | |
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
✓ A1. Did you describe the limitations of your work?
Section 7
A2. Did you discuss any potential risks of your work?
Not applicable. Left blank.
✓ A3. Do the abstract and introduction summarize the paper's main claims?
abstract and Section 1
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✓ **Did You Use Or Create Scientific Artifacts?** Section 3, Section 4, And Section 5
✓ B1. Did you cite the creators of artifacts you used?
Section 5
✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
Section 5 B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
Not applicable. Left blank.
B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
Not applicable. Left blank.
✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
Section 5
✗ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
We followed the standard data splits
## C ✓ **Did You Run Computational Experiments?** Section 5
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
Section 5 The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance.
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Section 5
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
Section 5
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
Section 3, Section 4, and Section 5
## D ✗ **Did You Use Human Annotators (E.G., Crowdworkers) Or Research With Human Participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
Not applicable. Left blank.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? Not applicable. Left blank.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
Not applicable. Left blank.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
Not applicable. Left blank.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
Not applicable. Left blank. |
jukic-etal-2023-easy | Easy to Decide, Hard to Agree: Reducing Disagreements Between Saliency Methods | https://aclanthology.org/2023.findings-acl.582 | A popular approach to unveiling the black box of neural NLP models is to leverage saliency methods, which assign scalar importance scores to each input component. A common practice for evaluating whether an interpretability method is faithful has been to use evaluation-by-agreement {--} if multiple methods agree on an explanation, its credibility increases. However, recent work has found that saliency methods exhibit weak rank correlations even when applied to the same model instance and advocated for alternative diagnostic methods. In our work, we demonstrate that rank correlation is not a good fit for evaluating agreement and argue that Pearson-r is a better-suited alternative. We further show that regularization techniques that increase faithfulness of attention explanations also increase agreement between saliency methods. By connecting our findings to instance categories based on training dynamics, we show that the agreement of saliency method explanations is very low for easy-to-learn instances. Finally, we connect the improvement in agreement across instance categories to local representation space statistics of instances, paving the way for work on analyzing which intrinsic model properties improve their predisposition to interpretability methods. | # Easy To Decide, Hard To Agree: Reducing Disagreements Between Saliency Methods
Josip Jukic´♣,1, Martin Tutek♣,♥,2**, Jan Šnajder**1 1TakeLab, University of Zagreb 2UKP Lab, Technical University of Darmstadt
{josip.jukic, jan.snajder}@fer.hr [email protected]
## Abstract
A popular approach to unveiling the black box of neural NLP models is to leverage saliency methods, which assign scalar importance scores to each input component. A common practice for evaluating whether an interpretability method is *faithful* has been to use evaluation-by-agreement - if multiple methods agree on an explanation, its credibility increases. However, recent work has found that saliency methods exhibit weak rank correlations even when applied to the same model instance and advocated for alternative diagnostic methods. In our work, we demonstrate that rank correlation is sensitive to small perturbations when evaluating agreement and argue that Pearson-r could be a better-suited alternative.
We further show that regularization techniques that increase faithfulness of attention explanations also increase agreement between saliency methods. By connecting our findings to instance categories based on training dynamics, we show that the agreement of saliency method explanations is very low for easy-to-learn instances. Finally, we connect the improvement in agreement across instance categories to local representation space statistics of instances, paving the way for work on analyzing which intrinsic model properties improve their predisposition to interpretability methods.1
## 1 Introduction
Following the meteoric rise of the popularity of neural NLP models during the neural revolution, they have found practical usage across a plethora of domains and tasks. However, in a number of high-stakes domains such as law (Kehl and Kessler, 2017), finance (Grath et al., 2018), and medicine
(Caruana et al., 2015), the opacity of deep learning methods needs to be addressed. In the area of explainable artificial intelligence (XAI), one of the major recent efforts is to unveil the neural black box and produce explanations for the end-user. There are various approaches to rationalizing model predictions, such as using the attention mechanism
(Bahdanau et al., 2014), saliency methods (Denil et al., 2014; Bach et al., 2015; Ribeiro et al., 2016; Lundberg and Lee, 2017; Shrikumar et al., 2017; Sundararajan et al., 2017), rationale generation bydesign (Lei et al., 2016; Bastings et al., 2019; Jain et al., 2020), or self-rationalizing models (Marasovic et al., 2022). These methods have to simultaneously satisfy numerous desiderata to have practical application in high-stakes scenarios: they have to be *faithful* - an accurate representation of the inner reasoning process of the model, and *plausible* –
convincing to human stakeholders.
When evaluating faithfulness in using attention as explanations, Jain and Wallace (2019) have shown that attention importance scores do not correlate well with gradient-based measures of feature importance. The authors state that although gradient-based measures of feature importance should not be taken as ground truth, one would still expect importance measures to be highly agreeable, bringing forth the *agrement-as-evaluation* paradigm (Abnar and Zuidema, 2020; Meister et al., 2021). While the imperfect agreement is something one could expect as interpretability methods differ in their formulation, and it is reasonable to observe differences in importance scores, subsequent work has shown that saliency methods exhibit low agreement scores even when applied to the same model instance (Neely et al., 2021). Since a single trained model instance can only have a single feature importance ranking for its decision, disagreement of saliency methods implies that at least one, if not all methods, do not produce faithful explanations
- placing doubt on their practical relevance. It has been hypothesized that unfaithfulness of attention is caused by input entanglement in the hidden space
(Jain and Wallace, 2019). This claim has later been experimentally verified through results showing that regularization techniques targeted to reduce entanglement significantly improve the faithfulness of attention-based explanations (Mohankumar et al.,
2020; Tutek and Šnajder, 2020). While entanglement in the hidden space is clearly a problem in the case of attention explanations, where attention weights directly pertain to hidden states, we also hypothesize that representation entanglement could cause similar issues for gradient- and propagationbased explainability methods - which might not be able to adequately disentangle importance when propagating toward the inputs.
In our work, we first take a closer look at whether the rank correlation is an appropriate method for evaluating agreement and confirm that, as hypothesized in previous work, small differences in values of saliency scores significantly affect agreement scores. We argue that a linear correlation method such as Pearson-r is less sensitive to perturbations since the exact ranking order of features is not as crucial for agreement as the relative importance values, which Pearson-r adequately captures. We hypothesize that the cause of saliency method disagreements is rooted in representation entanglement and experimentally show that agreement can be significantly improved by regularization techniques such as tying (Tutek and Šnajder, 2020)
and conicity (Mohankumar et al., 2020). The fact that regularization methods, which were originally aimed at improving faithfulness of attention, also improve agreement between saliency methods suggests that the two problems have the same underlying cause. Taking the analysis deeper, we apply techniques from dataset cartography (Swayamdipta et al., 2020) and show that, surprisingly, the explanations of easy-to-learn instances exhibit a lower agreement than of ambiguous instances. We further analyze how local curvature of the representation space morphs when regularization techniques are applied, paving the way for further analysis of
(dis)agreements between interpretability methods.
## 2 Background And Related Work
Explainability methods come in different flavors determined by the method of computing feature importance scores. Saliency methods perform *posthoc* analysis of the trained black-box model by either leveraging gradient information (Denil et al.,
2014; Sundararajan et al., 2017), modifying the backpropagation rules (Bach et al., 2015; Shrikumar et al., 2017), or training a shallow interpretable model to locally approximate behavior of the blackbox model (Ribeiro et al., 2016), all with the goal of assigning scalar saliency scores to input features. Alternatively, if the analyzed model is capable of generating text, one can resort to selfrationalization by prompting the trained model to generate an explanation for its decision (Marasovic et al., 2022). In contrast to *post-hoc* explanations, inherently interpretable models produce explanations as part of their decision process, either by masking a proportion of input tokens and then performing prediction based on the remaining *rationale* (Lei et al., 2016; Bastings et al., 2019; Jain et al., 2020), or jointly performing prediction and rationale generation in cases where datasets with annotated rationales are available (Camburu et al.,
2018). For some time, the attention mechanism
(Bahdanau et al., 2014) has also been considered inherently interpretable. However, the jury is still out on whether such explanations can be considered faithful (Jain and Wallace, 2019; Wiegreffe and Pinter, 2019; Tutek and Šnajder, 2020; Bastings and Filippova, 2020).
Faithfulness is one of the most important desiderata of explanation methods (Jacovi and Goldberg, 2020) - faithful explanations are those that are true to the inner decision-making process of the model. Approaches to evaluating faithfulness rely on measuring how the confidence of the model changes when inputs are perturbed (Kindermans et al., 2019) or completely dropped from the model
(Li et al., 2016; Serrano and Smith, 2019). However, perturbations to input often result in corrupted instances that fall off the data manifold and appear nonsensical to humans (Feng et al., 2018) or fail to identify all salient tokens properly (ROAR; Hooker et al., 2019) - raising questions about the validity of perturbation-based evaluation. Recursive ROAR
(Madsen et al., 2022) alleviates the issues of its predecessor at the cost of requiring many prohibitively expensive retraining steps, further motivating us to seek efficient solutions which do not require retraining the model multiple times. Another option is to leverage the *evaluation-by-agreement* (Jain and Wallace, 2019) paradigm, which states that an interpretability method should be highly agreeable with other methods to be considered faithful.
However, since empirical evidence has shown that saliency methods exhibit poor agreement between their explanations (Neely et al., 2021), Atanasova et al. (2020) recommend practitioners consider alternative methods for evaluating the quality of interpretability methods, such as diagnostic tests. Finally, methods such as data staining (Sippy et al.,
2020) and lexical shortcuts (Bastings et al., 2022)
artificially introduce tokens that act as triggers for certain classes - creating a ground truth for faithfulness which can be used as a comparison. Nevertheless, such methods have a certain drawback in that they only offer the ground truth importance of a few artificially inserted tokens, but offer no insight regarding the relative importance of the remainder of the input. Each of the aforementioned methods for estimating faithfulness of interpretability methods has its drawbacks (Jacovi and Goldberg, 2020), and we argue each should be taken in conjunction with others to increase the credibility of their collective verdict.
## 3 Preliminaries
In this section, we delineate our experimental setup, detailing the considered datasets, models, their training procedure, the saliency methods which we use to interpret the decisions of the models, and the regularization techniques we use to improve agreement between saliency methods.
## 3.1 Datasets
Leaning on the work of Neely et al. (2021), which motivated us to explore the valley of explainability, we aim to investigate the protruding problem of low agreement between saliency methods. We investigate three different types of single-sequence binary classification tasks on a total of four datasets. In particular, we evaluate sentiment classification on the movie reviews (**IMDB**; Maas et al., 2011) and the Stanford Sentiment Treebank (**SST-2**; Socher et al., 2013) datasets, using the same data splits as Jain and Wallace (2019). We include two more tasks, examining the subjectivity dataset (**SUBJ**;
Pang and Lee, 2004), which classifies movie snippets into subjective or objective, and question type classification (**TREC**; Li and Roth, 2002). To frame the TREC task as binary classification, we select only the examples labeled with the two most frequent classes (ENTY - entities, HUM - human beings) and discard the rest.
## 3.2 Models
For comparability, we opt for the same models as Neely et al. (2021). Specifically, we employ the Bi-LSTM with additive self-attention (JWA; Jain and Wallace, 2019). We initialize word representations for the JWA model to 300-d GloVe embeddings (Pennington et al., 2014). We also employ a representative model from the Transformer family (Vaswani et al., 2017) in DistilBERT (**DBERT**;
Sanh et al., 2019).
Both models work similarly: the input sequence of tokens {x1*, . . . , x*T } is first embedded {e1*, . . . , e*T } and then contextualized
{h1*, . . . , h*T } by virtue of an LSTM network or a Transformer. The sequence of contextualized hidden states is then aggregated to a sequence representation h, which is then fed as input to a decoder network.
## 3.3 Explainability Methods
We make use of ready-made explainability methods from the propagation- and gradient-based families used by Neely et al. (2021): Deep-LIFT (Shrikumar et al., 2017), Integrated Gradients (Int-Grad; Sundararajan et al., 2017) and their Shapley variants
(Lundberg and Lee, 2017), Deep-SHAP and GradSHAP.2 Since we evaluate agreement on the entire test set instead of an instance subset (Neely et al.,
2021), we exclude LIME (Ribeiro et al., 2016) from the comparison as it is not computationally feasible to train the surrogate model for all test instances across all training setups.
Each saliency method produces a set of importance scores for each input (sub)word token.
When evaluating the agreement between different saliency methods for a single trained model, one would expect the importance scores for the same input instance to be similar, as the same set of parameters should produce a unique and consistent importance ranking of input tokens.
## 3.4 Regularization Methods
As alluded to earlier, we suspect one cause of disagreement between saliency method explanations to be rooted in representation entanglement. To counteract this issue, we employ two regularization schemes that have been shown to improve the faithfulness of the attention mechanism as a method of interpretability: CONICITY (Mohankumar et al.,
2020) and TYING (Tutek and Šnajder, 2020). Both of these methods address what we believe is the same underlying issue in *recurrent* models - the fact that hidden representations ht are often very similar to each other, indicating that they act more as a sequence representation rather than a contextualization of the corresponding input token xt.
Each regularization method tackles this problem in a different manner. CONICITY aims to increase the angle between each hidden representation and the mean of the hidden representations of a single instance. The authors first define the *alignment* to mean (ATM) for each hidden representation as the cosine similarity of that representation to the average representation:
$$\mathrm{ATM}(h_{i},\mathbf{H})=\mathrm{cosine}(h_{i},{\frac{1}{T}}\sum_{j=1}^{T}h_{j})$$
$$\mathrm{(1)}$$
hj ) (1)
where H = {h1*, . . . , h*T } is the set of hidden representations for an instance of length T. Conicity is then defined as the average ATM for all hidden states hi ∈ H:
$${\mathrm{conicity}}(\mathbf{H})={\frac{1}{T}}\sum_{i=1}^{T}{\mathrm{ATM}}(h_{i},H)\qquad(2)$$
A conicity value implies that all hidden representations exist in a narrow cone and have high similarity
- to counteract this unwanted effect, during training, we minimize this regularization term weighted by λcon along with the binary cross entropy loss.
Similarly, TYING also aims to incentivize differences between hidden states by enforcing them to
"stay true to their word" through minimizing the L2 norm of the difference between each hidden state ht and the corresponding input embedding et = embed(xt):
$${\mathrm{tying}}(\mathbf{H},\mathbf{E})={\frac{1}{T}}\sum_{i=1}^{T}\|h_{i}-e_{i}\|_{2}^{2}\qquad(3)$$
where E = {e1*, . . . , e*T } is the sequence of embedded tokens. During training, we minimize this regularization term weighted by λ*tying* along with the binary cross entropy loss.
By penalizing the difference between hidden representations and input embedding, one achieves two goals: (1) the embedding and hidden state representation spaces become better aligned, and (2)
each hidden representation comes closer to its input embedding. The latter enforces hidden states to differ from each other: because different embeddings represent the semantics of different tokens, their representations should also differ, and this effect is then also evident in the hidden representations.
Although both works introduced other methods of enforcing differences between hidden states, namely orthogonal-LSTM and masked language modeling as an auxiliary task, we opt for CONICITY
and TYING as they were both shown to be more efficient and more stable in practice.
## 4 Improving Agreement
In this section, we present two modifications of the existing *evaluation-by-agreement* procedure:
(1) complementing rank-correlation with a linear correlation measure more robust to rank changes caused by small differences in importance weights, and (2) regularizing the models with the goal of reducing entanglement in the hidden space, and as a consequence, improving agreement.
## 4.1 Choice Of Correlation Metric
Previous work (Jain and Wallace, 2019; Neely et al.,
2021) has evaluated the agreement between two explainability methods by using rank-correlation as measured by Kendall-τ (Kendall, 1938). Although Kendall-τ is generally more robust than Spearman's rank correlation, i.e., it has smaller gross-error sensitivity (Croux and Dehon, 2010),
we still face difficulties when using Kendall-τ for evaluating agreement. As Jain and Wallace (2019)
also note, perturbations in ranks assigned to tokens in the tail of the saliency distribution have a large influence on the agreement score. In addition, rankings are also unstable when saliency scores for the most relevant tokens are close to one another. In Figure 1, we illustrate the deficiencies of using rank correlation on a toy example of explaining sentiment classification. While saliency scores attributed to tokens differ slightly, the differences in rank order are significant, lowering agreement according to Kendall-τ due to the discretization of raw saliency scores when converted into ranks.
We believe that a better approach to approximating agreement is to use a linear correlation metric such as Pearson's r, as it evaluates whether both saliency methods assign similar importance scores to the same tokens - which is a more robust setup if we assume small amounts of noise in importance attribution between different methods.
| D-SHAP | G-SHAP | Int-Grad | | | | | |
|-----------|----------|------------|-----|-----|-----|-----|-----|
| kτ | pr | kτ | pr | kτ | pr | | |
| DeepLIFT | SUBJ | 1. | 1. | .31 | .45 | .43 | .64 |
| SST | 1. | 1. | .30 | .47 | .35 | .54 | |
| TREC | 1. | 1. | .12 | .31 | .15 | .33 | |
| IMDB | 1. | 1. | .29 | .59 | .28 | .60 | |
| SUBJ | .31 | .45 | .43 | .64 | | | |
| D-SHAP | SST | .30 | .47 | .35 | .54 | | |
| TREC | .12 | .31 | .15 | .33 | | | |
| IMDB | .29 | .60 | .28 | .60 | | | |
| SUBJ | .62 | .78 | | | | | |
| G-SHAP | SST | .70 | .87 | | | | |
| TREC | .66 | .85 | | | | | |
| IMDB | .68 | .94 | | | | | |
| (a) JWA | | | | | | | |
| D-SHAP | G-SHAP | Int-Grad | | | | | |
| kτ | pr | kτ | pr | kτ | pr | | |
| SUBJ | .24 | .44 | .10 | .19 | .12 | .21 | |
| DeepLIFT | SST | .19 | .34 | .09 | .17 | .10 | .20 |
| TREC | .16 | .30 | .12 | .25 | .12 | .26 | |
| IMDB | .28 | .51 | .11 | .24 | .13 | .27 | |
| D-SHAP | SUBJ | .11 | .22 | .13 | .24 | | |
| SST | .10 | .19 | .11 | .23 | | | |
| TREC | .13 | .28 | .14 | .30 | | | |
| IMDB | .12 | .26 | .14 | .30 | | | |
| G-SHAP | SUBJ | .36 | .58 | | | | |
| SST | .31 | .54 | | | | | |
| TREC | .42 | .71 | | | | | |
| IMDB | .29 | .55 | | | | | |
| (b) DBERT | | | | | | | |
We now investigate how Pearson-r (pr) compares to Kendall-τ (kτ ) when evaluating agreement.
In Table 1 we compare agreement scores produced by pr and kτ across all datasets for JWA and DBERT,
respectively. For JWA, saliency methods display agreement greater than 0.5 only in 8/24 cases according to kτ and in 16/24 cases as per pr. For DBERT, these figures are 0/24 and 5/24, respectively. While the overall agreement is subpar, we posit that kτ further exacerbates the evaluation process. We believe this is caused by tokens with approximately equal relative importance assigned slightly different rankings by explainability methods, which kτ harshly penalizes. To address this,
![4_image_0.png](4_image_0.png)
we recommend using the Pearson correlation coefficient as an additional measure in the evaluation of agreement, as it is more robust to rank changes caused by small differences in saliency scores.
## 4.2 Regularizing Models
Our next goal is to improve agreement between saliency methods through intervention in the training procedure, namely by applying regularization to promote disentanglement in the hidden space.
In Table 2 we report correlation scores on the test splits of all datasets for regularized models
(CONICITY, TYING) and their unregularized variants (BASE). We notice that both regularization techniques have a positive effect on agreement across both correlation metrics, indicating that regularization techniques alleviate a deeper issue that also affects the interpretability of attention weights.
In Table 3 we report F1 scores on the test set for the regularized and unregularized models with the best performance on the validation split. We observe that regularized models generally perform comparably well to unregularized ones on downstream tasks, indicating that the improvement in the agreement does not come at a cost for downstream performance. When selecting regularized models, we choose ones with the strongest regularization scale hyperparameter that performs within 3 F1 points on the validation set compared to the unregularized model (cf. details in Appendix A.2).
## 5 The Cartography Of Agreement
We have shown that by using a more appropriate correlation measure and applying regularization, the agreement of saliency methods increases sig-
| τk | rp | | | | | | |
|-------|------|------|------|------|------|------|------|
| B | C | T | B | C | T | | |
| SUBJ | .52 | .48 | .65† | .66 | .70 | .88† | |
| JWA | SST | .50 | .67 | .68 | .65 | .90† | .86 |
| TREC | .37 | .77† | .68 | .52 | .98† | .93 | |
| IMDB | .47 | .52 | .60† | .72 | .64 | .80† | |
| SUBJ | .18 | .28 | .36† | .31 | .48 | .57† | |
| DBERT | SST | .15 | .15 | .33† | .28 | .27 | .60† |
| TREC | .18 | .17 | .28† | .35 | .34 | .53† | |
| IMDB | .18 | .20 | .24† | .36 | .42 | .51† | |
| Base | Conicity | Tying |
|--------|------------|---------|
DBERT
SUBJ .93.01 .90.02 .93.00
SST .83.00 .83.01 .82.01
TREC .92.01 .92.01 .91.01
IMDB .86.01 .86.01 .88.00
JWA
SUBJ .92.00 .90.00 .89.00
SST .78.04 .76.02 .78.02
TREC .89.02 .86.01 .89.01
IMDB .89.00 .88.00 .86.00
nificantly. In this section, we are interested in finding out the cause of increased agreement obtained through applying regularization - are there certain instance groups in the dataset that benefit the most, and if so, what changes in the representation space resulted in the increased agreement? We leverage methods from dataset cartography (Swayamdipta et al., 2020) to distribute instances into easy-tolearn, *hard-to-learn*, and *ambiguous* categories based on their prediction confidence and variability.
Concretely, if an instance exhibits low prediction variability and **high** prediction confidence between epochs, this implies that the model can quickly and accurately classify those instances, making them easy-to-learn. Instances that also exhibit low variability but low prediction confidence, align with the idea that the model is consistently unable to correctly classify them, making them *hard-to-learn*.
Finally, instances that exhibit **high** variability and confidence close to the decision threshold indicate that the model is likely often changing its prediction between class labels for those instances, making them *ambiguous*. Since ambiguous instances are characterized by confidence near the prediction threshold, Swayamdipta et al. (2020) complement variability and *confidence* with another statistic introduced by Chang et al. (2017), namely *closeness*,
defined as ci = p
(i)· (1 − p
(i)), where p
(i)is the average correct class probability of instance x
(i)
across all training epochs. A **high** closeness value denotes that the instance is consistently near the decision boundary and, thus, is a good indicator of ambiguity within the model.
Intuitively, one would expect high agreement between saliency methods on instances that are easy to learn and low agreement otherwise. However, we find the converse is true when computing how agreement distributes across instance groups. In unregularized models, we observe that easy-to-learn instances exhibit low average agreement, while ambiguous instances have a high average agreement.
In Table 4, we report average agreement scores across all pairs of saliency methods on representative samples from each cartography group.3 We observe a clear distinction in agreement for both the base and regularized models, which is higher for ambiguous instances when compared to easy- and hard-to-learn instance groups. Furthermore, we can observe a consistently high increase in agreement when the models are regularized across all instance groups for all datasets, indicating that regularization techniques reduce representation entanglement.
One might wonder how the increase in agreement distributes across instances and dataset cartography attributes. In Figure 2, we visualize how the relationship between agreement and cartography attributes changes when the models are regularized.
We observe that for the JWA model, all datasets ex-3We select representative samples for each group through the relative frequency of their correct classification. If out of 5 epochs, an instance was correctly classified 5 times, it is representative of the *easy-to-learn* category. If it was correctly classified 0 times, it is representative of the *hard-to-learn* category, and if the number of correct classifications is 2 or 3, it is representative of the *ambiguous* category.
![6_image_0.png](6_image_0.png)
hibit a consistent and significant increase in agreement. Furthermore, we notice that for the DBERT
model, apart from increasing the agreement, regularization reduces the confidence of the model predictions and increases variability - indicating that it reduces the known problem of overconfidence present in pre-trained language models.
## 5.1 The Curvature Of Agreement
To better understand the cause of this distinction between various feature groups, we now analyze local curvature and density in the representation space.
We are interested in: (1) how densely the instances are distributed in the representation space across cartography categories and (2) whether the local space around an instance is sharp or smooth. For both models and all instances, we obtain sequence representations h used as inputs to the decoder. We estimate instance density as the average distance to the nearest instance in the dataset. We estimate local smoothness around an instance representation as the L2 norm of the gradient of the hidden representation with respect to the input embeddings. If the gradient norm is high, the local space is sharp and minor perturbations can have a large effect on the prediction probability.
In Table 5, we report correlations between each of these two statistics and dataset cartography attributes. We observe that for the unregularized model, there is a significant negative correlation between confidence and both gradient norm and minimum distance to the nearest example, indicating that the local space around easy instances is smooth and densely populated. On the other hand, there is a high positive correlation between both closeness and variability and both gradient norm and minimum distance to the nearest example - indicating that the local space around ambiguous instances is sharp and sparsely populated. When we turn our attention to the regularized model, we observe that the correlation between the gradient norm and any of the cartography attributes vanishes, while the correlations between distance and the attributes are reduced in absolute value and their sign is flipped.
From these observations, we hypothesize that the cause of low agreement on easy-to-learn instances is the multitude of possible explanations as to why such an instance should be correctly classified. From the viewpoint of *plausibility*, this hypothesis is in line with the Rashomon effect
(Breiman, 2001) which is about there often existing a multitude of adequate descriptions that end up with the same error rate, or in our case, prediction probability - however it should not apply to *faithfulness*, as a single model instance should adhere to a single explanation. However, due to a plethora of corroborating evidence for easy-tolearn instances, the representation space around them is smooth to such an extent that perturbations do not significantly affect the prediction probability, which in turn adversely affects gradient- and propagation-based explanation methods. The converse is true for ambiguous instances, where we hypothesize the model observes evidence for both classes and is unable to reach a confident decision. However, this difficulty in reaching a decision also causes saliency methods to have a precise definition of what the evidence is - as the local curvature is sharp, and any minor perturbation could significantly affect prediction probability. We believe that local curvature statistics could be used as a metric for measuring whether a trained model is better suited to analysis through explainability methods.
## 6 Conclusion
We analyzed two prototypical models from different families in JWA and DBERT with the goal of finding out the cause of low agreement between saliency method interpretations. We first take a closer look at Kendall-τ , the previously used rankorder correlation metric, and demonstrate that it can be prone to exhibiting high differences in agreement for small perturbations in importance scores.
To account for this, when analyzing agreement be-
![7_image_0.png](7_image_0.png)
![7_image_1.png](7_image_1.png)
tween saliency methods, we propose researchers include a linear correlation metric such as Pearson-r, which is robust to small importance perturbations.
Taking a step further, we applied two regularization techniques, TYING and CONICITY, originally aimed at increasing faithfulness of attention explanations, with the hypothesis that the issue underpinning disagreements and unfaithfulness is the same - representation entanglement in the hidden space. We showed that regularization consistently and significantly improves agreement scores across all models and datasets with a minimal penalty for classification performance. Having demonstrated that it is possible to improve upon the low agreement scores, we attempted to offer intuition on which instance categories saliency methods agree the least and show that surprisingly, *easy-to-learn* instances are *hard to agree* on. Lastly, we offered insights into how the representation space morphs when regularization is applied and linked these findings with dataset cartography categories, paving the way for further work on understanding what properties of neural models affect interpretability.
## Limitations
Our work has a number of important limitations that affect the conclusions we can draw from it.
First and foremost, evaluating the faithfulness of model interpretations is problematic as we do not have ground truth annotations for token importances. Thus, when applying the *agreement-asevaluation* paradigm, we implicitly assume that most saliency methods are close to the truth - an assumption that we cannot verify. However, every method of evaluating faithfulness has its own downsides. Token and representation erasure runs the risk of drawing conclusions from corrupted inputs that fall off the data manifold. We argue that while agreement-as-evaluation is far from an ideal way of evaluating faithfulness, it still increases credibility when used along with other techniques.
Secondly, our work is limited both with respect to the datasets and models considered. Specifically, we only evaluate one Transformer-based model from the masked language modeling family, and it is entirely possible that the findings do not generalize to models pre-trained on different tasks.
Also, we only consider single sequence classification datasets - mainly due to the fact that the issues with the faithfulness of attention were most prevalent in those setups, which we assumed would be the same for agreement due to the same hypothesized underlying issue. We believe that tasks that require retention of token-level information in hidden states, such as sequence labeling and machine translation, would exhibit higher agreement overall, even without intervention through regularization.
We leave this analysis for future work.
## References
Samira Abnar and Willem Zuidema. 2020. Quantifying attention flow in transformers. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4190–4197.
Pepa Atanasova, Jakob Grue Simonsen, Christina Lioma, and Isabelle Augenstein. 2020. A diagnostic
study of explainability techniques for text classification. In *Proceedings of the 2020 Conference on* Empirical Methods in Natural Language Processing
(EMNLP), pages 3256–3274, Online. Association for Computational Linguistics.
Sebastian Bach, Alexander Binder, Grégoire Montavon, Frederick Klauschen, Klaus-Robert Müller, and Wojciech Samek. 2015. On pixel-wise explanations for non-linear classifier decisions by layer-wise relevance propagation. *PloS one*, 10(7):e0130140.
Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. *arXiv preprint* arXiv:1409.0473.
Jasmijn Bastings, Wilker Aziz, and Ivan Titov. 2019.
Interpretable neural predictions with differentiable binary variables. In *Proceedings of the 57th Annual* Meeting of the Association for Computational Linguistics, pages 2963–2977, Florence, Italy. Association for Computational Linguistics.
Jasmijn Bastings, Sebastian Ebert, Polina Zablotskaia, Anders Sandholm, and Katja Filippova. 2022. "will you find these shortcuts?" a protocol for evaluating the faithfulness of input salience methods for text classification. In *Proceedings of the 2022 Conference on Empirical Methods in Natural Language* Processing, pages 976–991, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
Jasmijn Bastings and Katja Filippova. 2020. The elephant in the interpretability room: Why use attention as explanation when we have saliency methods? In Proceedings of the Third BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP,
pages 149–155.
Leo Breiman. 2001. Statistical modeling: The two cultures (with comments and a rejoinder by the author).
Statistical science, 16(3):199–231.
Oana-Maria Camburu, Tim Rocktäschel, Thomas Lukasiewicz, and Phil Blunsom. 2018. e-snli: Natural language inference with natural language explanations. *Advances in Neural Information Processing* Systems, 31.
Rich Caruana, Yin Lou, Johannes Gehrke, Paul Koch, Marc Sturm, and Noemie Elhadad. 2015. Intelligible models for healthcare: Predicting pneumonia risk and hospital 30-day readmission. In *Proceedings of* the 21th ACM SIGKDD international conference on knowledge discovery and data mining, pages 1721–
1730.
Haw-Shiuan Chang, Erik Learned-Miller, and Andrew McCallum. 2017. Active bias: Training more accurate neural networks by emphasizing high variance samples. In *Advances in Neural Information Processing Systems*, volume 30. Curran Associates, Inc.
Christophe Croux and Catherine Dehon. 2010. Influence functions of the spearman and kendall correlation measures. *Statistical methods & applications*,
19(4):497–515.
Misha Denil, Alban Demiraj, and Nando De Freitas.
2014. Extraction of salient sentences from labelled documents. *arXiv preprint arXiv:1412.6815*.
Shi Feng, Eric Wallace, Alvin Grissom II, Mohit Iyyer, Pedro Rodriguez, and Jordan Boyd-Graber. 2018.
Pathologies of neural models make interpretations difficult. In *Proceedings of the 2018 Conference on* Empirical Methods in Natural Language Processing, pages 3719–3728.
Rory Mc Grath, Luca Costabello, Chan Le Van, Paul Sweeney, Farbod Kamiab, Zhao Shen, and Freddy Lecue. 2018. Interpretable credit application predictions with counterfactual explanations. arXiv preprint arXiv:1811.05245.
Sara Hooker, Dumitru Erhan, Pieter-Jan Kindermans, and Been Kim. 2019. A benchmark for interpretability methods in deep neural networks. Advances in neural information processing systems, 32.
Alon Jacovi and Yoav Goldberg. 2020. Towards faithfully interpretable NLP systems: How should we define and evaluate faithfulness? In *Proceedings* of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4198–4205, Online. Association for Computational Linguistics.
Sarthak Jain and Byron C Wallace. 2019. Attention is not explanation. In *Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)*, pages 3543–3556.
Sarthak Jain, Sarah Wiegreffe, Yuval Pinter, and Byron C Wallace. 2020. Learning to faithfully rationalize by construction. In *Proceedings of the 58th* Annual Meeting of the Association for Computational Linguistics, pages 4459–4473.
Danielle Leah Kehl and Samuel Ari Kessler. 2017. Algorithms in the criminal justice system: Assessing the use of risk assessments in sentencing.
Maurice G Kendall. 1938. A new measure of rank correlation. *Biometrika*, 30(1/2):81–93.
Pieter-Jan Kindermans, Sara Hooker, Julius Adebayo, Maximilian Alber, Kristof T Schütt, Sven Dähne, Dumitru Erhan, and Been Kim. 2019. The (un) reliability of saliency methods. In *Explainable AI: Interpreting, Explaining and Visualizing Deep Learning*,
pages 267–280. Springer.
Diederik P. Kingma and Jimmy Ba. 2015. Adam: A
method for stochastic optimization. In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings.
Tao Lei, Regina Barzilay, and Tommi Jaakkola. 2016.
Rationalizing neural predictions. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 107–117, Austin, Texas. Association for Computational Linguistics.
Jiwei Li, Will Monroe, and Dan Jurafsky. 2016. Understanding neural networks through representation erasure. *arXiv preprint arXiv:1612.08220*.
Xin Li and Dan Roth. 2002. Learning question classifiers. In *COLING 2002: The 19th International* Conference on Computational Linguistics.
Ilya Loshchilov and Frank Hutter. 2017. Fixing weight decay regularization in adam. *CoRR*,
abs/1711.05101.
Scott M Lundberg and Su-In Lee. 2017. A unified approach to interpreting model predictions. In *Advances in Neural Information Processing Systems*,
volume 30. Curran Associates, Inc.
Andrew L. Maas, Raymond E. Daly, Peter T. Pham, Dan Huang, Andrew Y. Ng, and Christopher Potts.
2011. Learning word vectors for sentiment analysis.
In *Proceedings of the 49th Annual Meeting of the* Association for Computational Linguistics: Human Language Technologies, pages 142–150, Portland, Oregon, USA. Association for Computational Linguistics.
Andreas Madsen, Nicholas Meade, Vaibhav Adlakha, and Siva Reddy. 2022. Evaluating the faithfulness of importance measures in NLP by recursively masking allegedly important tokens and retraining. In *Findings of the Association for Computational Linguistics:*
EMNLP 2022, pages 1731–1751, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
Ana Marasovic, Iz Beltagy, Doug Downey, and Matthew Peters. 2022. Few-shot self-rationalization with natural language prompts. In *Findings of the Association for Computational Linguistics: NAACL 2022*,
pages 410–424, Seattle, United States. Association for Computational Linguistics.
Clara Meister, Stefan Lazov, Isabelle Augenstein, and Ryan Cotterell. 2021. Is sparse attention more interpretable? In *Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics* and the 11th International Joint Conference on Natural Language Processing (Volume 2: Short Papers),
pages 122–129.
Akash Kumar Mohankumar, Preksha Nema, Sharan Narasimhan, Mitesh M Khapra, Balaji Vasan Srinivasan, and Balaraman Ravindran. 2020. Towards transparent and explainable attention models. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*, pages 4206–
4216.
Michael Neely, Stefan F Schouten, Maurits JR Bleeker, and Ana Lucic. 2021. Order in the court: Explainable ai methods prone to disagreement. arXiv preprint arXiv:2105.03287.
Bo Pang and Lillian Lee. 2004. A sentimental education: Sentiment analysis using subjectivity summarization based on minimum cuts. In Proceedings of the 42nd Annual Meeting of the Association for Computational Linguistics (ACL-04), pages 271–278, Barcelona, Spain.
Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. GloVe: Global vectors for word representation. In *Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)*, pages 1532–1543, Doha, Qatar.
Association for Computational Linguistics.
Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. 2016. " why should i trust you?" explaining the predictions of any classifier. In *Proceedings of* the 22nd ACM SIGKDD international conference on knowledge discovery and data mining, pages 1135–
1144.
Victor Sanh, Lysandre Debut, Julien Chaumond, and Thomas Wolf. 2019. Distilbert, a distilled version of BERT: smaller, faster, cheaper and lighter. *CoRR*, abs/1910.01108.
Sofia Serrano and Noah A Smith. 2019. Is attention interpretable? In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2931–2951.
Avanti Shrikumar, Peyton Greenside, and Anshul Kundaje. 2017. Learning important features through propagating activation differences. In *International* conference on machine learning, pages 3145–3153.
PMLR.
Jacob Sippy, Gagan Bansal, and Daniel S Weld. 2020.
Data staining: A method for comparing faithfulness of explainers. In Proceedings of ICML Workshop on Human Interpretability in Machine Learning.
Richard Socher, John Bauer, Christopher D. Manning, and Andrew Y. Ng. 2013. Parsing with compositional vector grammars. In *Proceedings of the 51st* Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 455–465, Sofia, Bulgaria. Association for Computational Linguistics.
Mukund Sundararajan, Ankur Taly, and Qiqi Yan. 2017.
Axiomatic attribution for deep networks. In *International Conference on Machine Learning*, pages 3319–3328. PMLR.
Swabha Swayamdipta, Roy Schwartz, Nicholas Lourie, Yizhong Wang, Hannaneh Hajishirzi, Noah A. Smith, and Yejin Choi. 2020. Dataset cartography: Mapping and diagnosing datasets with training dynamics. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP),
pages 9275–9293, Online. Association for Computational Linguistics.
Martin Tutek and Jan Šnajder. 2020. Staying true to your word:(how) can attention become explanation?
In *Proceedings of the 5th Workshop on Representation Learning for NLP*, pages 131–142.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Ł ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In *Advances in Neural Information Processing Systems*, volume 30. Curran Associates, Inc.
Sarah Wiegreffe and Yuval Pinter. 2019. Attention is not not explanation. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP),
pages 11–20.
| Base | Conicity | Tying | | |
|--------|------------|---------|-----|-----|
| SUBJ | .94 | .89 | .94 | |
| DBERT | SST | .85 | .85 | .85 |
| TREC | .94 | .89 | .91 | |
| IMDB | .90 | .89 | .89 | |
| SUBJ | .93 | .90 | .91 | |
| SST | .82 | .79 | .81 | |
| JWA | TREC | .91 | .87 | .89 |
| IMDB | .90 | .87 | .88 | |
## A Reproducibility A.1 Experimental Results A.1.1 Setup
For both JWA and DBERT, we use the same preprocessing pipeline on all four datasets. First, we filter out instances with fewer than three tokens to achieve stable agreement evaluation.4 Next, we lowercase the tokens, remove non-alphanumeric tokens, and truncate the sequence to 200 tokens if the sequence length exceeds this threshold. We set the maximum vocabulary size to 20k for models which do not leverage subword vocabularies.
## A.1.2 Validation Set Performance
We report the validation set performance in Table 6.
## A.1.3 Computing Infrastructure
We conducted our experiments on 2× *AMD Ryzen* Threadripper 3970X 32-Core Processors and 2×
NVIDIA GeForce RTX 3090 GPUs with 24GB of RAM. We used *PyTorch* version 1.9.0 and CUDA
11.4.
## A.1.4 Average Runtime
Table 9 shows the average experiment runtime for each model across the datasets we used.
## A.1.5 Number Of Parameters
The JWA and DBERT models that we used contained 1, 714, 951 and 66, 954, 241 trainable parameters, respectively.
| JWA | DBERT | |
|-------|---------|-------|
| SUBJ | 3.4 | 11.2 |
| SST | 2.7 | 8.9 |
| TREC | 1.2 | 3.7 |
| IMDB | 6.1 | 107.5 |
| JWA | DBERT | | | |
|-------|---------|-----|-----|-----|
| C | T | C | T | |
| SUBJ | 1. | 1. | 5. | 1. |
| SST | 1. | 0.5 | 0.1 | 0.5 |
| TREC | 1. | 1. | 0.1 | 0.3 |
| IMDB | 0.3 | 1. | 1. | 1. |
## A.2 Hyperparameter Search
We used the following parameter grids for JWA:
[10−1, 10−2, 10−3, 10−4, 10−5, 10−6] for learning rate, and [50, 100, 150, 200] for the hidden state dimension. We yield best average results on validation sets across all datasets when the learning rate is set to 10−3and the hidden size is set to 150. For DBERT, we find that the most robust initial learning rate on the four datasets is 2 × 10−5, among the options we explored [5 × 10−4, 10−4, 10−5, 2 ×
10−5, 5 × 10−5, 10−6]. Additionally, we clip the gradients for both models such that the gradient norm ≤ 1. We use the Adam (Kingma and Ba, 2015) optimizer for JWA and AdamW (Loshchilov and Hutter, 2017) for DBERT. We run both models for 5 epochs and repeat the experiments 5 times with different seeds: [1, 2, 3, 4, 5].
For regularization methods, we conducted a grid search with parameter grid [0.1, 0.3, 0.5, 1, 5, 10]
for CONICITY and [0.1, 0.3, 0.5, 1, 5, 10, 20] for TYING. We select the models with the strongest regularization scale, which is within 3 F1 points from the unregularized model. Table 8 shows the selected values for each model across all datasets.
| JWA | DBERT | |
|-------|---------|-------|
| SUBJ | 3.4 | 11.2 |
| SST | 2.7 | 8.9 |
| TREC | 1.2 | 3.7 |
| IMDB | 6.1 | 107.5 |
Table 9: Experiment duration in minutes for both models across datasets. We report the average runtime over 5 different runs.
| Train | Validation | Test | Total | |
|---------|--------------|--------|---------|---------|
| SUBJ | 7, 000 | 1, 000 | 2, 000 | 10, 000 |
| SST | 6, 819 | 868 | 1, 810 | 9, 497 |
| TREC | 1, 987 | 159 | 486 | 2, 632 |
| IMDB | 17, 212 | 4, 304 | 4, 363 | 25, 879 |
Table 10: Number of instances in each split and the total number of instances in each dataset after we excluded too short examples (see section 3.1).
## A.3 Dataset Statistics
We report the number of instances per split for each dataset in Table 10. We note that all of the datasets we used contain predominantly texts in English.
## B Additional Experiments
![12_image_0.png](12_image_0.png)
We show the full version of local curvature statistics in Table 11 (without averaging over datasets).
![12_image_1.png](12_image_1.png)
| Confidence | Ambiguity | Variability | | | | |
|--------------|-------------|---------------|--------|---------|--------|---------|
| B | T | B | T | B | T | |
| SUBJ | −.37.00 | −.06.00 | .48.00 | .05.02 | .45.00 | .00.83 |
| SST | −.40.00 | .02.34 | .58.00 | .08.00 | .42.00 | −.23.00 |
| TREC | −.32.00 | .05.32 | .39.00 | −.12.01 | .38.00 | .18.00 |
| IMDB | −.46.00 | −.11.00 | .61.00 | .17.00 | .60.00 | .30.00 |
| SUBJ | −.59.00 | .01.70 | .79.00 | −.10.00 | .74.00 | .06.00 |
| SST | −.45.00 | .30.00 | .70.00 | −.44.00 | .49.00 | .30.00 |
| TREC | −.55.00 | .17.00 | .70.00 | −.26.00 | .68.00 | .25.00 |
| IMDB | −.53.00 | .50.00 | .70.00 | −.74.00 | .64.00 | .04.00 |
![13_image_0.png](13_image_0.png)
![13_image_1.png](13_image_1.png)
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
A1. Did you describe the limitations of your work?
Left blank.
A2. Did you discuss any potential risks of your work?
Left blank.
A3. Do the abstract and introduction summarize the paper's main claims?
Left blank.
A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B **Did You Use Or Create Scientific Artifacts?**
Left blank.
B1. Did you cite the creators of artifacts you used?
Left blank.
B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
Left blank.
B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
Left blank.
B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
Left blank.
B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
Left blank.
B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
Left blank.
## C **Did You Run Computational Experiments?**
Left blank.
C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
Left blank.
The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance.
C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Left blank.
C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
Left blank.
C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
Left blank.
D **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
Left blank.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)?
Left blank.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
Left blank.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
Left blank.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
Left blank. |
nguyen-etal-2023-enhancing | Enhancing Cross-lingual Transfer via Phonemic Transcription Integration | https://aclanthology.org/2023.findings-acl.583 | Previous cross-lingual transfer methods are restricted to orthographic representation learning via textual scripts. This limitation hampers cross-lingual transfer and is biased towards languages sharing similar well-known scripts. To alleviate the gap between languages from different writing scripts, we propose PhoneXL, a framework incorporating phonemic transcriptions as an additional linguistic modality beyond the traditional orthographic transcriptions for cross-lingual transfer. Particularly, we propose unsupervised alignment objectives to capture (1) local one-to-one alignment between the two different modalities, (2) alignment via multi-modality contexts to leverage information from additional modalities, and (3) alignment via multilingual contexts where additional bilingual dictionaries are incorporated. We also release the first phonemic-orthographic alignment dataset on two token-level tasks (Named Entity Recognition and Part-of-Speech Tagging) among the understudied but interconnected Chinese-Japanese-Korean-Vietnamese (CJKV) languages. Our pilot study reveals phonemic transcription provides essential information beyond the orthography to enhance cross-lingual transfer and bridge the gap among CJKV languages, leading to consistent improvements on cross-lingual token-level tasks over orthographic-based multilingual PLMs. | # Enhancing Cross-Lingual Transfer Via Phonemic Transcription Integration
Hoang H. Nguyen1, Chenwei Zhang2, Tao Zhang1, Eugene Rohrbaugh3**, Philip S. Yu**1 1 Department of Computer Science, University of Illinois at Chicago, Chicago, IL, USA
2 Amazon, Seattle, WA, USA
3 Harrisburg University of Science and Technology , Harrisburg, PA, USA
{hnguy7,tzhang90,psyu}@uic.edu, [email protected], [email protected]
## Abstract
Previous cross-lingual transfer methods are restricted to orthographic representation learning via textual scripts. This limitation hampers cross-lingual transfer and is biased towards languages sharing similar well-known scripts. To alleviate the gap between languages from different writing scripts, we propose **PhoneXL**, a framework incorporating phonemic transcriptions as an additional linguistic modality beyond the traditional orthographic transcriptions for cross-lingual transfer. Particularly, we propose unsupervised alignment objectives to capture (1) local one-to-one alignment between the two different modalities, (2) alignment via multi-modality contexts to leverage information from additional modalities, and (3) alignment via multilingual contexts where additional bilingual dictionaries are incorporated. We also release the first phonemic-orthographic alignment dataset on two token-level tasks (Named Entity Recognition and Part-of-Speech Tagging) among the understudied but interconnected Chinese-Japanese-Korean-Vietnamese
(CJKV) languages. Our pilot study reveals phonemic transcription provides essential information beyond the orthography to enhance cross-lingual transfer and bridge the gap among CJKV languages, leading to consistent improvements on cross-lingual token-level tasks over orthographic-based multilingual PLMs.1
## 1 Introduction
Despite recent advances in cross-lingual pretrained language models (PLM) such as mBERT
(Devlin et al., 2019), XLM-R (Conneau et al.,
2020), PLMs remain heavily biased towards highresourced languages due to the skewed amount of available pre-training data under parameter capacity constraints. This heavily affects the downstream task performance of less-represented languages during pre-training. In addition, as most
| meanings. | Orthographic | Phonemic |
|-------------|-----------------------|-----------------------|
| EN | electronic industry | IlEktôAnIk Ind@stôi |
| ZH (src) | 电子 行业 | tjEn tsW xAN jE |
| VI (tgt) | Công nghiệp Điện tử | koN Ni@p di@n tW |
| JA (src) | 電子 産業 | dEnSi sæNju |
| KO (tgt) | 전자 산업 | >ÃE@nj@ sæniawp |
| EN | Vietnam News Agency | viEtnAm nuz ej>Ã@nsi |
| ZH (src) | 越南 通讯社 | 4œ nan t hUN Cyn s7 |
| VI (tgt) | Thông tấn xã Việt Nam | t hoN t7n sa vi@t nam |
| JA (src) | ベトナム 通信社 | bIt@nAmu ts2uSInS@ |
| KO (tgt) | 베트남 통신사 | bEtunæm tONsIns@ |
high-resourced languages share the similar scripts
(i.e. mostly Latin scripts), PLMs tend to perform well on languages sharing similar scripts with those languages (Pires et al., 2019; Muller et al., 2021; Fujinuma et al., 2022). As most challenging lowresourced languages do not share similar scripts with common high-resourced languages, this limitation leaves them significantly behind.
To alleviate the challenges on low-resource or zero-resource target languages, recent works attempt to transfer knowledge from high-resourced languages (typically English (EN)) to lowresourced target languages via augmentations from bilingual dictionaries and parallel corpora. However, these approaches are restricted to English source language and results in less significant performance gain on languages that are more distant from English (Yang et al., 2022; Fujinuma et al., 2022). Languages can be considered distant due to the differences in orthography, phonology, morphology and grammatical structures. The fact that performance drop occurs on distant languages 2For the sake of simplicity and clearer comparison of phonemic representation similarity between source and target languages, we omit the tonal IPA characters. Tonal IPA
characters are preserved as a part of the phonemic inputs for tokenization and training purposes.
1Our code and datasets are publicly available at https://github.com/nhhoang96/phonemic_xlingual when transferring from English indicates that additional works are needed to exploit connectivity between closely related languages, especially under extreme parameter constraints.
Besides purely relying on the orthographic representation in the format of written scripts, additional modality of languages such as articulatory signals can provide essential information beyond the written scripts to enhance textual representations
(Bharadwaj et al., 2016; Chaudhary et al., 2018).
Phonemic transcriptions which capture linguistic articulatory signals are beneficial to understanding non-Latin-based languages when integrated with PLMs (Sun et al., 2021). They can also facilitate for knowledge transfer between languages sharing lexical similarities in phonology but possessing different writing scripts. As demonstrated in Table 1, despite differences in orthographic representations, the terms "电子" (ZH) and "Điện tử" (VI)
possess significant phonemic similarities when encoded into International Phonetic Alphabet (IPA).
Similarly, although "ベトナム" (JA) and "베트 남" (KO) are different, their phonemic representations ("bIt@nAmu" and "bEtunæm" respectively) are almost identical in terms of articulatory features.
Motivated by the inherent lexical similarities in terms of phonology among CJKV languages, we propose a novel cross-lingual transfer framework to integrate and synthesize two specific linguistic modalities (1) textual orthographic input scripts,
(2) phonemic transcription, represented in International Phonetic Alphabet (IPA) format. Our unified cross-lingual transfer framework aims to effectively (1) align both orthographic and phonemic transcriptions via multi-modality learning, (2) capture additional alignment between the two modalities via contextual information, (3) enhance crosslingual transfer of the two modalities with additional bilingual dictionary. Our work specifically targets Chinese-Vietnamese-Japanese-Korean languages which are not well-studied in cross-lingual transfer and possess lexical similarities with one another in terms of phonology. Our contributions can be summarized as follows:
- We provide the first pre-processed orthographic-phonemic transcription alignment dataset for token-level tasks
(i.e. Part-of-Speech Tagging (POS) and Named Entity Recognition (NER)) among CJKV languages (Chinese-Japanese-KoreanVietnamese).
- We propose a multi-modality learning paradigm with unsupervised alignment objectives to fuse the knowledge obtained from both modalities/ transcriptions to enhance cross-lingual transfer.
- Our proposed framework yields consistent improvements over the orthographic-based multilingual PLMs (mBERT and XLM-R) on both POS and NER tasks.
## 2 Related Work
Cross-lingual Transfer Recent works in Crosslingual transfer focus on generating multilingual contextualized representation for different languages based on the Pre-trained Language Models
(PLM) via bilingual dictionaries (Qin et al., 2021)
and/or machine translation approaches (Fang et al.,
2021; Yang et al., 2022). Qin et al. (2021) proposes a comprehensive code-switching technique via random selection of languages, sentences, and tokens to enhance multilingual representations, leading to improved performance on target languages on different downstream tasks. On the other hand, other approaches leverage parallel corpora generated by Machine Translation to (1) distill knowledge from source languages to target languages
(Fang et al., 2021) or augment source language data with target language knowledge during training (Yang et al., 2022; Zheng et al., 2021). However, current cross-lingual efforts concentrate on single source language (EN) to multiple target languages. Under parameter capacity constraints, cross-lingual transfer has been shown to be biased towards high-resourced languages which share similar scripts and possess larger corpora of unlabeled data during pre-training (Fujinuma et al., 2022).
Unlike previous works, we specifically target enhancing performance of low-resourced languages by exploiting inherent linguistic similarities between closely-connected languages (Nguyen and Rohrbaugh, 2019; Zampieri et al., 2020).
Multi-modality Learning Multi-modality learning (Radford et al., 2021; Li et al., 2021, 2022) was initially proposed for the task of Visual-Question Answering (Goyal et al., 2017). The objective is to find alignment between the given images and textual input (i.e. caption). The two aligned modalities are trained to maximize the agreement with ground truth textual-image alignment. Despite its simple objectives, the CLIP (Radford et al., 2021)
pre-training mechanism is considered the state-ofthe-art in multi-modality representation learning.
Motivated by multi-modality learning, we integrate multi-modality learning approaches in unifying two modalities of transcriptions (orthographic and phonemic) for better representation enrichment.
## 3 Problem Formulation
In this work, we study the problem of Cross-lingual Transfer in a bilingual setting where there exists annotated data collection of high-resource language S, namely D*train* S = {(X
(S)
i, Y (S)
i)}
Ns i=1, and unlabeled data collection of low-resource target language T, denoted as D*test* T = {(X
(T)
j)}
Nt j=1. Ns, Nt denote the sample size of source language training data and target language inference dataset respectively.
Formally, given an i-th input source language utterance with the length of M orthographic tokens x
(S)
i = [x
(S) i,1
, x
(S) i,2
..., x
(S)
i,M ] and the corresponding phonemic transcriptions z
(S)
i =
[z
(S)
i,1
, z
(S)
i,2
..., z
(S)
i,M ] and token-level labels y
(S)
i =
[y
(S)
i,1
, y
(S)
i,2
..., y
(S)
i,M ], the overall training objective is summarized as:
$$\theta_{S}=\underset{\theta}{\operatorname{argmin}}\frac{1}{N_{s}}\sum_{i=1}^{N_{s}}l(F(x_{i}^{(S)},z_{i}^{(S)};\theta),y_{i}^{(S)})\tag{1}$$
where F(·) denotes the transformation function that takes an input of both x
(S)
i, z
(S)
ito output probability prediction of label y
(S)
ifor individual tokens. θ denotes the parameters of the transformation framework and l(·) is the token-level crossentropy loss.
The overall trained framework is then evaluated in a zero-shot setting on target language T as follows:
$$p(y^{(T)}|x^{(T)},z^{(T)})=\underset{k}{argmax}F(x^{(T)},z^{(T)};\theta_{S}),\tag{2}$$
(2)
where k denotes the label space of token-level tasks.
In our cross-lingual transfer settings, no labeled target data or parallel data between source and target language is leveraged during training.
## 4 Proposed Framework
In this section, we introduce our proposed Crosslingual Transfer Framework, namely **PhoneXL**,
with 3 major learning objectives: (1) OrthographicPhonemic Alignment (L*align*), (2) Contextual Cross-modality Alignment (LMLM) , (3) Contextual Cross-lingual Alignment (L*XMLM* ). The overall learning objective is summarized as follows:
L = Ltask+αLalign+βLMLM +γL*XMLM* (3)
where L*task* denotes the corresponding downstream token-level task and *λ, β, γ* correspond to weights of the respective losses for balanced loss integration. For L*task* is computed based on the training objective in Equation 1 as we leverage the generic CRF layer on top of the sequence output from PLM to generate the probability distribution of each token over the token-level class labels.
## 4.1 Orthographic-Phonemic Alignment
Traditional PLMs such as BERT (Devlin et al.,
2019), XLM-R (Conneau et al., 2020) encode the pre-tokenized input texts into 3 separate trainable embeddings: (1) token embedding (−→wt), (2)
positional embedding(−→wp), (3) segment embedding (−→ws) where −→wt,−→wp,−→ws ∈ R
D and D denotes the hidden dimensions of corresponding PLM. In our work, we name the token embedding as orthographic embedding (OE) to distinguish from
(1) phonemic embedding, (2) unified embedding from both phonemic and orthographic inputs. The overall representation of individual tokens is computed as a summation of three types of embedding
−→w =−→wt +−→wp +−→ws.
With the goal of enhancing textual representations via both orthographic and phonemic transcriptions, we introduce the Phonemic Embedding
(PE) to directly capture phonemic transcriptions.
Phonemic embedding, namely −−→wP E ∈ R
D, encodes the representations of phonemic transcription inputs. Phonemic Embedding is integrated with orthographic embedding, positional and segment embedding to form the token representations 3.
Motivated by previous works (Conneau and Lample, 2019; Chaudhary et al., 2020), we introduce additional Language Embedding (−→wl ∈ R
D)
to encode the input language types. These signals are beneficial to training objectives in cross-lingual settings with code-switched inputs introduced in Section 4.3.
The final word representation for the PLM Encoder is −→w =−→wt +−→wp +−→ws +−−→wP E +−→wl. We 3To ensure the alignment between length-variable phonemic and orthographic input resulted from tokenization, we meanpool the embedding of sub-tokens of individual inputs to construct representation for token-level tasks.
![3_image_0.png](3_image_0.png)
denote −→v = Q(−→w ) as the word representation produced by PLM where Q(·) denotes the PLM
Encoder function.
To encourage the alignment between the orthographic textual input and its corresponding phonemic representation, we leverage cross-modality alignment and propose the computation of the phonemic-orthographic alignment loss:
LOtoP = CrossEntropy(simOtoP *, labels*) (4)
The similarity matrices between phonemic and orthographic inputs (sim*OtoP* ) is computed as:
$$sim_{OtoP}=\sum_{m}^{M}\frac{\overline{w_{m,P}E}}{||\overline{w_{m,P}E}||}*\frac{\overline{w_{m,t}}}{||\overline{w_{m,t}}||}*\tau\tag{5}$$ where $\tau$ denotes the learnable soft tempera
where τ denotes the learnable soft temperature parameter and *|| · ||* is L2-normalization.
−−−−→ wm,P E,−−−→ w*im,t* denote OE and PE of the m-th token in a sentence of length M.
Similarly to text-image alignment in crossmodality learning (Radford et al., 2021), the alignment is computed as a bi-directional interaction between orthographic and phonemic transcriptions.
Therefore, the overall alignment loss is as follows:
$${\mathcal{L}}_{a l i g n}=({\mathcal{L}}_{O t o P}+{\mathcal{L}}_{P t o O})/2\qquad\qquad(6)$$
## 4.2 Contextual Cross-Modality Alignment
The introduced alignment from 4.1 is restricted to 1-1 alignment between IPAs and tokens. However, contexts can significantly affect alignment between IPA and tokens. For instance, the orthography of 行 usually corresponds to [C¯iN]. However, the orthography of 行业 corresponds to [x´AN j`E]
To overcome the challenges, we propose introducing additional Masked Language Modeling to further align the two modalities. In other words, we randomly mask µ% of input orthographic tokens and train the models to predict the masked tokens via (1) contextual/ non-masked orthographic tokens
, (2) all of the phonemic transcriptions (including those of masked tokens). This objective encourages the alignment between phonemic and orthographic inputs via contextual information from both modalities of languages. Specifically, given a masked orthographic input and its corresponding phonemic representation, the model aims at predicting the masked tokens correctly. The loss is summarized as follows:
$${\mathcal{L}}_{M L M}=-\sum_{j\in C}l o g(P(y_{j}|{\overrightarrow{v_{j}}};\theta)\qquad(7)$$
Table 2: Details of processed PANX and UDPOS
datasets. We report statistics of source language training set and target language testing set for ZH-VI language pair.
| PANX | UDPOS | | | |
|-----------------------------------|---------|--------|--------|-------|
| Source | Target | Source | Target | |
| # Labels | 7 | 7 | 18 | 18 |
| # Samples | 20000 | 10000 | 13320 | 1710 |
| Avg Token Length | 25.88 | 21.47 | 20.88 | 10.97 |
| Avg Tokenized Orthographic Length | 25.88 | 21.47 | 32.15 | 25.92 |
| Avg Tokenized Phonemic Length | 47.61 | 45.03 | 59.71 | 67.94 |
where yj ,−→vj denote the j-th location ground truth MLM label and input representation produced by PLM as introduced in 4.1 respectively. C denotes the number of masked tokens in the input training sentence.
## 4.3 Cross-Lingual Contextual Alignment
Up to this point, the model does not leverage any specific cross-lingual signals for knowledge transfer between source and target languages. Therefore, we further introduce the Cross-lingual Contextual Alignment objective via bilingual dictionary.
Similarly to Contextual Multi-modal Alignment as introduced in Section 4.2, we leverage MLM
objectives to encourage the recognition of source language orthographic tokens given the phonemic inputs and multilingual orthographic contexts. The major difference between XMLM and MLM is that the input to XMLM is the code-switched input utterances which contain a mix of tokens from both source and target languages. Specifically, following (Qin et al., 2021), we conduct random codeswitching of tokens of the source input utterances with ratio of r% where r is a hyperparameter. The code-switched inputs follow the same procedure of MLM as introduced in Section 4.2. The XMLM
training objective is summarized as follows:
$${\mathcal{L}}_{X M L M}=-\sum_{j\in C^{\prime}}l o g(P(y_{j}|\overrightarrow{\hat{v}_{j}};\theta)\qquad(8)$$
where −→
v˜j is the PLM representation of j-th token for the code-switched input sentences of source language and C′is the number of masked tokens based on percentage of code-switched source language inputs. Depending on the selected tokens for code-switching and its corresponding target language tokens, the absolute values of C and C′are not necessarily the same since the number of tokens in source samples and code-switched samples might not be identical.
## 5 Experiments 5.1 Datasets & Preprocessing
We evaluate our proposed framework on tokenlevel tasks, including Named Entity Recognition
(NER) and Part-of-Speech Tagging (POS) among four languages: Chinese (ZH), Vietnamese (VI),
Japanese (JA) and Korean (KO). Based on linguistic structural similarities (SVO vs SVO structural orders) and lexical similarities in terms of phonemic representations, we divide the four languages into 2 groups: (JA,KO) and (ZH,VI) where the former in each pair is considered a high-resourced language and the latter is a low-resourced counterpart.
During training, only high-resourced languages are leveraged and we conduct zero-shot evaluation on the target low-resourced languages.
To evaluate our proposed framework on tokenlevel tasks, we first construct a new dataset by preprocessing the alignment between orthographic and phonemic transcriptions. Specifically, we leverage NER and POS datasets (namely PANX and UDPOS) from XTREME benchmark datasets (Hu et al., 2020). Given input utterances from the datasets, we generate the corresponding phonemic transcriptions on token-level. As phonemic transcriptions can either **Romanized Transcriptions**
(i.e. Pinyin for ZH, Romaji for JA, Romaja for KO)
or **IPA Transcriptions**, we generate both types of phonemic transcriptions and conduct empirical study on both in Section 6.
Generating Romanized Transcriptions As VI
is written in Latin, we preserve the original characters as the corresponding Romanized transcriptions.
For ZH, JA, KO, we directly obtain the corresponding Romanized transcriptions via dragonmapper, pykakasi and korean_romanizer respectively.
Generating IPA Transcriptions As PanPhon
(Mortensen et al., 2016) does not support some of our targeted languages (JA, KO), we leverage external open-source tools to generate IPA transcriptions for individual languages. Specifically, we use dragonmapper, viphoneme to generate IPA
transcriptions for ZH, VI respectively. As there are no direct open-source IPA transcription tools available for JA,KO, we generate IPA transcriptions via a two-stage process. First, we generate Romanized transcrptions for JA, KO via pykakasi and korean_romanizer respectively. Then, as the aforementioned Romanized transcriptions are in Latin-based format, we treat them as Latin char-
| Model | PANX | UDPOS | | | | | | |
|--------------------|--------------|--------------|--------------|--------------|--------------|--------------|--------------|--------------|
| ZH->VI | JA->KO | ZH->VI | JA->KO | | | | | |
| Source (ZH) | Target (VI) | Source (JA) | Target (KO) | Source (ZH) | Target (VI) | Source (JA) | Target (KO) | |
| mBERT | 78.10 ± 0.25 | 49.94 ± 1.44 | 69.85 ± 0.17 | 26.64 ± 0.17 | 89.93 ± 0.02 | 48.62 ± 0.66 | 86.24 ± 0.13 | 43.63 ± 1.28 |
| PhoneXL (full) | 80.42 ± 0.07 | 52.28 ± 0.98 | 72.90 ± 0.37 | 29.25 ± 0.59 | 90.53 ± 0.04 | 50.71 ± 0.40 | 90.00 ± 0.15 | 46.75 ± 0.09 |
| PhoneXL (w Lalign) | 79.71 ± 0.21 | 51.09 ± 0.42 | 72.01 ± 0.11 | 28.23 ± 0.32 | 90.42 ± 0.03 | 50.29 ± 0.13 | 89.56 ± 0.07 | 45.96 ± 0.47 |
| PhoneXL (w LMLM ) | 79.70 ± 0.17 | 50.23 ± 1.63 | 72.62 ± 0.02 | 27.90 ± 0.11 | 90.44 ± 0.03 | 50.49 ± 0.67 | 89.53 ± 0.16 | 45.94 ± 0.45 |
| PhoneXL (w LXMLM ) | 79.69 ± 0.15 | 50.83 ± 0.63 | 72.57 ± 0.57 | 28.85 ± 0.71 | 90.40 ± 0.05 | 50.20 ± 1.63 | 89.63 ± 0.16 | 45.25 ± 0.73 |
| XLM-R | 75.31 ± 0.46 | 35.68 ± 0.66 | 66.31 ± 0.06 | 14.80 ± 0.97 | 91.28 ± 0.04 | 50.40 ± 0.51 | 89.94 ± 0.17 | 46.16 ± 0.24 |
| PhoneXL (full) | 77.00 ± 0.24 | 38.88 ± 0.15 | 69.02 ± 0.24 | 16.39 ± 0.13 | 91.43 ± 0.24 | 52.73 ± 0.86 | 90.06 ± 0.04 | 48.82 ± 0.43 |
| PhoneXL (w Lalign) | 76.41 ± 0.09 | 37.04 ± 0.68 | 68.76 ± 0.25 | 15.34 ± 0.13 | 91.39 ± 0.02 | 52.46 ± 0.17 | 90.01 ± 0.12 | 47.96 ± 0.62 |
| PhoneXL (w LMLM ) | 76.70 ± 0.07 | 37.29 ± 0.34 | 67.62 ± 0.13 | 15.16 ± 0.58 | 91.14 ± 0.02 | 51.88 ± 1.53 | 90.02 ± 0.05 | 47.83 ± 0.39 |
| PhoneXL (w LXMLM ) | 76.52 ± 0.15 | 37.15 ± 0.30 | 68.68 ± 1.39 | 15.89 ± 0.79 | 91.04 ± 0.05 | 51.15 ± 1.40 | 89.90 ± 0.37 | 47.85 ± 0.56 |
| Model | Assumption | PANX | UDPOS | | | | | | |
|----------------|--------------|---------------------------|--------------|---------------------------|---------------------------|---------------------------|---------------------------|--------------|--------------|
| Dict | MT | ZH->VI | JA->KO | ZH->VI | JA->KO | | | | |
| Source (ZH) | Target (VI) | Source (JA) | Target (KO) | Source (ZH) | Target (VI) | Source (JA) | Target (KO) | | |
| mBERT | 78.10 ± 0.25 | 49.94 ± 1.44 | 69.85 ± 0.17 | 26.64 ± 0.17 | 89.93 ± 0.02 | 48.62 ± 0.66 | 86.24 ± 0.13 | 43.63 ± 1.28 | |
| CoSDA-ML | ✓ | 78.48 ± 0.34 | 47.82 ± 1.43 | 70.42 ± 0.50 | 25.76 ± 1.75 | 89.76 ± 0.19 | 49.84 ± 0.49 | 87.63 ± 0.14 | 41.19 ± 1.16 |
| X-MIXUP | ✓ | 78.87 ± 0.17 52.98 ± 0.05 | 68.10 ± 0.69 | 26.41 ± 1.06 | 89.41 ± 0.10 | 50.05 ± 0.95 | 87.58 ± 0.17 48.47 ± 0.37 | | |
| PhoneXL (full) | ✓ | 80.42 ± 0.07 52.28 ± 0.98 | 72.90 ± 0.37 | 29.25 ± 0.59 | 90.53 ± 0.04 | 50.71 ± 0.40 | 90.00 ± 0.15 46.75 ± 0.09 | | |
| XLM-R | 75.31 ± 0.46 | 35.68 ± 0.66 | 66.31 ± 0.06 | 14.80 ± 0.97 | 91.28 ± 0.04 | 50.40 ± 0.51 | 89.94 ± 0.17 | 46.16 ± 0.24 | |
| FILTER | ✓ | 72.55 ± 0.11 | 40.17 ± 1.35 | 62.92 ± 0.26 | 18.60 ± 1.02 | 90.57 ± 0.05 55.85 ± 0.27 | 90.81 ± 0.19 43.25 ± 1.52 | | |
| xTune | ✓ | ✓ | 77.48 ± 0.08 | 40.94 ± 0.87 | 68.02 ± 0.26 21.95 ± 1.02 | 91.75 ± 0.10 51.91 ± 0.74 | 89.75 ± 0.31 51.03 ± 1.26 | | |
| X-MIXUP | ✓ | 75.89 ± 0.46 | 38.22 ± 0.72 | 65.33 ± 0.69 | 16.43 ± 2.98 | 90.67 ± 0.06 | 50.30 ± 1.23 | 88.48 ± 0.23 | 50.63 ± 0.95 |
| PhoneXL (full) | ✓ | 77.00 ± 0.24 | 38.88 ± 0.15 | 69.02 ± 0.24 16.39 ± 0.13 | 91.43 ± 0.24 | 52.73 ± 0.86 | 90.06 ± 0.04 | 48.82 ± 0.43 | |
acters and input them to PanPhon to generate IPA
transcriptions for JA and KO. Details of our constructed datasets are provided in Table 2.
## 5.2 Implementation Details
For evaluation of both NER and POS tasks, we report the F-1 score for different individual language pairs. We report performance on both development and test set of both source and target languages.
As IPA involves unique characters that are outside the typical orthographic vocabulary (i.e. /N/,
/C/, /Ð/, /Z/), we extend PLM vocabulary to account for these special characters. Therefore, both OEs and PEs are resized to account for the newly extended vocabulary. The impact of extended vocabulary will be further discussed in Section 6.
For each language pair (ZH-VI vs JA-KO) and token-level tasks (NER vs POS), we tune hyperparameters of our framework based on the development set of the source language. Specifically, we conduct grid search for *λ, β, γ* over the space [0.1, 0.01, 0.001, 0.0001]. Mask ratio (µ) and cs_ratio
(r) are tuned over the space [0.1, 0.4] inclusive with a step of 0.05. Hyperparameter details for each task and language pair are reported in Table 5.
We train our model with the batch size of 32 and 16 for mBERT and XLM-R respectively. Both Table 5: Hyperparameters for PANX and UDPOS
datasets (NER and POS tasks respectively) on experimental language pairs ZH->VI and JA->KO
| PANX | UDPOS | | | |
|--------|---------|--------|--------|------|
| ZH->VI | JA->KO | ZH->VI | JA->KO | |
| λ | 0.01 | 0.1 | 0.01 | 0.1 |
| β | 0.01 | 0.001 | 0.001 | 0.01 |
| γ | 0.01 | 0.001 | 0.01 | 0.01 |
| µ | 0.20 | 0.25 | 0.10 | 0.05 |
| r | 0.40 | 0.30 | 0.40 | 0.30 |
multilingual base versions (L=12, H=12, D=768 where L,H,D denote the number of hidden layers, the number of attention heads per layer and hidden dimension respectively) are used as backbone PLM architectures for our experiments. Both training and inference of our framework are conducted on NVIDIA TITAN RTX and NVIDIA RTX 3090 GPUs. We report our experimental results as the average performance of 3 runs from different random seed initializations with standard deviations.
Due to space constraints, we report our empirical studies on test sets in Table 3 and 4. Additional results on development (dev) sets for both datasets are summarized in the Appendix A.
In our study, we leverage publicly available MUSE bilingual dictionary (Conneau et al., 2017).
As bilingual dictionary is only available between EN and target languages, we construct bilingual dictionaries for our language pairs (ZH-VI and JAKO) by leveraging EN as a bridge for semantic alignment between source and target languages.
## 5.3 Baseline
We compare our method with previously proposed cross-lingual transfer frameworks for token-level tasks. We conduct experiments with both mBERT
and XLM-R backbone PLM architecture. We compare our work with both approaches leveraging Machine Translation (MT) and/or Bilingual Dictionary (Dict), including:
- CoSDA-ML (Qin et al., 2021): Multi-level Code-switching augmentation for various cross-lingual NLP tasks, including POS and NER tasks.
- FILTER (Fang et al., 2021): Cross-lingual Transfer via Intermediate Architecture Disentanglement with Knowledge Distillation objectives from source to target languages.
- xTune (Zheng et al., 2021): Two-stage augmentation mechanisms with four exhaustive augmentation methods for cross-lingual transfer.
- XMIXUP (Yang et al., 2022): Cross-lingual transfer via Manifold Mixup Augmentation and Machine Translation Alignment between source and target languages.
As *FILTER, xTune* and *XMIXUP* require training parallel corpora, we generate translation of training data from source languages (ZH, JA) to target languages (VI, KO) via third-party MT package4.
Ground-truth label sequence of MT data is the same as the original source language data.
As *CoSDA-ML* and *xTune* both require bilingual dictionary for cross-lingual transfer, we leverage the reconstructed MUSE bilingual dictionary for our setting as introduced in 5.2.
## 6 Result & Discussion
Our experimental results for NER and POS tasks are summarized in Table 3 and 4. Based on the empirical study, our proposed *PhoneXL* framework consistently outperforms by the backbone PLM
architecture of mBERT and XLM-R in both evaluated token-level tasks for low-resourced target languages VI and KO. For instance, with ZH-VI
pair, we observe the target language's F1 evaluation metric improvement of 2.34 points and 2.01 points for NER and POS tasks respectively as compared to the fine-tuned backbone mBERT architecture. This improvement implies the phonemic information provides essential information beyond orthographic representation to further bridge the gap between the source and target languages.
In NER task, the larger and state-of-the-art multilingual PLM XLM-R yields worse performance than mBERT in both of our language pair performance on source and target languages. On the other hand, for POS task, XLM-R based architecture only results in marginal performance gains when compared with mBERT. Interestingly, our mBERT-based framework achieves competitive performance with XLM-R backbone architecture on POS task despite leveraging smaller vocabulary size and less pre-training data. We hypothesize this might be due to the fact that XLM-R
has been trained with more languages, leading to biases towards certain types of languages during pre-training that might not share common properties with CJKV languages.
Despite the performance gain over XLM-R
based architecture observed in Table 3, our PhoneXL framework does not consistently outperform the previous baselines such as *FILTER,*
xTune and *XMIXUP* in Table 4. However, these baselines require extra parallel corpora obtained from machine translation which might not always be readily available for all languages, especially for low-resourced ones. On the other hand, our proposed method achieves state-of-the-art performance among methods leveraging only bilingual dictionary. In addition, despite its impressive performance, *xTune* requires two-stage training procedures, four different exhaustive augmentation methods as well as the knowledge of both machine translation and bilingual dictionary. Therefore, it is deemed more time-consuming and resourceintensive than our approach.
## Romanized Vs Ipa Phonemic Transcriptions
As observed in Table 6, leveraging Romanized transcriptions from individual languages instead of IPA
degrades the downstream POS task performance on both low-resourced target languages (averaged 0.93 and 1.57 points of performance drop on VI and KO
respectively from PhoneXL-IPA (full) to *PhoneXLRomanized (full)*). We hypothesize it might be due 4https://pypi.org/project/googletrans/
Table 6: Ablation study of the impact of vocabulary
extension and IPA embedding on target language F1score in POS task (VI,KO respectively) with mBERT
backbone architecture.
ZH->VI JA->KO
mBERT (w/o PE, w/o extension) 48.62 ± 0.66 43.63 ± 1.28
mBERT (w PE, w/o extension) 48.95 ± 0.52 43.85 ± 0.14
mBERT (w PE, w extension) 49.14 ± 0.98 44.42 ± 0.36
PhoneXL-IPA (w/o Lang Embedding) 49.74 ± 0.28 45.84 ± 0.07
PhoneXL-Romanized (full) 49.78 ± 1.99 45.18 ± 2.03
PhoneXL-IPA (full) 50.71 ± 0.40 46.75 ± **0.09**
to the lack of phonemic consistency among Romanized transcriptions of different languages. In addition, as certain low-resourced languages might not have their own Romanized transcriptions, IPA
phonemic transcriptions provide a more generalized and consistent pathway to generate phonemic transcriptions across different language families.
Impact of Phonemic Embeddings Based on Table 6, we also observe that the introduction of IPA
embedding, even without any unsupervised objectives, also provide additional improvements as compared to the backbone orthographic-based mBERT
architecture. However, further training with our introduced objectives provide stronger boost in target language performance improvements on the downstream task.
Impact of Vocabulary Extension As observed in Table 6, vocabulary extension is important to learn effective Phonemic Embedding. Vocabulary extension allows the model to differentiate unique IPA characters when encoding tokens, leading to 0.19 and 1.42 points of F1 score improvements on mBERT for VI and KO respectively. It is intuitive since additional special phonemic characters possess different meanings than typical orthographic characters. However, we still observe a significant gap between *mBERT with PE* and *PhoneXL* framework. It is due to the lack of alignment between embedding of phonemic and orthographic inputs.
Impact of Unsupervised Objectives As observed in Table 3, each introduced alignment objective between phonemic and orthographic inputs provides additional performance gain over the original backbone mBERT and XLM-R PLMs on both language groups. Additionally, from Table 6, the existence of performance gap between simple introduction of PE (*mBERT (w PE, w extensions)*) and PhoneXL (i.e. 1.57 points on VI and 2.33 points on KO in POS task) implies that the unsupervised alignment objectives are crucial to bringing about Impact of Language Embedding Language Embedding is crucial to our framework performance, leading to consistent performance gain on both target languages in POS task. In fact, Language Embedding is especially important to L*XMLM* as inputs are code-switched sentences which are made up of tokens from different languages. Without language indication from Language Embedding, the model is unable to predict the correct masked tokens in the correct language.
## 7 Conclusion & Future Work
In our work, we propose **PhoneXL**, a novel mechanism to integrate phonemic transcription with orthographic transcription to further enhance representation capability of language models in cross-lingual transfer settings. By encouraging the alignment between the two linguistic modalities via direct one-to-one alignment, indirect contextual alignment and additional code-switching via bilingual dictionaries, our proposed **PhoneXL**
yields consistent performance improvements over the backbone orthographic-based PLM architecture in downstream cross-lingual token-level task among the CJKV languages. We also release the first aligned phonemic-orthographic datasets for CJKV languages for two popular token-level tasks
(NER and POS). In future work, we plan to train our proposed unsupervised objectives with larger CJKV corpora as pre-training mechanisms to evaluate effectiveness of the representations in multigranularity downstream tasks (i.e. sentence-level classification tasks to Question-Answering tasks).
Further extensions towards few-shot learning settings (Nguyen et al., 2020; Xia et al., 2020) where a small number of target language examples can be leveraged to exploit orthographic-phonemic similarity between source and target languages is a promising direction for our future work.
## Limitations
Our approach is heavily dependent on the quality of the pre-processed orthographic-phonemic transcription data as it provides the ground-truth for unsupervised alignment objectives. Generating phonemic transcriptions and aligning them correctly with orthographic representations can be costly. Despite our significant efforts, the alignment is still far from perfect optimality.
Secondly, our approach might not be effective in improving performance on randomly chosen language pairs. As our framework aims to exploit phonemic similarities of languages with different orthographic representations, the methods are only effective in cross-lingual transfer between lexically similar languages in terms of phonology such as CJKV languages. Languages that do not fall into this category might observer little to no performance gains with our proposed framework.
## References
Akash Bharadwaj, David Mortensen, Chris Dyer, and Jaime Carbonell. 2016. Phonologically aware neural model for named entity recognition in low resource transfer settings. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1462–1472, Austin, Texas. Association for Computational Linguistics.
Aditi Chaudhary, Karthik Raman, Krishna Srinivasan, and Jiecao Chen. 2020. Dict-mlm: Improved multilingual pre-training using bilingual dictionaries.
Aditi Chaudhary, Chunting Zhou, Lori Levin, Graham Neubig, David R Mortensen, and Jaime G Carbonell.
2018. Adapting word embeddings to new languages with morphological and phonological subword representations. *arXiv preprint arXiv:1808.09500*.
Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer, and Veselin Stoyanov. 2020. Unsupervised cross-lingual representation learning at scale. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*, pages 8440–
8451, Online. Association for Computational Linguistics.
Alexis Conneau and Guillaume Lample. 2019. Crosslingual language model pretraining. Advances in neural information processing systems, 32.
Alexis Conneau, Guillaume Lample, Marc'Aurelio Ranzato, Ludovic Denoyer, and Hervé Jégou. 2017.
Word translation without parallel data. *arXiv preprint* arXiv:1710.04087.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–
4186.
Yuwei Fang, Shuohang Wang, Zhe Gan, Siqi Sun, and Jingjing Liu. 2021. Filter: An enhanced fusion method for cross-lingual language understanding. In
Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, pages 12776–12784.
Yoshinari Fujinuma, Jordan Boyd-Graber, and Katharina Kann. 2022. Match the script, adapt if multilingual: Analyzing the effect of multilingual pretraining on cross-lingual transferability. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1500–1512, Dublin, Ireland. Association for Computational Linguistics.
Yash Goyal, Tejas Khot, Douglas Summers-Stay, Dhruv Batra, and Devi Parikh. 2017. Making the v in vqa matter: Elevating the role of image understanding in visual question answering. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 6904–6913.
Junjie Hu, Sebastian Ruder, Aditya Siddhant, Graham Neubig, Orhan Firat, and Melvin Johnson.
2020. Xtreme: A massively multilingual multi-task benchmark for evaluating cross-lingual generalisation. In *International Conference on Machine Learning*, pages 4411–4421. PMLR.
Junnan Li, Dongxu Li, Caiming Xiong, and Steven Hoi. 2022. Blip: Bootstrapping language-image pretraining for unified vision-language understanding and generation. *arXiv preprint arXiv:2201.12086*.
Junnan Li, Ramprasaath Selvaraju, Akhilesh Gotmare, Shafiq Joty, Caiming Xiong, and Steven Chu Hong Hoi. 2021. Align before fuse: Vision and language representation learning with momentum distillation.
Advances in neural information processing systems, 34:9694–9705.
David R. Mortensen, Patrick Littell, Akash Bharadwaj, Kartik Goyal, Chris Dyer, and Lori S. Levin. 2016. Panphon: A resource for mapping IPA segments to articulatory feature vectors. In *Proceedings of* COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers, pages 3475–3484. ACL.
Benjamin Muller, Antonios Anastasopoulos, Benoît Sagot, and Djamé Seddah. 2021. When being unseen from mbert is just the beginning: Handling new languages with multilingual language models.
In NAACL-HLT 2021-2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies.
Hoang Nguyen and Gene Rohrbaugh. 2019. Crosslingual genre classification using linguistic groupings.
Journal of Computing Sciences in Colleges, 34(3):91–
96.
Hoang Nguyen, Chenwei Zhang, Congying Xia, and S Yu Philip. 2020. Dynamic semantic matching and aggregation network for few-shot intent detection.
In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 1209–1218.
Telmo Pires, Eva Schlinger, and Dan Garrette. 2019.
How multilingual is multilingual bert? In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4996–5001.
Libo Qin, Minheng Ni, Yue Zhang, and Wanxiang Che.
2021. Cosda-ml: multi-lingual code-switching data augmentation for zero-shot cross-lingual nlp. In *Proceedings of the Twenty-Ninth International Conference on International Joint Conferences on Artificial* Intelligence, pages 3853–3860.
Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. 2021. Learning transferable visual models from natural language supervision. In International Conference on Machine Learning, pages 8748–8763.
PMLR.
Zijun Sun, Xiaoya Li, Xiaofei Sun, Yuxian Meng, Xiang Ao, Qing He, Fei Wu, and Jiwei Li. 2021. ChineseBERT: Chinese pretraining enhanced by glyph and Pinyin information. In *Proceedings of the 59th Annual Meeting of the Association for Computational* Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1:
Long Papers), pages 2065–2075, Online. Association for Computational Linguistics.
Congying Xia, Chenwei Zhang, Hoang Nguyen, Jiawei
Zhang, and Philip Yu. 2020. Cg-bert: Conditional text generation with bert for generalized few-shot intent detection. *arXiv preprint arXiv:2004.01881*.
Huiyun Yang, Huadong Chen, Hao Zhou, and Lei Li.
2022. Enhancing cross-lingual transfer by manifold mixup. *arXiv preprint arXiv:2205.04182*.
Marcos Zampieri, Preslav Nakov, and Yves Scherrer.
2020. Natural language processing for similar languages, varieties, and dialects: A survey. Natural Language Engineering, 26(6):595–612.
Bo Zheng, Li Dong, Shaohan Huang, Wenhui Wang, Zewen Chi, Saksham Singhal, Wanxiang Che, Ting Liu, Xia Song, and Furu Wei. 2021. Consistency regularization for cross-lingual fine-tuning. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing
(Volume 1: Long Papers), pages 3403–3417.
## A Additional Experiments
We provide additional experiments on dev set of PANX and UDPOS datasets of XTREME benchmark datasets in Table 7 and 8. Our observations are mostly consistent between dev and test sets on both evaluated token-level tasks.
Table 7: NER and POS Experimental Results on PANX and UDPOS dev datasets respectively.
Model PANX UDPOS
ZH->VI JA->KO ZH->VI JA->KO
Source (ZH) **Target (VI)** Source (JA) **Target (KO)** Source (ZH) **Target (VI)** Source (JA) **Target (KO)**
mBERT 78.54 ± 0.09 49.30 ± 0.25 69.05 ± 0.12 27.27 ± 0.19 92.72 ± 0.09 46.75 ± 0.66 92.42 ± 0.06 42.15 ± 0.74
PhoneXL (full) 79.89 ± 0.16 51.81 ± 0.62 72.20 ± 0.06 30.00 ± 0.77 93.75 ± 0.03 49.04 ± 0.29 96.41 ± 0.11 44.24 ± **0.19**
PhoneXL (w L*align*) 79.32 ± 0.24 50.78 ± 0.45 71.46 ± 0.09 28.80 ± 0.40 93.52 ± 0.05 48.40 ± 0.28 96.19 ± 0.06 43.65 ± 0.71
PhoneXL (w LMLM ) 79.40 ± 0.35 49.84 ± 1.72 72.12 ± 0.37 27.97 ± 0.22 93.41 ± 0.13 48.70 ± 0.73 96.29 ± 0.03 42.95 ± 0.58
PhoneXL (w L*XMLM* ) 79.34 ± 0.04 50.55 ± 0.73 72.07 ± 0.49 29.28 ± 0.80 93.43 ± 0.06 48.59 ± 0.46 96.12 ± 0.08 43.18 ± 0.70
XLM-R 75.33 ± 0.71 35.77 ± 0.75 66.14 ± 0.016 14.56 ± 0.80 94.92 ± 0.08 48.96 ± 0.46 96.44 ± 0.08 44.15 ± 0.17
PhoneXL (full) 76.54 ± 0.16 38.90 ± 0.31 68.85 ± 0.16 17.26 ± **0.23** 94.84 ± 0.41 51.45 ± 1.55 97.18 ± 0.02 45.97 ± **0.48**
PhoneXL (w L*align*) 76.27 ± 0.24 36.42 ± 0.23 68.17 ± 0.14 16.08 ± 0.01 95.00 ± 0.05 51.47 ± **0.30** 96.67 ± 0.08 45.00 ± 0.44
PhoneXL (w LMLM ) 76.16 ± 0.10 37.20 ± 0.11 68.12 ± 0.15 15.90 ± 0.35 94.26 ± 0.17 50.84 ± 1.56 96.70 ± 0.04 45.27 ± 0.41
PhoneXL (w L*XMLM* ) 76.09 ± 0.06 36.81 ± 0.28 68.26 ± 1.27 16.11 ± 0.23 94.22 ± 0.14 50.48 ± 1.06 96.67 ± 0.05 44.88 ± 0.62
Table 8: NER and POS Baseline Results on PANX and UDPOS dev datasets respectively. **Dict** denotes the
assumptions of available bilingual dictionary and MT refers to the assumptions of available Machine Translations
between source and target languages. Cross-lingual Transfer methods leverage either Dict or MT or both.
Model Assumption PANX UDPOS
Dict MT ZH->VI JA->KO ZH->VI JA->KO
Source (ZH) **Target (VI)** Source (JA) **Target (KO)** Source (ZH) **Target (VI)** Source (JA) **Target (KO)**
mBERT 78.54 ± 0.09 49.30 ± 0.25 69.05 ± 0.12 27.27 ± 0.19 92.72 ± 0.09 46.75 ± 0.66 92.42 ± 0.06 42.15 ± 0.74
CoSDA-ML ✓ 78.06 ± 0.19 47.22 ± 1.39 70.46 ± 0.43 26.10 ± 1.57 92.97 ± 0.17 48.46 ± 0.59 94.66 ± 0.18 40.85 ± 0.91
X-MIXUP ✓ 78.68 ± 0.21 53.62 ± **0.45** 67.70 ± 0.58 26.70 ± 0.87 94.26 ± **0.23** 48.61 ± 0.62 96.02 ± 0.07 48.70 ± **0.51**
PhoneXL (full) ✓ 79.89 ± **0.16** 51.81 ± 0.62 72.20 ± 0.06 30.00 ± **0.77** 93.75 ± 0.03 49.04 ± 0.29 96.41 ± **0.11** 44.24 ± 0.19
XLM-R 75.33 ± 0.71 35.77 ± 0.75 66.14 ± 0.016 14.56 ± 0.80 94.92 ± 0.08 48.96 ± 0.46 96.44 ± 0.08 44.15 ± 0.17
FILTER ✓ 72.13 ± 0.16 39.76 ± 1.31 62.96 ± 0.38 19.46 ± 0.95 93.14 ± 0.19 53.89 ± **0.28** 96.99 ± 0.07 39.39 ± 1.56
xTune ✓ ✓ 77.38 ± 0.10 40.58 ± **0.67** 68.26 ± 0.38 23.05 ± 0.95 95.34 ± **0.17** 50.29 ± 0.69 97.08 ± 0.08 49.54 ± **0.79**
X-MIXUP ✓ 73.20 ± 0.22 38.17 ± 0.78 64.46 ± 0.47 17.00 ± 2.81 94.80 ± 0.14 50.25 ± 1.05 96.42 ± 0.05 48.05 ± 0.96
PhoneXL (full) ✓ 76.54 ± 0.16 38.90 ± 0.31 68.85 ± **0.16** 17.26 ± 0.23 94.84 ± 0.41 51.45 ± 1.55 97.18 ± **0.02** 45.97 ± 0.48
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
✓ A1. Did you describe the limitations of your work?
Limitation section
✓ A2. Did you discuss any potential risks of your work?
Limitation section
✓ A3. Do the abstract and introduction summarize the paper's main claims?
Abstract and Section 1
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✓ **Did You Use Or Create Scientific Artifacts?** Section 5
✓ B1. Did you cite the creators of artifacts you used?
Section 5
B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
Not applicable. Left blank.
✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
Section 5 and Limitation B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
Not applicable. Left blank.
✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
Section 5
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
Section 5
## C ✓ **Did You Run Computational Experiments?** Section 5
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
Appendix The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance.
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Appendix
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
Section 5
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
Left blank.
D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
No response.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? No response.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
No response.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
No response.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
No response. |
chen-etal-2023-human | Human-in-the-loop Abstractive Dialogue Summarization | https://aclanthology.org/2023.findings-acl.584 | Abstractive dialogue summarization has received increasing attention recently. Despite the fact that most of the current dialogue summarization systems are trained to maximize the likelihood of human-written summaries and have achieved significant results, there is still a huge gap in generating high-quality summaries as determined by humans, such as coherence and faithfulness, partly due to the misalignment in maximizing a single human-written summary. To this end, we propose to incorporate different levels of human feedback into the training process. This will enable us to guide the models to capture the behaviors humans care about for summaries. Specifically, we ask humans to highlight the salient information to be included in summaries to provide the local feedback, and to make overall comparisons among summaries in terms of coherence, accuracy, coverage, concise and overall quality, as the global feedback. We then combine both local and global feedback to fine-tune the dialog summarization policy with Reinforcement Learning. Experiments conducted on multiple datasets demonstrate the effectiveness and generalization of our methods over the state-of-the-art supervised baselines, especially in terms of human judgments. | # Human-In-The-Loop Abstractive Dialogue Summarization
Jiaao Chen† Mohan Dodda† **Diyi Yang**⋄
† Georgia Institute of Technology ⋄ Stanford University
{jchen896,mohandodda}@gatech.edu [email protected]
## Abstract
Abstractive dialogue summarization has received increasing attention recently. Despite the fact that most of the current dialogue summarization systems are trained to maximize the likelihood of human-written summaries and have achieved significant results, there is still a huge gap in generating high-quality summaries as determined by humans, such as coherence and faithfulness, partly due to the misalignment in maximizing a single human-written summary. To this end, we propose to incorporate different levels of human feedback into the training process. This will enable us to guide the models to capture the behaviors humans care about for summaries. Specifically, we ask humans to highlight the salient information to be included in summaries to provide the *local feedback*, and to make overall comparisons among summaries in terms of coherence, accuracy, coverage, concise and overall quality, as the *global feedback*. We then combine both local and global feedback to fine-tune the dialog summarization policy with Reinforcement Learning. Experiments conducted on multiple datasets demonstrate the effectiveness and generalization of our methods over the stateof-the-art supervised baselines, especially in terms of human judgments1.
## 1 Introduction
Abstractive conversation summarization, which aims at processing, organizing, and distilling human interaction activities into natural, concise, and informative text (Murray et al., 2006; Wang and Cardie, 2013), is one of the most challenging and interesting tasks in text summarization. Growing attention has been paid to neural abstractive conversation summarization through a variety of designs including transferring document summarization models (Gliwa et al., 2019; Yu et al., 2021; Jia et al., 2022), utilizing conversational structures
(Chen and Yang, 2020; Feng et al., 2020b; Zhu et al., 2020a; Chen and Yang, 2021b; Liu et al.,
2019b; Lin et al., 2022; Zhang et al., 2022; Liu et al., 2021), introducing conversational data augmentation (Chen and Yang, 2021a), incorporating controllable signals (Narayan et al., 2021b; Wu et al., 2021) and pre-training conversation models
(Zhong et al., 2021). Most of them are trained with supervised learning, which maximizes the log probability of human written summaries. While they have gained impressive performances, there are still huge gaps in generating high-quality summaries as determined by humans such as coherence or faithfulness(Chen and Yang, 2021b), largely due to a misalignment between the fine-tuning objective
(maximizing the likelihood of single human-written summary) and the actual needs (generating more human-favored summaries) (Ziegler et al., 2019).
To train the summarization models on objectives that can more closely capture the behaviors humans care about, Reinforcement Learning (RL) has been used to directly optimize the rewards learned and constructed from human feedback (Ziegler et al., 2019; Stiennon et al., 2020; Bohm et al. ¨ , 2019; Ye and Simpson, 2021). Different kinds of feedback have been explored to construct the reward functions such as human ratings over CNN/DM
summaries (Bohm et al. ¨ , 2019), overall preferences among pairs of summaries (Ziegler et al., 2019),
and the similarity-redundancy matrix (Bohm et al. ¨ ,
2019). While achieving promising performances, they are mainly designed for document summarization with a single reward function learned from overall assessments on summaries(Bohm et al. ¨ ,
2019; Ziegler et al., 2019). As a result, they might not be directly adapted to dialogue summarization because of the intrinsic differences between documents and conversations. Compared to documents, conversations are generally less structured and more complex (Chen and Yang, 2020). There are diverse interactions between multiple speakers and complex structures such as interruptions, discourse relations, and speaker roles in dialogues
(Chen and Yang, 2020). Therefore, more subtle levels of human feedback with the consideration of *conversation structural information* is needed to provide more comprehensive, consistent, and generalizable rewards, which may lead to better performances for dialogue summarization.
To fill in this gap, we introduce Human-In-TheLoop (HITL) abstractive dialogue summarization with different levels of human feedback to leverage various conversation structures. Specifically, we incorporate two levels of human feedback: (1) **Local**
Feedback, which consists of highlighted words or phrases in dialogues to capture the salient structural information, including but not limited to speaker's intents, *identifiable events/topics*, and *discourse relations* (e.g., causal relationships and *important* emotions), and (2) **Global** Feedback, which includes dimensions like Coherence, Accuracy, Coverage, *Concise* and the *Overall Quality*, to provide more comprehensive human preferences on the given summary. We hire and train human annotators to provide the introduced two levels of human feedback on 1,000 randomly sampled conversations from the DialogSum dataset (Chen et al.,
2019). With the collected human feedback, we construct the **local reward (**rl) based on the similarities between the generated summaries and the annotated highlights and learn the **global reward**
(rg) models via supervised learning which predict the human preferences. Finally, we train the summarization policy via RL to maximize the rewards predicted by rl and rg. Specifically, the policy generates a token of text at each time step and is updated using the PPO algorithm (Ziegler et al.,
2019) based on the reward given to the entire generated summary. We conducted extensive experiments and ablation studies in different settings on the recent conversation summarization dataset, DialogSum (Chen et al., 2019) and SAMSum (Gliwa et al., 2019), to demonstrate the superiority of our methods compared to the state-of-the-art supervised learning baselines, especially in terms of human judgments and generalization abilities.
To summarize, our contributions are: (1) we introduced and collected the local and global feedback tailored for abstractive conversation summarization; (2) we designed the HITL to learn better conversation summarization policies via reinforcement learning where different levels of human feedback are directly optimized; (3) we performed extensive experiments to study the effectiveness and generation abilities of our HITL methods on DialogSum and SAMSum datasets.
## 2 Related Work 2.1 Abstractive Dialogue Summarization
Neural abstractive dialogue summarization has received intensive attention recently with the introduction of large-scale datasets (Gliwa et al., 2019; Chen et al., 2019; Tuggener et al., 2021). Besides directly transferring documents summarization methods to conversations (Gliwa et al., 2019),
models tailored for conversation have been proposed to achieve better performances (Zhao et al.,
2019; Zhu et al., 2020b; Feng et al., 2021), which make use of the rich structured information in conversations such as dialogue acts (Goo and Chen, 2018), key point/entity sequences (Liu et al., 2019a; Narayan et al., 2021a), topic segments (Liu et al.,
2019c; Li et al., 2019), stage developments (Chen and Yang, 2020), discourse relations (Chen and Yang, 2021b; Feng et al., 2020a), action mentions
(Zhang et al., 2022), and coreferences (Liu et al.,
2021). Recent work has also explored learning in a data-efficient way through data augmentation and semi-supervised learning (Chen and Yang, 2021a),
generating more controllable summaries (Wu et al.,
2021; Narayan et al., 2021b). Moreover, external information such as commonsense knowledge is incorporated to help understand the global conversation context (Feng et al., 2020b). Zhong et al.
(2021) pre-trained a language model on conversational data to help the summarization as well.
Most of the current dialogue summarization systems are still trained to maximize the likelihood of human-written text and have led to significant performances, but there is still a huge gap in generating high-quality summaries as determined by humans such as coherence, faithfulness, conciseness, and concreteness (Chen and Yang, 2020).
This is mainly due to the misalignment between the training objective and model evaluation. For example, models never plan and look ahead for overall summarization goals. To fill in this gap, we directly learn the summarization policy that maximizes the rewards constructed from human feedback via Reinforcement Learning to generate more human-favored summaries.
![2_image_0.png](2_image_0.png)
## 2.2 Learning With Human Feedback
Recent research has started to explore incorporating human feedback into the training process to achieve human-preferred systems in different tasks such as dialogue generation (Jaques et al., 2019; Yi et al., 2019 ; Hancock et al., 2019 ), story generation (Zhou and Xu, 2020), document summarization (Ziegler et al., 2019 ; Stiennon et al., 2020 ; Böhm et al., 2019 ) and etc. Our work is most related to previous work which utilizes human feedback to train document summarization models with Reinforcement Learning (RL) (Ziegler et al., 2019 ;
Stiennon et al., 2020 ; Böhm et al., 2019 ; Ye and Simpson, 2021 ), where human ratings/comparisons over summaries are usually used to learn the reward models to serve as the value networks in RL.
Despite the effectiveness, it is challenging to directly apply them to conversation summarization, largely due to the complex structures in conversations, which requires subtle reward design.
Inspired by these prior work, we introduce two levels of human feedback to guide the dialogue summarization model to generate more human-favored summaries instead of only collecting pairwise-comparing binary global rating annotations, including the (1) Local Feedback which highlights the important conversation structures to summarize and the (2) Global Feedback which consists of different fine-grained dimensions to provide more comprehensive judgments. Our work is also related to using RL to optimize automatic metrics for summarization, such as ROUGE (Ranzato et al.,
2015 ; Wu and Hu, 2018 ; Gao et al., 2019 ; Parnell et al., 2021 ), while we are directly optimizing human preferences with RL.
## 3 Methods
In this section, we introduce our Human-in-the-
Loop conversation summarization (HITL) pipeline
(in Figure 1 ) where we incorporate two levels of human feedback, the local and global feedback, into the learning process. Inspired by Stiennon et al. (2020), our pipeline for abstractive conversation summarization includes 3 stages: (1) Collecting two levels of human feedback from conversationsummary pairs where summaries are generated with baseline models; (2) Learning and designing reward models from two levels of human feedback; (3) Learning the summarization policy which could generate higher-quality summaries as judged by humans against the reward model.
## 3.1 Datasets
We utilize DialogSum (Chen et al., 2019 ), a recent large-scale dialogue summarization dataset emphasizing real-life daily conversations, to study humanin-the-loop conversation summarization. We selected DialogSum because the summaries in DialogSum are less extractive with more novel n-
The summaries are more compressed grams.
compared to the other conversation summariza-
| Dataset | # Turns | # Words | # Words in Sum |
|-----------|-----------|-----------|------------------|
| Sampled | 9.6 | 127.6 | 22.9 |
| DialogSum | 9.8 | 131.0 | 23.6 |
Table 1: Data statistics of sampled 1000 dialogues and DialogSum including the average number of turns and words in conversations and the average number of words in ground truth summaries.
tion datasets (Chen et al., 2019)
2, which makes the datasets more challenging and requires human knowledge to generate better summaries.
## 3.2 Collecting Human Feedback
Here we describe the process of getting the desired global and local human feedback.
## 3.2.1 Annotation Setup
Sampling Dialogues From this DialogSum dataset, we randomly sample 1,000 dialogues from 13,360 dialogues to collect our designed two levels of human feedback. As the data statistics shown in Table 1, the distribution of our sampled examples is close to that of the original DialogSum dataset.
Baseline Summaries We generate a set of baseline summaries with different models for the global feedback annotation. Specifically, for every dialogue, we generate 4 summaries with 4 different summarization systems: (1) BART-large fine-tuned on SAMSum and XSUM 3 with a 30.4/11.5/24.8 ROUGE score on DialogSum, (2) DistilBART finetuned on CNN/Daily Mail and SaumSUM 4 with a 33.8/13.6/27.8 ROUGE score , (3) BART-large fine-tuned on SAMSum 5and with a 33.0/13.5/27.0 ROUGE score (4) BART-large-xsum 6 fine-tuned on SAMSum 7 with a 26.6/10.2/21.4 ROUGE score.
These different summaries are then compared by humans to provide global feedback.
Hiring and Training Annotators We hire two annotators through Upwork8and provide them with extensive training for the task. During multiple training sessions, we explain how to highlight salient information and compare summaries using our interfaces. We go through selected example dialogues and discuss with them to resolve inconsistencies and disagreements. To further reaffirm the training, we also perform test runs on the sampled dialogues. From these test cases, we make sure that they annotate the data properly and achieve good agreements. We pay the annotators $25 per hour.
We get 41.67 hours of work for the first member and 39.83 hours for the second member 9.
## 3.2.2 Local Feedback
For the local feedback, we ask annotators to highlight the salient information in the provided dialogues. The highlighted information needs to be helpful in generating a summary. The information can be phrases, sentences, or a couple of words in the given dialogue. Specifically, we ask annotators to look for some important aspects including (1)
speaker's intents, (2) *identifiable events/topics*, (3) discourse relations such as causal relationships and
(4) important *emotions* in the conversation. For every conversation, we ask the annotator to annotate 3 to 8 highlights. After 3 rounds of training sessions, we examine the quality by asking them to annotate the same set of 50 dialogues and computing the agreement scores between the two annotators
(0.865 BERT-score between their annotated spans)
10. We also make sure the highlights match the important information in ground truth summaries
(0.792 BERT-score between annotated spans and corresponding summaries) 11. Annotators then annotate the remaining dialogues by themselves independently. After annotation, we collect 6.1 spans for every dialogue with 59.5 words on average.
## 3.2.3 Global Feedback
After highlighting the salient information, we provide the annotators with 3 pairs of summaries sampled from the set of baseline summaries. We then ask them to make comparisons in terms of *Coherence, Accuracy, Coverage, Conciseness*, and Overall Quality. For every comparison between summary A and summary B, the annotators need to grade on a scale of 5 points: summary A is mostly better, summary A is partially better, equal, summary B is partially better, the summary B is mostly better. We provide detailed guidelines to the annotators about those different dimensions12. After 3 rounds of training sessions, we show the annotators 50 dialogues with 150 pairs of summaries and ask them to make comparisons, resulting in 150 comparisons. We then calculate the Fleiss Kappa scores to measure the agreements among different annotators. In the end, we obtain an average score of 0.342 for Coverage, 0.381 for Coherence, 0.376 for Conciseness, 0.373 for Accuracy, and 0.369 for Overall Quality, indicating moderate agreement
(Landis and Koch, 1977). Annotators then annotate the remaining dialogues by themselves independently. In total, we collect 3000 pairs of comparisons for every dimension.
## 3.3 Method
This section focuses on how to incorporate the annotated feedback into the training process to assist the summarization systems in generating more human-favored summaries.
## 3.3.1 Rewards Modeling
We first describe how to train the reward models and compute the rewards for any given conversation-summary pairs.
Local Rewards Our goal is to encourage the summarization systems to generate summaries that cover the important information mentioned in the dialogues while avoiding redundant information. Thus here we propose to model the local rewards based on these highlights from annotators. For a given conversation C with a set of human-annotated salient spans M = M1:m (e.g.,
phrases/sentences/words in the dialogues), suppose the model would generate a summary s. We view the list of highlights M annotated by humans as information needed by the summaries, and the other sentences without highlights as possible redundant information set N = N1:n = C − M. We then calculate the local coverage rewards rl(*C, s, M*)
by calculating the cosine distances between the embeddings of the summary and the information in the dialogues:
**The Introduction** $$r_{l}(C,s,M)=\sum_{i}^{m}cos(s,M_{i})-\sum_{j}^{n}cos(s,N_{j})\tag{1}$$
${}^{12}$The detailed guidelines are shown in the Appendix.
Here we embed the summaries and the dialogue information utilizing sentence-transformers (allmpnet-base-v2) 13 (Reimers and Gurevych, 2019).
Global Rewards Generating high-quality summaries with better human preferences is essential for building better summarization systems.
To this end, we design the global rewards by learning human preferences from their annotations. For a given set of annotated conversations C = {C1*, ..., C*n} with baseline summaries S = {(s 11
, s12
, s13
, s14
)*, ...,*(s n 1
, sn 2
, sn 2
, sn 3
)} with different dimensions of global human feedback, we first learn a set of reward models rgj
(C, s; θe, θj )
to measures the quality or impact of the dimension j on summary s for a conversation C. Here, θe are the parameters to encode the conversation and summaries; θj stands for the parameters of linear heads for the dimension j, which outputs a scalar on top of θe. Specifically, we initialize θe with a BART-large model fine-tuned on the DialogSum dataset, and randomly initialize θj for every dimension in the global feedback. During training, we train the model to predict which summary in a summary pair {s in, sim} of conversation Ciis preferred by humans, by optimizing the following objective:
$$\begin{array}{c}{{{\mathcal{L}}=-\mathbb{E}_{(C_{i},s_{n}^{i},s_{m}^{i})\sim(C,S)}\Sigma_{j}[\log(\sigma(r_{g_{j}}(C_{i},s_{n}^{i};}}\\ {{\theta_{e},\theta_{j})-r_{g_{j}}(C_{i},s_{m}^{i};\theta_{e},\theta_{j})))]}}\end{array}$$
where s in is the summary preferred by humans.
Implementations are shown in Section 4.2. We select the hyper-parameter based on the loss on the validation set (8:2 split), and further evaluate the learned reward models in Section 4.4.
We then combine different dimensions to provide the global rewards rg(*C, s*):
$$r_{g}(C,s)=\Sigma_{j}r_{g_{j}}$$
$$(2)$$
## 3.3.2 Hitl Summarization Policy Learning
Here we train a summarization policy with human feedback for generating high-quality outputs as judged by humans. We utilize reinforcement learning to learn the summarization policy π RL
ϕ. Specifically, we initialize the policy with a supervised learning BART-large baseline model π B fine-tuned on DialogSum. We use the PPO algorithm (Schulman et al., 2017) to maximize the rewards from the above local and global reward models rl and rg,
$${\frac{\mathbf{\Pi}^{13}\mathbf{h t t p s://g i t h u b.c o m/U K P L a b/}}{\mathbf{s e n t e n c e-t n a r s f o r m e r s}}}$$
| Methods | # Training Data | Rewards | ROUGE-1 | ROUGE-2 | ROUGE-L |
|----------------|-------------------|-----------|-----------|-----------|-----------|
| BART-large | Full | - | 47.28 | 21.18 | 44.83 |
| HITL-synthesis | Full | rg | 46.87 | 21.03 | 45.12 |
| HITL-synthesis | Full | rl | 47.27 | 22.18 | 45.15 |
| HITL-synthesis | Full | rg + rl | 47.46 | 22.13 | 45.24 |
| HITL-synthesis | 1000 | rg | 46.25 | 20.79 | 44.37 |
| HITL-synthesis | 1000 | rl | 46.18 | 21.12 | 45.13 |
| HITL-synthesis | 1000 | rg + rl | 46.38 | 21.26 | 45.08 |
| HITL† | 1000 | rg | 47.54 | 23.05 | 45.38 |
| HITL† | 1000 | rl | 47.88 | 23.17 | 45.87 |
| HITL† | 1000 | rg + rl | 48.29 | 23.65 | 46.23 |
Table 2: ROUGE-1, ROUGE-2 and ROUGE-L scores for different models on the DialogSum Corpus test set. †
means our model. We performed Pitman's permutation test (Dror et al., 2018) and found that *HITL* significantly outperformed the supervised baseline BART-large (p < 0.05). The results are averaged over three random runs.
| Methods | Human Preferred % |
|------------------|---------------------|
| BART-large | 18% |
| HITL-(rg + rl) † | 82% |
Table 3: Human preferences when comparing summaries generated by supervised baseline model (BARTlarge) and our best HITL model (rg + rl). † means our method.
where each time step is a BPE token 14. The full reward R(*C, s, M*) is:
$$R(C,s,M)=w_{l}r_{l}(C,s,M)+w_{g}r_{g}(C,s)$$ $$-\beta\log\left[\frac{\pi_{\phi}^{\mathrm{RL}}(s|C)}{\pi^{\mathrm{B}}(s|C)}\right]\tag{3}$$
We introduce a KL divergence term between the HITL policy and the supervised baseline model
(Jaques et al., 2019). This term could prevent the learned policy generating outputs that are too different from the supervised models and encourage the learned policy to explore instead of collapsing to a single model (Stiennon et al., 2020). wl, wg and β are weights to balance different sub-rewards.
Following Stiennon et al. (2020), we use a Transformer with separate parameters from the policy for the PPO value function. And we initialize the value function to the parameters of the reward model. In our experiments, the reward model, policy and value function are the same size.
14The reward model would give the rewards after the entire summary generated. Each episode terminates when the policy outputs the EOS token, and the discount factor γ = 1.
| Metric | Agree with Human % |
|--------------|----------------------|
| ROUGE | 55.3% |
| Coherence | 62.4% |
| Accuracy | 56.8% |
| Coverage | 63.6% |
| Concise | 59.5% |
| Over Quality | 65.5% |
| rg | 69.8% |
## 4 Experiments 4.1 Baselines
We compare our models with several baselines:
- **BART-large** (Lewis et al., 2020): We utilized BART-large as our backbone model as well as the supervised learning baseline. Utterances are separated by a special token.
- **HITL-synthesis**: We use heuristics to approximate the local and global feedback, via which we then learn synthesized reward models and the HITL summarization policy. Specifically, for the local feedback, we utilize a greedy algorithm (Nallapati et al., 2016; Zhang et al.,
2022) to obtain the synthesis highlights based on ground truth. For the global feedback, we
| Methods | Training Data | Transferred Parts | ROUGE-1 | ROUGE-2 | ROUGE-L |
|-----------------|-----------------|---------------------|-----------|-----------|-----------|
| BART-large | DialogSum | Whole Model | 31.74 | 5.93 | 29.79 |
| HITL-(rg + rl)† | DialogSum | Whole Model | 33.58 | 7.84 | 32.63 |
| BART-large | SAMSum | Whole Model | 53.12 | 27.95 | 49.15 |
| HITL-(rg) † † | SAMSum | Global Reawrds | 53.76 | 28.04 | 50.56 |
| Quality | R1 | R2 | RL |
|-----------|-------|-------|-------|
| Synthesis | 46.38 | 21.26 | 45.08 |
| Noisy | 46.32 | 21.38 | 44.76 |
| Clean | 47.58 | 22.58 | 45.56 |
utilize the randomly sampled utterances as negative summaries compared to the ground truth summary.
## 4.2 Implementation Details
For the supervised baseline, we initialize the model with BART-large and fine-tune it on the full DialogSum for 10 epochs with a 3e-5 learning rate and 120 warm-up steps. We use a batch size of 8. For the global reward models, we set the hidden size of the linear head 256. We use a batch size of 8 and train the reward model for 2 epochs with a 3e-5 learning rate. For PPO, we initialize our policies with the supervised baseline and our value functions with the reward models. We set γ = 1 and λ = 0.95 for the advantage estimation (Schulman et al., 2015), do 4 epochs of optimization with a batch size of 8 and run for 5,000 episodes. We set wl = 1, wg = 1.5 and β = 0.05 based on grid search among {0.05, 0.5, 1, 1.5, 2, 2.5} for the full reward R. All experiments were performed on 8 NVIDIA V100 (32GB memory).
## 4.3 Automatic Evaluation
We first evaluated all the models with the widely used automatic metric ROUGE(Lin and Och, 2004)
and reported ROUGE-1, ROUGE-2, and ROUGEL in Table 2. We found that the performances were not better for synthesis data when there were less
![6_image_0.png](6_image_0.png)
training data. When there was plenty of synthesis feedback, (*HITL-synthesis with Full data*) can help improve over the supervised baseline, where the local reward was more important compared to the global reward. After incorporating ground-truth human feedback, our *HITL-(*rg + rl) model with both global and local rewards achieved the best performances even with less training data compared to synthesis baselines. The local rewards consistently brought in more performance boost because the conversation structural information in local rewards can help the systems more directly capture the important factors in the conversation. This indicates the effectiveness of our HITL framework for conversation summarization, as the human judgements were directly guiding the learning process.
## 4.4 Human Evaluation
Following Bohm et al. ¨ (2019) and Stiennon et al.
(2020), we randomly sampled 200 conversations from the DialogSum test set and asked annotators from Amazon Mechanical Turk to select the preferred summary from pairs of summaries generated by *BART-large* and *HITL-(*rg + rl). Turkers were asked to judge coherence, accuracy, coverage, and conciseness 15. To increase the annotation quality, we required Turkers to have a 98% approval rate and at least 10,000 approved tasks for their previous work. Each conversation was rated by three workers, and we used majority voting to decide the preferred summaries. The pay rate was 0.5$ per hit. We measured the agreement by computing the Intra-Class Correlation (ICC) was 0.693, showing moderate agreement (Koo and Li, 2016).
Main Results From Table 3, we observed that summaries from our introduced (*HITL-(*rg + rl))
are much more preferred (favored in 82% cases)
by human compared to supervised baseline (*BARTlarge*). These significant improvements came from comparably *small amount of annotations* (1000 dialogues). These indicated that the systems (*HITL-*
(rg + rl)) that directly learn from a small amount of global and local human feedback could generate higher-quality summaries with better human preferences compared to supervised baselines.
Evaluating the Global Reward Models Based on human preferences, we further examined the global reward model and compared it with its subdimensions as well as the ROUGE metric. Basically, we assume that the reward model agrees with human preferences when the model is assigning higher scores to these human-preferred summaries.
As shown in Table 4, reward models learned from humans generally agree well with humans, where our global reward rg receives the highest agreement rate. This showed the high quality and effectiveness of our global feedback collection as well as the global reward models. As a result, our HITL-(rg +rl) model achieves better performances compared to baselines. The results also showed the potential of our global reward models to be used to better automatically evaluate the summaries (Fabbri et al., 2020).
## 4.5 Generalization
We then evaluated the generalization abilities of our HITL summarization system and our learned global reward model rg. We transferred the knowledge learned on DialogSum to another corpus, SAMSum
(Gliwa et al., 2019) which summarizes messengerlike conversations about daily topics, such as arranging meetings and discussing events. The good generalization shown below also lower the amortized cost (Rajani et al., 2021) of our methods.
Generalization of HITL models We first directly applied the whole *HITL-(*rg + rl) models trained on DialogSum to the SAMSum corpus. The results were visualized in Table 5. The zero-shot evaluations on SAMSum got lower ROUGE scores compared to the models trained on SAMSum data, while our best model, *HITL-(*rg + rl), achieved better performances compared to the supervised baseline model (*BART-large*). This showed that our policy empowered with human feedback can better generalize to other domains compared to supervised learning models because our policy was learned from rewards that explicitly indicated human preferences. Such rewards are more general to different domains compared to supervised learning objectives which are specific to one dataset.
Generalization of the Global Reward Model We then re-trained the *HITL-(*rg) policy on SAMSum corpus while we directly utilized the global reward model rg(*C, s*) learned from human feedback on DialogSum data as the reward functions.
We reported the results in Table 5 and observed that the *HITL-(*rg) outperformed the supervised BARTlarge model on SAMSum in terms of ROUGE
scores. This showed that our global reward models rg can be directly applied to other conversation summarization datasets to provide reinforcement learning rewards and boost performance because the global rewards learned from human feedback are implying the qualities of summaries in general rather than being limited to one specific domain.
## 4.6 Ablation Study
Here we performed two ablation studies to further study the impact of the quality and the quantity of human feedback in our HITL pipeline.
## Perturbing The Qualities Of Annotations We
compared *HITL-(*rg + rl) policy trained with annotations on the same 400 dialogues of three levels of qualities: (1) *Synthesis Annotations* as described in Section 4.1, (2) *Noisy* which was the annotation from annotators without extensive training sessions,
(3) *Clean* which was the annotation after the training sessions. We visualized the comparisons in Table 6, and found that the performances were better with higher quality annotations. This suggests that the quality of human feedback matters.
Increasing the Annotations We then varied the number of annotations from 400 to 1000 in our HITL-(rg + rl) model in Table 2. The ROUGE
scores were higher with more human annotations because of better reward learning and policy learning with more training data. This implies the importance of enough human feedback to learn and design better rewards.
## 5 Conclusion
In this work, we introduced two levels of conversation human feedback into the abstractive conversation summarization to generate human-preferred summaries. Specifically, we first collected local and global human feedback to design the corresponding reward functions. We then learned the summarization policies via reinforcement learning to optimize the designed rewards. Extensive experiments in different settings and ablation studies on DialogSum and SAMSum corpus via both automatic and human evaluations demonstrated the effectiveness and generalization of our introduced HITL pipeline. For future work, we would like to explore incorporating human feedback in natural languages which are more general and explicit to indicate how to summarize conversations to improve the abstractive conversation summarization.
## 6 Limitation
In this work, we collect extensive and comprehensive human feedback with high qualities to facilitate our human-in-the-loop conversation summarization framework. While the learned rewards and models are showing good generalization abilities, further attention is still needed to deeply understand what types of feedback or what amount of feedback is necessary. Our current work only considers human feedback collected using the required forms (i.e., rankings and highlighting). We encourage future work to explore how to incorporate human preferences with more open-ended feedback such as through natural languages. Furthermore, we mainly focus on conversation summarization with human feedback in this work, and other types of summarization tasks (e.g., multi-document summarization, email to-do summarization, meeting summarization and etc.) could be further explored to incorporate human knowledge.
## Acknowledgements
We thank members of the SALT Lab and the anonymous reviewers for their helpful feedback. This work was supported in part by an Amazon Faculty Research Award and an NSF grant IIS-2247357.
## References
Florian Bohm, Yang Gao, Christian M. Meyer, Ori ¨
Shapira, Ido Dagan, and Iryna Gurevych. 2019. Better rewards yield better summaries: Learning to summarise without references.
Jiaao Chen and Diyi Yang. 2020. Multi-view sequenceto-sequence models with conversational structure for abstractive dialogue summarization. In *Proceedings* of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 4106–
4118, Online. Association for Computational Linguistics.
Jiaao Chen and Diyi Yang. 2021a. Simple conversational data augmentation for semi-supervised abstractive dialogue summarization. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 6605–6616, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Jiaao Chen and Diyi Yang. 2021b. Structure-aware abstractive conversation summarization via discourse and action graphs. In *Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies*, pages 1380–1391, Online. Association for Computational Linguistics.
Jiyu Chen, Karin Verspoor, and Zenan Zhai. 2019. A
bag-of-concepts model improves relation extraction in a narrow knowledge domain with limited data. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Student Research Workshop, pages 43–52, Minneapolis, Minnesota. Association for Computational Linguistics.
Rotem Dror, Gili Baumer, Segev Shlomov, and Roi Reichart. 2018. The hitchhiker's guide to testing statistical significance in natural language processing.
In *Proceedings of the 56th Annual Meeting of the* Association for Computational Linguistics (Volume 1: Long Papers), pages 1383–1392.
Alexander R. Fabbri, Wojciech Krysci ´ nski, Bryan Mc- ´
Cann, Caiming Xiong, Richard Socher, and Dragomir Radev. 2020. Summeval: Re-evaluating summarization evaluation.
Xiachong Feng, Xiaocheng Feng, and Bing Qin. 2021.
A survey on dialogue summarization: Recent advances and new frontiers.
Xiachong Feng, Xiaocheng Feng, Bing Qin, Xinwei Geng, and Ting Liu. 2020a. Dialogue discourseaware graph convolutional networks for abstractive meeting summarization.
Xiachong Feng, Xiaocheng Feng, Bing Qin, and Ting Liu. 2020b. Incorporating commonsense knowledge into abstractive dialogue summarization via heterogeneous graph networks. *arXiv preprint* arXiv:2010.10044.
Yang Gao, Christian M. Meyer, Mohsen Mesgar, and Iryna Gurevych. 2019. Reward learning for efficient reinforcement learning in extractive document summarisation.
Bogdan Gliwa, Iwona Mochol, Maciej Biesek, and Aleksander Wawer. 2019. SAMSum corpus: A humanannotated dialogue dataset for abstractive summarization. In Proceedings of the 2nd Workshop on New Frontiers in Summarization, pages 70–79, Hong Kong, China. Association for Computational Linguistics.
Chih-Wen Goo and Yun-Nung Chen. 2018. Abstractive dialogue summarization with sentence-gated modeling optimized by dialogue acts. 2018 IEEE Spoken Language Technology Workshop (SLT).
Braden Hancock, Antoine Bordes, Pierre-Emmanuel Mazare, and Jason Weston. 2019. ´ Learning from dialogue after deployment: Feed yourself, chatbot!
Natasha Jaques, Asma Ghandeharioun, Judy Hanwen Shen, Craig Ferguson, Agata Lapedriza, Noah Jones, Shixiang Gu, and Rosalind Picard. 2019. Way offpolicy batch deep reinforcement learning of implicit human preferences in dialog.
Qi Jia, Yizhu Liu, Haifeng Tang, and Kenny Q. Zhu.
2022. Post-training dialogue summarization using pseudo-paraphrasing.
Terry K Koo and Mae Y Li. 2016. A guideline of selecting and reporting intraclass correlation coefficients for reliability research. Journal of chiropractic medicine, 15(2):155–163.
J. Richard Landis and Gary G. Koch. 1977. The measurement of observer agreement for categorical data.
Biometrics, 33(1):159–174.
Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020.
BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*,
pages 7871–7880, Online. Association for Computational Linguistics.
Manling Li, Lingyu Zhang, Heng Ji, and Richard J.
Radke. 2019. Keep meeting summaries on topic:
Abstractive multi-modal meeting summarization. In
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2190–
2196, Florence, Italy. Association for Computational Linguistics.
Chin-Yew Lin and Franz Josef Och. 2004. Automatic evaluation of machine translation quality using longest common subsequence and skip-bigram statistics. In Proceedings of the 42nd Annual Meeting on Association for Computational Linguistics, page 605.
Association for Computational Linguistics.
Haitao Lin, Junnan Zhu, Lu Xiang, Yu Zhou, Jiajun Zhang, and Chengqing Zong. 2022. Other roles matter! enhancing role-oriented dialogue summarization via role interactions.
Chunyi Liu, Peng Wang, Jiang Xu, Zang Li, and Jieping Ye. 2019a. Automatic dialogue summary generation for customer service. In *Proceedings of the* 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, KDD19, page 1957–1965, New York, NY, USA. Association for Computing Machinery.
Zhengyuan Liu, Hazel Lim, Nur Farah Ain Suhaimi, Shao Chuen Tong, Sharon Ong, Angela Ng, Sheldon Lee, Michael R Macdonald, Savitha Ramasamy, Pavitra Krishnaswamy, et al. 2019b. Fast prototyping a dialogue comprehension system for nurse-patient conversations on symptom monitoring. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2
(Industry Papers), pages 24–31.
Zhengyuan Liu, Angela Ng, Sheldon Lee, Ai Ti Aw, and Nancy F. Chen. 2019c. Topic-aware pointergenerator networks for summarizing spoken conversations. 2019 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU).
Zhengyuan Liu, Ke Shi, and Nancy F. Chen. 2021.
Coreference-aware dialogue summarization.
Gabriel Murray, Steve Renals, Jean Carletta, and Johanna Moore. 2006. Incorporating speaker and discourse features into speech summarization. In *Proceedings of the Human Language Technology Conference of the NAACL, Main Conference*, pages 367–
374, New York City, USA. Association for Computational Linguistics.
Ramesh Nallapati, Feifei Zhai, and Bowen Zhou. 2016.
Summarunner: A recurrent neural network based sequence model for extractive summarization of documents.
Shashi Narayan, Yao Zhao, Joshua Maynez, Gonc¸alo Simoes, and Ryan McDonald. 2021a. Planning with entity chains for abstractive summarization.
Shashi Narayan, Yao Zhao, Joshua Maynez, Gonc¸alo Simoes, Vitaly Nikolaev, and Ryan McDonald. 2021b. Planning with learned entity prompts for abstractive summarization.
Jacob Parnell, Inigo Jauregi Unanue, and Massimo Piccardi. 2021. Rewardsofsum: Exploring reinforcement learning rewards for summarisation.
Vineet Rajani, Marco Gaboardi, Deepak Garg, and Jan Hoffmann. 2021. A unifying type-theory for higherorder (amortized) cost analysis. *Proc. ACM Program.*
Lang., 5(POPL).
Marc'Aurelio Ranzato, Sumit Chopra, Michael Auli, and Wojciech Zaremba. 2015. Sequence level training with recurrent neural networks.
Nils Reimers and Iryna Gurevych. 2019. SentenceBERT: Sentence embeddings using Siamese BERTnetworks. In *Proceedings of the 2019 Conference on* Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3982–3992, Hong Kong, China. Association for Computational Linguistics.
John Schulman, Philipp Moritz, Sergey Levine, Michael Jordan, and Pieter Abbeel. 2015. High-dimensional continuous control using generalized advantage estimation.
John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. 2017. Proximal policy optimization algorithms.
Nisan Stiennon, Long Ouyang, Jeff Wu, Daniel M.
Ziegler, Ryan Lowe, Chelsea Voss, Alec Radford, Dario Amodei, and Paul Christiano. 2020. Learning to summarize from human feedback.
Don Tuggener, Margot Mieskes, Jan Deriu, and Mark Cieliebak. 2021. Are we summarizing the right way?
a survey of dialogue summarization data sets. In Proceedings of the Third Workshop on New Frontiers in Summarization, pages 107–118, Online and in Dominican Republic. Association for Computational Linguistics.
Lu Wang and Claire Cardie. 2013. Domain-independent abstract generation for focused meeting summarization. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1395–1405.
Chien-Sheng Wu, Linqing Liu, Wenhao Liu, Pontus Stenetorp, and Caiming Xiong. 2021. Controllable abstractive dialogue summarization with sketch supervision.
Yuxiang Wu and Baotian Hu. 2018. Learning to extract coherent summary via deep reinforcement learning.
Yuxuan Ye and Edwin Simpson. 2021. A proposal:
Interactively learning to summarise timelines by reinforcement learning. In Proceedings of the First Workshop on Interactive Learning for Natural Language Processing, pages 25–31, Online. Association for Computational Linguistics.
Sanghyun Yi, Rahul Goel, Chandra Khatri, Tagyoung Chung, Behnam Hedayatnia, Anu Venkatesh, Raefer Gabriel, and Dilek Z. Hakkani-Tur. 2019. Towards ¨
coherent and engaging spoken dialog response generation using automatic conversation evaluators. In INLG.
Tiezheng Yu, Zihan Liu, and Pascale Fung. 2021.
AdaptSum: Towards low-resource domain adaptation for abstractive summarization. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics:
Human Language Technologies, pages 5892–5904, Online. Association for Computational Linguistics.
Kexun Zhang, Jiaao Chen, and Diyi Yang. 2022. Focus on the action: Learning to highlight and summarize jointly for email to-do items summarization. In *Findings of the Association for Computational Linguistics: ACL 2022*, pages 4095–4106, Dublin, Ireland.
Association for Computational Linguistics.
Zhou Zhao, Haojie Pan, Changjie Fan, Yan Liu, Linlin Li, Min Yang, and Deng Cai. 2019. Abstractive meeting summarization via hierarchical adaptive segmental network learning. In The World Wide Web Conference, WWW '19, page 3455–3461, New York, NY, USA. Association for Computing Machinery.
Ming Zhong, Yang Liu, Yichong Xu, Chenguang Zhu, and Michael Zeng. 2021. Dialoglm: Pre-trained model for long dialogue understanding and summarization.
Wangchunshu Zhou and Ke Xu. 2020. Learning to compare for better training and evaluation of open domain natural language generation models.
Chenguang Zhu, Ruochen Xu, Michael Zeng, and Xuedong Huang. 2020a. A hierarchical network for abstractive meeting summarization with cross-domain pretraining. *Findings of the Association for Computational Linguistics: EMNLP 2020*.
Chenguang Zhu, Ruochen Xu, Michael Zeng, and Xuedong Huang. 2020b. A hierarchical network for abstractive meeting summarization with cross-domain pretraining. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), Online. Association for Computational Linguistics.
Daniel M. Ziegler, Nisan Stiennon, Jeffrey Wu, Tom B.
Brown, Alec Radford, Dario Amodei, Paul Christiano, and Geoffrey Irving. 2019. Fine-tuning language models from human preferences.
## A Data Statistics For Dialoguesum And Samsum B The Annotation Interface
Since we hired and trained our own set of annotators, rather than using a crowd sourcing website such as Amazon Mechanical Turk, we built
| Dataset | # of Dialogues | Avg # of turns | Avg # of Words | Avg Compression Rate |
|--------------------------------------------------|------------------|------------------|------------------|------------------------|
| DialogSUM | 13,406 | 9.5 | 131.0 | 0.18 |
| SAMSum | 16,369 | 11.1 | 94.3 | 0.30 |
| Table 7: Data Statistics of DialogSUM and SAMSum | | | | |
Hello
![11_image_0.png](11_image_0.png)
our own website to allow for a standardized, customized user interface for all annotators. The website contains the information for highlighting, summary comparisons as well as detailed instructions.
From here we collect local and global guidance. For local guidance, we display one of the dialogues on the website. We ask the user to highlight salient information and then press next. Afterward, we display 3 pairs of summaries and ask the user to compare the pairs of summaries in 5 different dimensions. Screenshots from the website are shown in Figure 3. Data collected from the website can be easily ported into a central database containing all of our human data.
## C Global Feedback Guidelines
We provide the annotators with 3 pairs of summaries sampled from the set of baseline summaries, and ask them to make comparisons in terms of *Coherence, Accuracy, Coverage, Conciseness*, and Overall Quality. For every comparison between summary A and summary B, the annotators need to grade upon a scale of 5 points: summary A mostly better, summary A partially better, equal, summary B partially better, summary B mostly better. We provide detailed guidelines to the annotators about those different dimensions:
- **Coherence**: Summary is easy to understand and free of English errors. For comparing summaries against each other in Coherence, we ask the annotators to compare the number and severity of grammatical, syntax, and spelling errors of each summary against each other.
- **Accuracy**: Information that stated in the summary is accurate and does not incorrect information. Summary is not misleading and has too much errors. For comparing summaries against each other in Accuracy, we ask the annotators discover the amount and severity of inaccurate statements that occur in the summaries against each other.
- **Coverage**: Mentions main information of the conversations. It conveys the most salient information from the dialogue. For comparing summaries against each other in Coverage, we ask the annotators to look at the number of events in each summary. Also taking into the factor of importance of events, we ask the annotator to compare the number of events against the pair of summaries.
- **Conciseness**: Summary is short and to the point. It does not have too much unimportant information that is not included in the salient information. For comparing summaries against each other in Conciseness, we ask the annotators to mainly look at the length of the summaries. Then we check if any information doesn't fit, and penalize as such.
- **Overall Quality**: We ask the annotator to use all of the above information and other related context to give an overall rating. Even though we asked the annotator to consider all the information, we asked the annotator to factor coverage and accuracy more into their decision for Overall Quality. This is because it is of at most importance for a dialogue summary to accurately summarize the salient information of the dialogue.
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
✓ A1. Did you describe the limitations of your work?
Section 6 A2. Did you discuss any potential risks of your work?
Not applicable. Left blank.
✓ A3. Do the abstract and introduction summarize the paper's main claims?
Section 1
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✗ **Did You Use Or Create Scientific Artifacts?**
Left blank.
B1. Did you cite the creators of artifacts you used?
Not applicable. Left blank.
B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
Not applicable. Left blank.
B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
Not applicable. Left blank.
B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
Not applicable. Left blank.
B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
Not applicable. Left blank.
B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
Not applicable. Left blank.
## C ✓ **Did You Run Computational Experiments?** Section 4
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
Section 4 The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance.
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Section 4
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
Section 4
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
Section 4 D ✓ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Section 3
✓ D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
Section 3
✓ D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)?
Section 3
✓ D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
Section 3 D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
Not applicable. Left blank.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
Not applicable. Left blank. |
deoghare-etal-2023-multi | A Multi-task Learning Framework for Quality Estimation | https://aclanthology.org/2023.findings-acl.585 | Quality Estimation (QE) is the task of evaluating machine translation output in the absence of reference translation. Conventional approaches to QE involve training separate models at different levels of granularity viz., word-level, sentence-level, and document-level, which sometimes lead to inconsistent predictions for the same input. To overcome this limitation, we focus on jointly training a single model for sentence-level and word-level QE tasks in a multi-task learning framework. Using two multi-task learning-based QE approaches, we show that multi-task learning improves the performance of both tasks. We evaluate these approaches by performing experiments in different settings, viz., single-pair, multi-pair, and zero-shot. We compare the multi-task learning-based approach with baseline QE models trained on single tasks and observe an improvement of up to 4.28{\%} in Pearson{'}s correlation (r) at sentence-level and 8.46{\%} in F1-score at word-level, in the single-pair setting. In the multi-pair setting, we observe improvements of up to 3.04{\%} at sentence-level and 13.74{\%} at word-level; while in the zero-shot setting, we also observe improvements of up to 5.26{\%} and 3.05{\%}, respectively. We make the models proposed in this paper publically available. |
## A Multi-Task Learning Framework For Quality Estimation
Sourabh Deoghare1, Paramveer Choudhary1, Diptesh Kanojia1,2, Tharindu Ranasinghe3, Pushpak Bhattacharyya1 **and Constantin Orasan** ˘
2 1CFILT, Indian Institue of Technology Bombay, Mumbai, India.
2Surrey Institute for People-Centred AI, University of Surrey, United Kingdom.
3Aston University, Birmingham, United Kingdom.
{sourabhdeoghare, paramvc, pb}@cse.iitb.ac.in
{d.kanojia, c.orasan}@surrey.ac.uk
{t.ranasinghe}@aston.ac.uk
## Abstract
Quality Estimation (QE) is the task of evaluating machine translation output in the absence of reference translation. Conventional approaches to QE involve training separate models at different levels of granularity *viz.,*
word-level, sentence-level, and document-level, which sometimes lead to inconsistent predictions for the same input. To overcome this limitation, we focus on jointly training a single model for sentence-level and word-level QE tasks in a multi-task learning framework.
Using two multi-task learning-based QE approaches, we show that multi-task learning improves the performance of both tasks. We evaluate these approaches by performing experiments in different settings, *viz.,* single-pair, multi-pair, and zero-shot. We compare the multi-task learning-based approach with baseline QE models trained on single tasks and observe an improvement of up to 4.28% in Pearson's correlation (r) at sentence-level and 8.46% in F1-score at word-level, in the singlepair setting. In the multi-pair setting, we observe improvements of up to 3.04% at sentencelevel and 13.74% at word-level; while in the zero-shot setting, we also observe improvements of up to 5.26% and 3.05%, respectively.
We make the models proposed in this paper publically available1.
## 1 Introduction
Quality Estimation (QE) is a sub-task in the Machine Translation (MT) field. It facilitates the evaluation of MT output without a reference translation by predicting its quality rather than finding its similarity with the reference (Specia et al., 2010). QE
is performed at different levels of granularity, *viz.,*
word-level QE (Ranasinghe et al., 2021), sentencelevel QE (Ranasinghe et al., 2020b), and documentlevel QE (Ive et al., 2018).
In the sentence-level QE task, current models predict the z-standardized Direct Assessment (DA)
1https://github.com/cfiltnlp/QE_MTL
score when a source sentence and its translation are provided as inputs. The DA score is a number in the range of 0 to 100, denoting the quality of the translation, obtained from multiple human annotators. These scores are then standardized into z-scores, which are used as labels to train the QE
model (Graham et al., 2016).
Unlike the sentence-level QE task, the wordlevel QE task consists of training a model to predict the 'OK' or 'BAD' tag for each token in a source sentence and its translation. These tags are obtained automatically by comparing the translation with its human post-edits using a token-matching approach. Each source sentence token is tagged as
'OK' if its translation appears in the output and is tagged as 'BAD' otherwise. Similarly, a translation token is assigned an 'OK' tag if it is a correct translation of a source sentence token, and 'BAD'
otherwise. Apart from the tokens in the translation, the gaps between the translation tokens are also assigned OK/BAD tags. In case of missing tokens, the gap is tagged as 'BAD', and 'OK' otherwise (Logacheva et al., 2016).
To perform each of these tasks, various deep learning-based approaches are being used (Zerva et al., 2022). While these approaches achieve acceptable performance by focusing on a single task, the learning mechanism ignores information from other QE tasks that might help it do better. By sharing information across related tasks, one can essentially expect the task performance to improve, especially when the tasks are closely related as is the case with the sentence-level and word-level QE.
Also, having a separate model for each QE task can cause problems in practical scenarios, like having higher memory and computational requirements.
In addition, the different models can produce conflicting information e.g. high DA score, but many errors at word level.
In this paper, we utilize two multi-task learning (MTL)-based (Ruder, 2017) approaches for 9191 word-level and sentence-level QE tasks with the help of a single deep neural network-based architecture. We perform experiments with existing QE
datasets (Specia et al., 2020; Zerva et al., 2022)
with both MTL approaches to combine word-level and sentence-level QE tasks. We test the following scenarios: a) single-pair QE, b) multi-pair QE, and c) zero-shot QE. The code and models are made available to the community via GitHub.
To the best of our knowledge, we introduce a novel application of the Nash-MTL (Navon et al.,
2022) method to both tasks in Quality Estimation.
Our **contributions** are:
1. showing that jointly training a single model using MTL for sentence and word-level QE
tasks improves performance on both tasks. In a single-pair setting, we observe an improvement of up to 3.48% in Pearson's correlation
(r) at the sentence-level and 7.17% in F1score at the word-level.
2. showing that the MTL-based QE models are significantly more consistent, on word-level and sentence-level QE tasks for same input, as compared to the single-task learning-based QE models.
We discuss the existing literature in Section 2 and the datasets used in Section 3. The MTL-based QE approach is presented in Section 4. The experimental setup is described in 5. Section 6 discusses the results in detail, including a qualitative analysis of a few sample outputs. We conclude this article in Section 7, where we also propose future research directions in the area.
## 2 Related Work
During the past decade, there has been tremendous progress in the field of machine translation quality estimation, primarily as a result of the shared tasks organized annually by the Conferences on Machine Translation (WMT), since 2012. These shared tasks have produced benchmark datasets on various aspects of quality estimation, including wordlevel and sentence-level QE. Furthermore, these datasets have led to the development and evaluation of many open-source QE systems like QuEst (Specia et al., 2013), QuEst++ (Specia et al., 2015),
deepQuest (Ive et al., 2018), and OpenKiwi (Kepler et al., 2019). Before the neural network era, most of the quality estimation systems like QuEst (Specia et al., 2013), and QuEst++ (Specia et al., 2015)
were heavily dependent on linguistic processing and feature engineering to train traditional machinelearning algorithms like support vector regression and randomized decision trees (Specia et al., 2013).
In recent years, neural-based QE systems such as deepQuest (Ive et al., 2018), and OpenKiwi (Kepler et al., 2019) have consistently topped the leaderboards in WMT quality estimation shared tasks (Kepler et al., 2019). These architectures revolve around an encoder-decoder Recurrent Neural Network (RNN) (referred to as the 'predictor'),
stacked with a bidirectional RNN (the 'estimator')
that produces quality estimates. One of the disadvantages of this architecture is they require extensive predictor pre-training, which means it depends on large parallel data and is computationally intensive (Ive et al., 2018). This limitation was addressed by TransQuest (Ranasinghe et al.,
2020b), which won the WMT 2020 shared task on sentence-level DA. TransQuest eliminated the requirement for predictor by using cross-lingual embeddings (Ranasinghe et al., 2020b). The authors fine-tuned an XLM-Roberta model on a sentencelevel DA task and showed that a simple architecture could produce state-of-the-art results. Later the TransQuest framework was extended to the wordlevel QE task (Ranasinghe et al., 2021).
A significant limitation of TransQuest is that it trains separate models for word-level and sentencelevel QE tasks. While this approach has produced state-of-the-art results, managing two models requires more computing resources. Furthermore, since the two models are not interconnected, they can provide conflicting predictions for the same translation. To overcome these limitations, we propose a multi-task learning approach to QE.
Multitask architectures have been employed in several problem domains, such as those in computer vision (Girshick, 2015; Zhao et al., 2018) and natural language processing (NLP). In NLP, tasks such as text classification (Liu et al., 2017), natural language generation (Liu et al., 2019), part-ofspeech tagging and named entity recognition (Collobert and Weston, 2008) have benefited from MTL.
In QE too, Kim et al. (2019) has developed an MTL
architecture using a bilingual BERT model. However, the model does not provide results similar to or better than state-of-the-art QE frameworks such as TransQuest (Ranasinghe et al., 2021). Some of the recent WMT QE shared task submissions also use MTL to develop QE systems (Specia et al.,
2020, 2021; Zerva et al., 2022). As all these submissions are not evaluated under the same experimental settings and use different techniques along with MTL, the improvements due to MTL alone are difficult to assess. In this paper, we introduce a novel MTL approach for QE that outperforms TransQuest in both word-level and sentence-level QE tasks, in various experimental settings.
## 3 Datasets: Wmt 2022
We use data provided in the WMT21 (Specia et al., 2021), and WMT22 (Zerva et al., 2022)
Quality Estimation Shared tasks for our experiments. We choose language pairs for which word-level and sentence-level annotations are available for the same source-translation pairs. The data consists of three low-resource language pairs:
English-Marathi (En-Mr), Nepali-English (NeEn), Sinhalese-English (Si-En); three mediumresource language pairs: Estonian-English (Et-En),
Romanian-English (Ro-En), Russian-English (RuEn); and one high-resource language pair: EnglishGerman (En-De). For the English-Marathi language pair, the data consists of 20K training instances and 1K instances each for validation and testing2. The training set consists of 7K instances for all other language pairs, and validation and test sets consist of 1K samples each.
Each sample in the word-level QE data for any language pair except English-Marathi consists of a source sentence, its translation, and a sequence of tags for tokens and gaps. For the English-Marathi pair, the WMT22 dataset does not contain tags for gaps in tokens. Therefore, we used the QE corpus builder3to obtain annotations for translations using their post-edited versions.
## 4 Approach
In this section, we briefly discuss the TransQuest framework, explain the architecture of our neural network, and then discuss the MTL approaches we used for the experimentation, along with the mathematical modeling.
## 4.1 Transquest Framework
We use the MonoTransQuest (for sentence-level QE model) (Ranasinghe et al., 2020b) and MicroTransQuest (for word-level QE model) (Ranasinghe et al., 2021) architectures to perform the
![2_image_0.png](2_image_0.png)
single-task-based QE experiments. The MonoTransQuest architecture (1) uses a single XLM-R (Conneau et al., 2020) transformer model. The input of this model is a concatenation of the original sentence and its translation. Both these sequences are separated by a special [SEP] token. The inputs are passed to an embedding layer to obtain embeddings for each token. The Direct Assessment (DA) scores are produced by passing the output of the [CLS]
token through a softmax layer.
Similarly, the MicroTransquest architecture presented in figure 2 also uses the XLM-R transformer.
![2_image_1.png](2_image_1.png)
The input to this model is a concatenation of the original sentence and its translation, separated by the [SEP] token. Additionally, the [GAP] tokens are added between the translation tokens. Finally, an output of each token is passed through a softmax layer to obtain the OK or BAD tag for each token.
![3_image_0.png](3_image_0.png)
## 4.2 Model Architecture
Considering the success that transformers have demonstrated in translation quality estimation (Ranasinghe et al., 2020a; Wang et al., 2021),
we chose to employ the transformer as a base model for our MTL approach. Our approach learns two tasks jointly: sentence-level and word-level quality estimation.
Figure 3 depicts the model's architecture used in our approach. The implemented architecture shares hidden layers between both sentence-level and word-level QE tasks. The shared portion includes the XLM-Roberta (Conneau et al., 2020)
model that learns shared representations (and extracts information) across tasks by minimizing a combined/compound loss function. The taskspecific heads receive input from the last hidden layer of the transformer language model and predict the output for each task (details provided in the next two sections).
Sentence-level Quality Estimation Head By utilizing the hidden representation of the classification token (CLS) within the transformer model, we predict the DA scores by applying a linear transformation:
$${\hat{\mathbf{y}}}_{d a}=\mathbf{W}_{[C L S]}\cdot\mathbf{h}_{[C L S]}+\mathbf{b}_{[C L S]}$$
where · denotes matrix multiplication, W[CLS] ∈
RD×1, b[CLS] ∈ R1×1, and D is the dimension of input layer h (top-most layer of the transformer).
Word-level Quality Estimation Head We predict the word-level labels (OK/BAD) by applying a linear transformation (also followed by the softmax) over every input token from the last hidden
## Layer Of The Model:
$${\hat{\mathbf{y}}}_{w o r d}=\sigma(\mathbf{W}_{t o k e n}\cdot\mathbf{h}_{t}+\mathbf{b}_{t o k e n})\qquad(2)$$
where t marks which token the model is to label within a T-length window/token sequence, Wtoken ∈ RD×2, and btoken ∈ R1×2. This part is similar to the MicroTransQuest architecture in Ranasinghe et al. (2021).
## 4.3 Multi-Task Learning
We use two MTL approaches to train the QE models. In the first approach, task-specific losses are combined into a single loss by summing them. The second approach considers the gradient conflicts and follows a heuristic-based approach to decide the update direction.
Linear Scalarization (LS) We train the system by minimizing the Mean Squared Error (MSE) for the sentence-level QE task and cross-entropy loss for the word-level QE task as defined in Equation 3 and Equation 4, where yda and y*word* represent ground true labels. These particular losses are:
$${\mathcal{L}}_{d a}=M S E{\Big(}\mathbf{y}_{d a},{\hat{\mathbf{y}}}_{d a}{\Big)}$$
$\mathcal{L}_{word}=-\sum_{i=1}^{2}\left(\mathbf{y}_{word}\odot\log(\mathbf{y}_{word})\right)[i]$.
$$({\mathcal{I}})$$
$$\quad(4)$$
$\star\star\star\star\star\star$
[i] (4)
where v[i] retrieves the ith item in a vector v and ⊙ indicates element-wise multiplication. For combining the above two losses into one objective, α and β parameters are used to balance the importance of the tasks. n this study, we assign equal importance to each task in our experiments, therefore we set α = β = 1 in this study. The final loss is shown in Equation 5.
$${\mathcal{L}}_{M u l t i T r a n s Q u e s t}={\frac{\alpha{\mathcal{L}}_{d a}+\beta{\mathcal{L}}_{u v o r d}}{\alpha+\beta}}\quad(5)$$
We set up two baselines - single-task learningbased sentence-level QE and word-level QE models. The sentence-level QE model takes a source sentence and its translation as input and predicts the DA score. We use the MonoTransQuest implementation in Ranasinghe et al. (2020b) for this sentencelevel QE model. The word-level QE model predicts whether each token (word) is OK or BAD using a softmax classifier as well. We use the MicroTransQust implementation in Ranasinghe et al. (2021)
as the word-level QE model.
Nash Multi-Task Learning (Nash-MTL) Joint training of a single model using multi-task learning is known to lower computation costs. However, due to potential conflicts between the gradients of different tasks, the joint training typically results in the jointly trained model performing worse than its equivalent single-task counterparts. Combining per-task gradients into a combined update direction using a specific heuristic is a popular technique for solving this problem. In this approach, the tasks negotiate for a joint direction of parameter update.
## Algorithm 1 Nash_Mtl
Input: θ0 - initial parameter vector, {li}
K
i=1 -
differentiable loss functions, η - learning rate Output: θ T
for t = 1,..., T do Compute task gradients g t i = ∇θ(t−1)li Set G(t)the matrix with columns g
(t)
i Solve for α : (Gt)
T(Gt)α = 1/α to obtain α t Update the parameters θ
(t) = θ
(t) − ηG(t)α
(t)
end for return θ T
For the MTL problem with parameters θ, the method assumes a sphere Bϵ, with a center at zero and a radius ϵ. The update vectors ∆θ are searched inside this sphere. The problem is framed as a bargaining problem by considering the centre as the point of disagreement and the Bϵ as an agreement set. For every player, the utility function is ui(∆θ) = g T
i ∆θ where gi denotes the gradient vector at θ of the loss of task i. Additional details, theoretical proof and empirical results on various tasks can be followed from Navon et al. (2022),
who proposed this gradient combination.
## 5 Experimental Setup
This section describes the different experiments we perform and the metrics we use to evaluate our approach. We also discuss the training details and mention the computational resources used for training the models.
Experiments We perform our experiments under three settings: single-pair, multi-pair, and zeroshot. For each setting, we train one sentence-level, one word-level, and two MTL-based QE models.
The first two models are the Single-Task Learning
(STL)-based QE models (STL QE), and we use their performance as baselines. The TransQuest framework (Ranasinghe et al., 2020b) contains the MonoTransQuest model for the sentence-level QE
task and the MicroTransQuest model (Ranasinghe et al., 2021) for word-level QE task which helped us reproduce baseline results over all the language pairs investigated for this paper. The next two models are the MTL-based QE models (MTL QE)
trained using two different MTL approaches explained in Section 4. For training LS models, we use the Framework for Adapting Representation Models (FARM)4, while for training Nash-MTL models, we used implementation5shared by the authors. All the experiments use all seven language pairs introduced in Section 3.
In the single-pair setting, we only use the data of one particular language pair for training and evaluation. However, in the multi-pair setting, we combine training data of all the language pairs and evaluate the model using test sets of all language pairs. For the transfer-learning experiments (zeroshot setting), we combine training data of all language pairs except the language pair on which we evaluate the model.
Evaluation We use the Pearson Correlation (r)
between the predictions and gold-standard annotations for evaluating the sentence-level QE as it is a regression task. Similarly, for the word-level QE,
which is treated as a token-level classification task, we consider the F1-score as an evaluation metric.
We perform a statistical significance test considering primary metrics using William's significance test (Graham, 2015).
Training Details To maintain uniformity across all the languages, we used an identical set of settings for all the language pairings examined in this work. For the STL and LS-MTL models, we use a batch size of 16. We start with a learning rate of 2e − 5 and use 5% of training data for warm-up.
We use early stopping and patience over the 10 steps. The Nash-MTL models are trained using the configuration outlined in (Navon et al., 2022). Considering the availability of computational resources, the STL QE models are trained using the NVIDIA
RTX A5000 GPUs, and the MTL QE models using the NVIDIA DGX A100 GPUs. Additional training details are provided in **Appendix** A.
LP Word-Level **Sentence-Level**
STL LS-MTL +/- % Nash-MTL +/- % **STL LS-MTL +/- % Nash-MTL +/- %**
En-Mr 0.3930 0.4194 2.64% **0.4662** 7.32% 0.5215 0.5563 3.48% **0.5608** 3.93%
Ne-En 0.4852 0.5383 5.31% **0.5435** 5.83% 0.7702 0.7921 2.19% **0.8005** 3.03%
Si-En 0.6216 0.6556 3.40% **0.6946** 7.30% 0.6402 0.6533 1.31% **0.6791** 3.89%
Et-En 0.4254 0.4971 7.17% **0.5100** 8.46% 0.7646 0.7905 2.59% **0.7943** 2.97%
Ro-En 0.4446 0.4910 4.64% **0.5273** 8.27% 0.8952 **0.8985*** 0.33% 0.8960* 0.08%
Ru-En 0.3928 0.4208 2.80% **0.4394** 4.66% 0.7864 0.7994 1.30% **0.8000** 1.36%
En-De 0.3996 0.4245 2.49% **0.4467** 4.71% 0.4005 0.4310 3.05% **0.4433** 4.28%
Table 2: Results obtained for **word-level and sentence-level QE tasks in the multi-pair** setting. [*indicates the improvement is not significant with respect to the baseline score.]
## 6 Results And Discussion
Results of the single-pair, multi-pair, and zero-shot settings are presented in this section. The tables referred to in this section report performance of the STL, LS-MTL, and Nash-MTL QE models using the Pearson correlation (r) and F1-score for sentence-level and word-level QE, respectively.
We could not conduct a direct performance comparison between our QE models and winning entries of the recent WMT QE shared tasks due to the following reasons: (1) Nature of the word-level QE
task, and its evaluation methodology have changed over the years. Until last year, gaps between translation tokens were a part of the data, and the 'OK'
or 'BAD' tags were predicted for them as well.
But the WMT22 shared task did not consider these gaps; and (2) Not all the language pairs investigated in this paper have been a part of WMT QE tasks in the same year. Therefore, we establish a standard baseline using the Transformers-based framework, TransQuest, and show improvements.
We also compare Pearson correlation coeffi-
| LP | Word-Level (F1) | Sentence-Level (r) | | | | | | | | |
|-------|-------------------|----------------------|----------|--------|--------|--------|---------|----------|---------|--------|
| STL | LS-MTL | +/- % | Nash-MTL | +/- % | STL | LS-MTL | +/- % | Nash-MTL | +/- % | |
| En-Mr | 0.4013 | 0.4349 | 3.36% | 0.4815 | 8.02% | 0.6711 | 0.6514* | -1.97% | 0.6704* | -0.07% |
| Ne-En | 0.4902 | 0.5406 | 5.04% | 0.5560 | 6.58% | 0.7892 | 0.8012 | 1.20% | 0.8001 | 1.09% |
| Si-En | 0.5629 | 0.6392 | 7.63% | 0.7003 | 13.74% | 0.6653 | 0.6837 | 1.84% | 0.6957 | 3.04% |
| Et-En | 0.4348 | 0.4998 | 6.50% | 0.5082 | 7.34% | 0.7945 | 0.7970* | 0.25% | 0.7963* | 0.18% |
| Ro-En | 0.4472 | 0.4925 | 4.53% | 0.5285 | 8.13% | 0.8917 | 0.8883* | -0.34% | 0.8895* | -0.22% |
| Ru-En | 0.3965 | 0.4241 | 2.76% | 0.4211 | 2.46% | 0.7597 | 0.7751 | 1.54% | 0.7772 | 1.75% |
| En-De | 0.3972 | 0.4253 | 2.81% | 0.4499 | 5.27% | 0.4373 | 0.4308* | -0.65% | 0.4298* | -0.75% |
cients obtained by STL and MTL QE models to assess whether MTL QE model predictions on both tasks for the same inputs are consistent (Table 4).
Furthermore, we perform a qualitative analysis of the output for En-Mr, Ro-En, and Si-En language pairs, and show some examples in Table 5. We discuss the analysis in detail in subsection 6.4.
## 6.1 Single-Pair Setting
The results for the first experimental setting are presented in Table 1. The MTL QE approaches provide significant performance improvements for all language pairs in the sentence and word-level QE tasks over the respective STL QE models. In the word-level QE task, the Nash-MTL QE models outperform the STL and LS-MTL models for all language pairs. Our approach achieves the highest improvement of 8.46% in terms of macro F1-score for the Et-En language pair. While for the En-De, we observe the least improvement from the LSMTL QE model (2.49%). The average improvement in the F1-score from Nash-MTL model and LS-MTL model is 6.29% and 4.06%, respectively.
| LP | Word-Level | Sentence-Level | | | | | | | | |
|-------|--------------|------------------|----------|---------|-------|---------|---------|----------|---------|-------|
| STL | LS-MTL | +/- % | Nash-MTL | +/- % | STL | LS-MTL | +/- % | Nash-MTL | +/- % | |
| En-Mr | 0.3800 | 0.3692* | -1.08% | 0.3833 | 0.33% | 0.4552* | 0.3869 | -6.83% | 0.4674 | 1.22% |
| Ne-En | 0.4175 | 0.4472 | 2.97% | 0.4480 | 3.05% | 0.7548 | 0.7601 | 0.53% | 0.7560 | 0.12% |
| Si-En | 0.4239 | 0.4250* | 0.11% | 0.4407 | 1.68% | 0.6416 | 0.6434* | 0.18% | 0.6447* | 0.31% |
| Et-En | 0.4049 | 0.4206 | 1.57% | 0.4291 | 2.42% | 0.5192 | 0.5583 | 3.91% | 0.5598 | 4.06% |
| Ro-En | 0.4179 | 0.4349 | 1.70% | 0.4420 | 2.41% | 0.5962 | 0.6104 | 1.42% | 0.6300 | 3.38% |
| Ru-En | 0.3737 | 0.3761* | 0.24% | 0.3834 | 0.97% | 0.5286 | 0.5605 | 3.19% | 0.5812 | 5.26% |
| En-De | 0.3750 | 0.3763* | 0.13% | 0.3768* | 0.18% | 0.3217 | 0.3227* | 0.10% | 0.3305 | 0.88% |
For the sentence-level QE task, Pearson correlation (r) between the QE system prediction scores and true labels is used as an evaluation metric. For this task, the MTL QE models, again, outperform the STL QE models for all language pairs. Here, the En-De Nash-MTL QE model obtains the most significant performance improvement of 4.28%
over the corresponding STL QE model. A minor performance improvement of 0.33% is observed for the Ro-En language pair using the LS-MTL
QE model. The average improvement in Pearson's correlation (r) from the Nash-MTL model and the LS-MTL model is 2.75% and 2.10%, respectively.
Except for the Ro-En Nash-MTL QE model's performance in the sentence-level QE task, we see the Nash-MTL QE models amass the most improvements over the STL and LS-MTL QE models for all language pairs in both tasks. It shows that the bargaining between the gradient update directions for sentence-level and word-level QE tasks that the Nash-MTL method arranges results in effective learning. The results of both tasks also show that we get more improvements for low-resource and mid-resource language pairs than for the highresource language pair.
We additionally report the results obtained by the WMT QE shared task winning systems in **Appendix** C. The WMT figures are not directly comparable to our results. The WMT figures are higher than ours but that is really not the point. Our aim is to show that multitask learning is more effective than single-task learning. Any QE technique can seriously be considered adopting MTL in preference to the STL. Of course, if the STL figures are already high then the improvement may not be significant which we also have observed.
## 6.2 Multi-Pair Setting
Table 2 tabulates the results for the multi-pair setting. The multi-pair setting can benefit the wordlevel QE task due to vocabulary overlap and the sentence-level QE tasks due to syntactical similarities between the language pairs.
In this setting, MTL improves performance for all language pairs in the word-level QE task. Using the LS-MTL QE model, the highest F1-score improvement of 7.63% is observed for the Si-En language pair, while with the Nash-MTL QE model, the best improvement is of 13.74%. The least improvement with the LS-MTL QE model is observed for the Ru-En pair 2.76%, while for the Nash-MTLbased QE model, it is of 2.46% for the Ru-En pair.
Though the improvements observed in the wordlevel QE task in this setting when using MTL
QE approaches are even higher compared to the single-pair setting, we see an opposite trend in the sentence-level QE task results. At the sentence level, we observe a slight degradation in the results of the En-Mr, En-De, and Ro-En MTL QE models.
We observe the most improvement of the 3.04% in Pearson Correlation over the STL QE model by the Nash-MTL QE model. For the Ro-En pair, both QE models fail to bring improvements over the STL QE model. For Ne-En and Et-En pairs, the LS-MTL QE model outperforms the Nash-MTL
QE model. In this setting, the Nash-MTL technique provides similar results to the LS-MTL technique. Also, we observe that the Nash-MTL QE
approach benefits the most to the low-resource language pairs. We also see higher improvements for the mid-resource language pairs than the highresource language pair.
| LP | Pearson Correlation (r) | Spearman Correlation (ρ) | | | | |
|-------|---------------------------|----------------------------|--------|----------|---------|--------|
| STL | Nash-MTL | +/- | STL | Nash-MTL | +/- | |
| En-Mr | -0.2309 | -0.3645 | 13.36% | -0.1656 | -0.2963 | 13.07% |
| Ne-En | -0.6263 | -0.6604 | 3.41% | -0.6124 | -0.6442 | 3.18% |
| Si-En | -0.5522 | -0.5881 | 3.59% | -0.5380 | -0.5510 | 1.30% |
| Et-En | -0.7202 | -0.7539 | 3.37% | -0.7541 | -0.768 | 1.39% |
| Ro-En | -0.7765 | -0.7794 | 0.29% | -0.7380 | -0.7534 | 1.54% |
| Ru-En | -0.6930 | -0.7187 | 2.57% | -0.6364 | -0.6805 | 4.41% |
| En-De | -0.4820 | -0.5482 | 6.62% | -0.4524 | -0.5099 | 5.75% |
## 6.3 Zero-Shot Setting
Table 3 shows the results for the zero-shot setting. The MTL QE models achieve better performance for both tasks over their STL-based counterparts for all the language pairs, except for the En-Mr language pair in the sentence-level QE task. Surprisingly, for the Ne-En pair, the LS-MTL model outperforms the Nash-MTL QE model in the sentencelevel QE task by a small margin (0.0053). While for all other language pairs, the Nash-MTL QE models outperform the respective LS-MTL QE models.
Similar to the trend in the previous two settings, the MTL QE approaches bring more benefits to the low-resource and mid-resource language pairs than the high-resource language pair.
In **Appendix** B, for each low-resource language pair, we include a table showing the comparison of STL, LS-MTL, and Nash-MTL QE models. These tables show that *the multi-pair setting helps the* low-resource scenario.
## 6.4 Discussion
Consistent Predictions Improvements shown by the MTL QE models in varied experimental settings on both tasks show that the tasks complement each other. We further assess the potential of the MTL QE models in predicting consistent outputs for both tasks over the same inputs. We do so by computing a correlation between the predicted DA
scores and the percentage of tokens in a sentence for which the 'BAD' tag was predicted. Therefore, a *stronger negative correlation denotes better consistency*. Table 4 shows Pearson and Spearman correlations between sentence-level and word-level QE predictions on the test sets, in a single pair setting. For all the language pairs, Nash-MTL QE
models show a stronger correlation than the STL
QE models. We also perform a qualitative analysis of the STL and MTL QE models for the En-Mr, Ro-En, and Si-En language pairs.
Qualitative Analysis The first English-Marathi example is shown in Table 5. It contains a poor translation of the source sentence meaning, "The temple is close to the holy place where ages ago the Buddha was born." The STL word-level QE and MTL QE models predict the same output assigning correct tags to tokens, yet we observe a significant difference in the sentence-level scores predicted by the models. The STL sentence-level QE model outputs a high score of 0.25, while the score given by the MTL QE model is -0.64. It supports the observation that the *MTL QE model outputs are* more consistent.
Unlike the STL sentence-level QE models, the MTL QE models predict more justified quality scores when translations have only minor mistakes.
The translation in the first Ro-En example in Table 5 is a high-quality translation. In this translation, the word "overwhelming" could have been replaced with a better lexical item. The STL QE
model harshly penalizes the translation by predicting the z-score at -0.0164, while the MTL model predicts a more justifiable score (0.8149). Similar behaviour is reflected in the second Si-En example as well (last row). Even though the translation reflects the meaning of the source sentence adequately and is also fluent, the STL QE model predicts a low score of -0.35, while the MTL QE
| Source | STL | Nash-MTL Label | | |
|------------------------------------------------------|--------------------------------------------|------------------|-------|-------|
| Target | | | | |
| [En] It is close to the holy site where the Buddha | [Mr] ज्या पवित्र स्थळावर शतकानुशतकांपूर्वी बुद्धांचा | 0.25 | -0.64 | -0.64 |
| ages ago had turned wheel of Dharma and | जन्म झ ाला होता, त्या जागेच्या जवळच हे मंदिर आहे. | | | |
| Bddhism was born. | | | | |
| [En] Representative species of the reserve include | [Mr] या संरक्षित क्षेत्राच्या प्रजातींमध्ये बोम्बॅक्स सिबा | 0.08 | 0.27 | |
| Bombax ceiba (Cotton tree), Sterculia villosa (Hairy | ( कॉटन ट्री), स्टर्कुलिया विलोसा (हेरी स्टर्कुलिया) आणि | 0.14 | | |
| Sterculia) and Cassia fistula (Golden shower tree). | कसिया फिस्टुला (गोल्डन शाँवर ट्री) यांचा समावेश आहे. | | | |
| [Ro] Ulterior, SUA au primit mulţi dintre elefanţii | [En] Later, the US received many of the | | | |
| africani captivi din Zimbabwe, unde erau | -0.02 | 0.81 | 0.95 | |
| captive African elephants from Zimbabwe, | | | | |
| supraabundenţi. | where they were overwhelming. | | | |
| [En] The gold and silver were extracted from | | | | |
| [Ro] Aurul şi argintul erau extrase din Munţii | the Apuseni Mountains in Zlatna, Abrud, | | | |
| Apuseni la Zlatna, Abrud, Roşia, Brad, Baia de Cris | -0.37 | 0.67 | 0.83 | |
| Red, Brad, Baia de Cris and Baia de Arieş, | | | | |
| şi Baia de Arieş, Baia Mare, Rodna. | Baia Mare, Rodna. | | | |
| [En] Later in the morning, helicopter aircraft | | | | |
| [Si] రසුදා උදැසන හෙලිකොප්වර් යනා මගින් බලඝණ 2ක් | carried two powered triangular aircraft to | 0.43 | -0.51 | -1.03 |
| ත්රිකුණාමලය ගුවන් කඳවුරට ගෙනයනලදී. | the base. | | | |
| [Si] අනෙකුත් ගොවීහු කෘෂිකර්මාන්තයේ විවිධ ක්රම අත්හදා | [En] Other farmers who experimented with | -0.35 | 0.66 | 0.71 |
| බැලූ අය වූහ. | various methods of agriculture. | | | |
model rates the translation appropriately by predicting 0.66 as score.
We also observed that the MTL QE models have an edge when rating translations with many named entities. This can be seen through the second English-Marathi (Row 3), second Romanian-
English (Row 5), and first Sinhala-English (Row 6) examples in Table 5. The translations are of high quality in both examples, and the MTL QE
models rate them more appropriately than the STL
QE models.
## 7 Conclusion And Future Work
In this paper, we showed that jointly training a single, pre-trained cross-lingual transformer over the sentence-level and word-level QE tasks improves performance on both tasks. We evaluated our approach in three different settings: single-pair, multipair, and zero-shot. The results on both the QE
tasks show that the MTL-based models outperform their STL-based counterparts for multiple language pairs in the single-pair setting. Given the performance in the zero-shot setting, we see promising transfer-learning capabilities in our approach. Consistent scores across both QE tasks for the same inputs demonstrate the effectiveness of the MTL
method to QE. We release our MTL-based QE models and our code under the CC-BY-SA 4.0 license publicly for further research.
In future, we wish to extend this work and evaluate the MTL-based QE models in a few-shot setting to assess the effectiveness of transfer learning. Further, we would like to explore the usage of wordlevel QE and sentence-level QE to assist in the task of automatic post-editing for MT. We also wish to explore the use of language-relatedness for building multi-pair MTL-based QE models.
## Limitations
The experimental results suggest the possibility of our MTL-based QE approach being biased towards the word-level QE task, as the jointly trained QE
models show better performance improvements for the word-level QE task as compared to the sentencelevel QE task. Further, we also observe that our approach does not work well for language pairs with English as a source language (En-De and En-Mr).
The qualitative analysis of the English-Marathi MTL-based QE model shows that the model performs poorly when inputs are in the passive voice. Our multi-pair setting experiments use all seven language pairs. We do not consider properties like the similarity between the languages, translation directions, etc. , to group the language pairs. So it may be possible to achieve comparable performance using a subset of languages. We choose the Nash-MTL approach for MTL-based experiments because it has been compared with around ten other MTL techniques and it has been shown that the Nash-MTL approach outperforms them on different combinations of the tasks. In the current work, we have not experimentally analyzed how the Nash-MTL approach gives better improvements than the LS-MTL approach.
## Ethics Statement
Our MTL architectures are trained on multiple publicly available datasets referenced in this paper.
These datasets have been previously collected and annotated, and no new data collection has been carried out as part of this work. Furthermore, these are standard benchmarks that have been released in recent WMT shared tasks. No user information was present in the datasets protecting users' privacy and identity. We understand that every dataset is subject to intrinsic bias and that computational models will inevitably learn biased information from any dataset. That said, we also believe that our MTL
models will help diminish biases in QE as they provide an explainable aspect to the predictions through token-level labels.
## References
Ronan Collobert and Jason Weston. 2008. A unified architecture for natural language processing: Deep neural networks with multitask learning. In Proceedings of the 25th International Conference on Machine Learning, ICML '08, page 160–167, New York, NY,
USA. Association for Computing Machinery.
Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer, and Veselin Stoyanov. 2020. Unsupervised cross-lingual representation learning at scale. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*, pages 8440–
8451, Online. Association for Computational Linguistics.
Ross Girshick. 2015. Fast r-cnn. In *2015 IEEE International Conference on Computer Vision (ICCV)*, pages 1440–1448.
Yvette Graham. 2015. Improving evaluation of machine translation quality estimation. In *Proceedings* of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing
(Volume 1: Long Papers), pages 1804–1813, Beijing, China. Association for Computational Linguistics.
Yvette Graham, Timothy Baldwin, Meghan Dowling, Maria Eskevich, Teresa Lynn, and Lamia Tounsi.
2016. Is all that glitters in machine translation quality estimation really gold? In Proceedings of COLING
2016, the 26th International Conference on Computational Linguistics: Technical Papers, pages 3124–
3134, Osaka, Japan. The COLING 2016 Organizing Committee.
Julia Ive, Frédéric Blain, and Lucia Specia. 2018. deepQuest: A framework for neural-based quality estima-
tion. In *Proceedings of the 27th International Conference on Computational Linguistics*, pages 3146–
3157, Santa Fe, New Mexico, USA. Association for Computational Linguistics.
Fabio Kepler, Jonay Trénous, Marcos Treviso, Miguel Vera, and André F. T. Martins. 2019. OpenKiwi:
An open source framework for quality estimation.
In *Proceedings of the 57th Annual Meeting of the* Association for Computational Linguistics: System Demonstrations, pages 117–122, Florence, Italy. Association for Computational Linguistics.
Hyun Kim, Joon-Ho Lim, Hyun-Ki Kim, and SeungHoon Na. 2019. QE BERT: Bilingual BERT using multi-task learning for neural quality estimation. In Proceedings of the Fourth Conference on Machine Translation (Volume 3: Shared Task Papers, Day 2),
pages 85–89, Florence, Italy. Association for Computational Linguistics.
Pengfei Liu, Xipeng Qiu, and Xuanjing Huang. 2017.
Adversarial multi-task learning for text classification.
In *Proceedings of the 55th Annual Meeting of the* Association for Computational Linguistics (Volume 1: Long Papers), pages 1–10, Vancouver, Canada.
Association for Computational Linguistics.
Xiaodong Liu, Pengcheng He, Weizhu Chen, and Jianfeng Gao. 2019. Multi-task deep neural networks for natural language understanding. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4487–4496, Florence, Italy. Association for Computational Linguistics.
Varvara Logacheva, Chris Hokamp, and Lucia Specia.
2016. MARMOT: A toolkit for translation quality estimation at the word level. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16), pages 3671–
3674, Portorož, Slovenia. European Language Resources Association (ELRA).
Aviv Navon, Aviv Shamsian, Idan Achituve, Haggai Maron, Kenji Kawaguchi, Gal Chechik, and Ethan Fetaya. 2022. Multi-task learning as a bargaining game. In International Conference on Machine Learning, pages 16428–16446. PMLR.
Tharindu Ranasinghe, Constantin Orasan, and Ruslan Mitkov. 2020a. TransQuest at WMT2020: Sentencelevel direct assessment. In Proceedings of the Fifth Conference on Machine Translation, pages 1049–
1055, Online. Association for Computational Linguistics.
Tharindu Ranasinghe, Constantin Orasan, and Ruslan Mitkov. 2020b. TransQuest: Translation quality estimation with cross-lingual transformers. In *Proceedings of the 28th International Conference on Computational Linguistics*, pages 5070–5081, Barcelona, Spain (Online). International Committee on Computational Linguistics.
Tharindu Ranasinghe, Constantin Orasan, and Ruslan Mitkov. 2021. An exploratory analysis of multilingual word-level quality estimation with cross-lingual transformers. In *Proceedings of the 59th Annual* Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 434–440, Online. Association for Computational Linguistics.
Sebastian Ruder. 2017. An overview of multi-task learning in deep neural networks. *CoRR*, abs/1706.05098.
Lucia Specia, Frédéric Blain, Marina Fomicheva, Erick Fonseca, Vishrav Chaudhary, Francisco Guzmán, and André F. T. Martins. 2020. Findings of the WMT
2020 shared task on quality estimation. In *Proceedings of the Fifth Conference on Machine Translation*,
pages 743–764, Online. Association for Computational Linguistics.
Lucia Specia, Frédéric Blain, Marina Fomicheva, Chrysoula Zerva, Zhenhao Li, Vishrav Chaudhary, and André F. T. Martins. 2021. Findings of the WMT
2021 shared task on quality estimation. In *Proceedings of the Sixth Conference on Machine Translation*,
pages 684–725, Online. Association for Computational Linguistics.
Lucia Specia, Gustavo Paetzold, and Carolina Scarton.
2015. Multi-level translation quality prediction with QuEst++. In *Proceedings of ACL-IJCNLP 2015 System Demonstrations*, pages 115–120, Beijing, China.
Association for Computational Linguistics and The Asian Federation of Natural Language Processing.
Lucia Specia, Dhwaj Raj, and Marco Turchi. 2010. Machine translation evaluation versus quality estimation.
Machine translation, 24(1):39–50.
Lucia Specia, Kashif Shah, Jose G.C. de Souza, and Trevor Cohn. 2013. QuEst - a translation quality estimation framework. In *Proceedings of the 51st* Annual Meeting of the Association for Computational Linguistics: System Demonstrations, pages 79–84, Sofia, Bulgaria. Association for Computational Linguistics.
Jiayi Wang, Ke Wang, Boxing Chen, Yu Zhao, Weihua Luo, and Yuqi Zhang. 2021. QEMind: Alibaba's submission to the WMT21 quality estimation shared task.
In Proceedings of the Sixth Conference on Machine Translation, pages 948–954, Online. Association for Computational Linguistics.
Chrysoula Zerva, Frédéric Blain, Ricardo Rei, Piyawat Lertvittayakumjorn, José G. C. de Souza, Steffen Eger, Diptesh Kanojia, Duarte Alves, Constantin Orăsan, Marina Fomicheva, André F. T. Martins, and Lucia Specia. 2022. Findings of the wmt 2022 shared task on quality estimation. In *Proceedings* of the Seventh Conference on Machine Translation, pages 69–99, Abu Dhabi. Association for Computational Linguistics.
Xiangyun Zhao, Haoxiang Li, Xiaohui Shen, Xiaodan Liang, and Ying Wu. 2018. A modulation module for multi-task learning with applications in image retrieval. In *Computer Vision - ECCV 2018*, pages 415–432, Cham. Springer International Publishing.
## A Additional Training Details
The number of parameters for our STL QE models trained using the TransQuest framework is 125M
since we use the XLM-R base model variant for all experiments. This language model has 12 heads, with an embedding dimension of 768. The number of parameters in our MTL QE model is also approximately 125M.
Our total computation time for the STL models was approximately 60 hours, whereas the computation time for all experiments under LS-MTL
was approximately 22.5 hours. However, our bestperforming approach, *i.e.,* Nash-MTL, took approximately 41.25 hours.
Model **Setting** F1 r
STL
Single-Pair 0.3930 0.5215
Multi-Pair 0.4013 0.6711
Zero-Shot 0.3800 0.4552
LS-MTL
Single-Pair 0.4194 0.5563
Multi-Pair 0.4349 0.6514
Zero-Shot 0.3692 0.3869
Nash-MTL
Single-Pair 0.4662 0.5608
Multi-Pair 0.4815 **0.6704**
Zero-Shot 0.3833 0.4674
Table 6: Results obtained for the **En-Mr** Language pair.
Model **Setting** F1 r STL
Single-Pair 0.4852 0.7702
Multi-Pair 0.4902 0.7892
Zero-Shot 0.4175 0.7548
LS-MTL
Single-Pair 0.5383 0.7921
Multi-Pair 0.5406 **0.8012**
Zero-Shot 0.4472 0.7601
Nash-MTL
Single-Pair 0.5435 0.8005
Multi-Pair **0.5560** 0.8001
Zero-Shot 0.4480 0.7560
Table 7: Results obtained for the **Ne-En** Language pair.
## B Low-Resource Setting Results
Here, we try to compare the performance of our proposed approaches on low-resource language pairs, in all three settings and for both tasks, in a concise manner. Table 6, Table 7, and Table 8 show that the Nash-MTL-based QE approach in the multi-pair setting outperforms the single-pair settings for all the low-resources languages. Table 6 shows this comparison in terms of F1 for word-level QE and
Model **Setting** F1 r
STL
Single-Pair 0.6216 0.6402
Multi-Pair 0.5629 0.6653 Zero-Shot 0.4239 0.6416
LS-MTL
Single-Pair 0.6556 0.6533 Multi-Pair 0.6392 0.6837
Zero-Shot 0.4250 0.6434
Nash-MTL
Single-Pair 0.6946 0.6791
Multi-Pair 0.7003 **0.6957**
Zero-Shot 0.4407 0.6447
Pearson's (r) for the En-Mr language pair. Table 7, and 8 show the same results for Ne-En and Si-En, respectively.
## C Additional Single Pair Setting Results
We additionally report the results of winning submissions to the WMT21 and WMT22 QE shared tasks for the single-pair setting. Table 9 tabulates the results. Results obtained by the winning systems of WMT21 QE shared tasks are reported for all language pairs except English-Marathi. For the English-Marathi pair, we report the result achieved by the WMT22 shared task-winning systems. We report the F1-multi results for the word-level QE
task and Pearson's correlation (r) for the sentencelevel QE shared task.
| LP | Word-level | Sentence-level | | | | | | | | | | |
|-------|--------------|------------------|----------|--------|-------|--------|--------|--------|----------|--------|-------|-------|
| STL | LS-MTL | +/- % | Nash-MTL | +/- % | WMT | STL | LS-MTL | +/- % | Nash-MTL | +/- % | WMT | |
| En-Mr | 0.3930 | 0.4194 | 2.64% | 0.4662 | 7.32% | 0.5827 | 0.5215 | 0.5563 | 3.48% | 0.5608 | 3.93% | 0.604 |
| Ne-En | 0.4852 | 0.5383 | 5.31% | 0.5435 | 5.83% | 0.5693 | 0.7702 | 0.7921 | 2.19% | 0.8005 | 3.03% | 0.867 |
| Si-En | 0.6216 | 0.6556 | 3.40% | 0.6946 | 7.30% | 0.7140 | 0.6402 | 0.6533 | 1.31% | 0.6791 | 3.89% | 0.605 |
| Et-En | 0.4254 | 0.4971 | 7.17% | 0.5100 | 8.46% | 0.5140 | 0.7646 | 0.7905 | 2.59% | 0.7943 | 2.97% | 0.812 |
| Ro-En | 0.4446 | 0.4910 | 4.64% | 0.5273 | 8.27% | 0.5777 | 0.8952 | 0.8985 | 0.33% | 0.8960 | 0.08% | 0.908 |
| Ru-En | 0.3928 | 0.4208 | 2.80% | 0.4394 | 4.66% | 0.4480 | 0.7864 | 0.7994 | 1.30% | 0.8000 | 1.36% | 0.806 |
| En-De | 0.3996 | 0.4245 | 2.49% | 0.4467 | 4.71% | 0.4267 | 0.4005 | 0.4310 | 3.05% | 0.4433 | 4.28% | 0.584 |
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
✓ A1. Did you describe the limitations of your work?
It is an unnumbered section on page 8/9.
A2. Did you discuss any potential risks of your work?
Not applicable. Left blank.
✓ A3. Do the abstract and introduction summarize the paper's main claims?
Abstract and Introduction (Section 1)
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✓ **Did You Use Or Create Scientific Artifacts?**
We create computational models for the task of Quality Estimation. We discuss the complete details of model training and the dataset used in Section 5 and Section 3, respectively.
✓ B1. Did you cite the creators of artifacts you used?
Section 3
✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
Conclusion (Section 7)
B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
Not applicable. We are using publically available datasets from mlqe-pe github repository licensed under the CC-0 B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
Not applicable. Left blank.
B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
Not applicable. Left blank.
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
Section 3 (Datasets)
## C ✓ **Did You Run Computational Experiments?** Section 5
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
Appendix A
The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance.
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Section 5
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
Section 6
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
All libraries used are referred to or discussed in the paper. (Section 5)
D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
No response.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? No response.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
No response.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
No response.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
No response. |
peng-etal-2023-devil | The Devil is in the Details: On the Pitfalls of Event Extraction Evaluation | https://aclanthology.org/2023.findings-acl.586 | Event extraction (EE) is a crucial task aiming at extracting events from texts, which includes two subtasks: event detection (ED) and event argument extraction (EAE). In this paper, we check the reliability of EE evaluations and identify three major pitfalls: (1) The data preprocessing discrepancy makes the evaluation results on the same dataset not directly comparable, but the data preprocessing details are not widely noted and specified in papers. (2) The output space discrepancy of different model paradigms makes different-paradigm EE models lack grounds for comparison and also leads to unclear mapping issues between predictions and annotations. (3) The absence of pipeline evaluation of many EAE-only works makes them hard to be directly compared with EE works and may not well reflect the model performance in real-world pipeline scenarios. We demonstrate the significant influence of these pitfalls through comprehensive meta-analyses of recent papers and empirical experiments. To avoid these pitfalls, we suggest a series of remedies, including specifying data preprocessing, standardizing outputs, and providing pipeline evaluation results. To help implement these remedies, we develop a consistent evaluation framework OmniEvent, which can be obtained from \url{https://github.com/THU-KEG/OmniEvent}. | # The Devil Is In The Details: On The Pitfalls Of Event Extraction Evaluation
Hao Peng1∗, Xiaozhi Wang1∗, Feng Yao2∗**, Kaisheng Zeng**1, Lei Hou1,3, Juanzi Li1,3†, Zhiyuan Liu1,3**, Weixing Shen**2 1Department of Computer Science and Technology, BNRist; 2School of Law, Institute for AI and Law; 3KIRC, Institute for Artificial Intelligence, Tsinghua University, Beijing, 100084, China
{peng-h21, wangxz20, yaof20}@mails.tsinghua.edu.cn
## Abstract
Event extraction (EE) is a crucial task aiming at extracting events from texts, which includes two subtasks: event detection (ED) and event argument extraction (EAE). In this paper, we check the reliability of EE evaluations and identify three major pitfalls: (1) The data preprocessing discrepancy makes the evaluation results on the same dataset not directly comparable, but the data preprocessing details are not widely noted and specified in papers. (2) The **output space discrepancy**
of different model paradigms makes differentparadigm EE models lack grounds for comparison and also leads to unclear mapping issues between predictions and annotations. (3)
The **absence of pipeline evaluation** of many EAE-only works makes them hard to be directly compared with EE works and may not well reflect the model performance in realworld pipeline scenarios. We demonstrate the significant influence of these pitfalls through comprehensive meta-analyses of recent papers and empirical experiments. To avoid these pitfalls, we suggest a series of remedies, including specifying data preprocessing, standardizing outputs, and providing pipeline evaluation results. To help implement these remedies, we develop a consistent evaluation framework OMNIEVENT, which can be obtained from https://github.com/THU-KEG/OmniEvent.
## 1 Introduction
Event extraction (EE) is a fundamental information extraction task aiming at extracting structural event knowledge from plain texts. As illustrated in Figure 1, it is typically formalized as a two-stage pipeline (Ahn, 2006). The first subtask, event detection (ED), is to detect the event triggers (keywords or phrases evoking events, e.g., *quitting* in Figure 1)
and classify their event types (e.g., End-Position).
The second subtask, event argument extraction
∗ Equal contribution. Random Order. † Corresponding author: J.Li
![0_image_0.png](0_image_0.png)
Figure 1: An illustration for the event extraction (EE)
pipeline, including two stages: event detection (ED) and event argument extraction (EAE).
(EAE), is to extract corresponding event arguments and their roles (e.g., *Elon Musk* and its argument role Person) based on the first-stage ED results.
Since events play an important role in human language understanding and broad applications benefit from structural event knowledge (Ji and Grishman, 2011; Glavaš and Šnajder, 2014; Hogenboom et al., 2016; Zhang et al., 2020a), EE has attracted much research attention, and novel models have been continually developed. Beyond the conventional paradigms like classification (Chen et al., 2015; Wang et al., 2021) and sequence labeling (Nguyen et al., 2016; Chen et al., 2018),
new model paradigms such as span prediction (Liu et al., 2020a; Du and Cardie, 2020b) and conditional generation (Lu et al., 2021; Li et al., 2021b)
are proposed. These sophisticated models push evaluation results to increasingly high levels.
However, due to the complex input/output formats and task pipeline of EE, there are some hidden pitfalls in EE evaluations, which are rarely noted and discussed in EE papers (Wadden et al., 2019; Wang et al., 2020, 2022). These pitfalls make many competing EE methods actually lack grounds for comparison, and the reported scores cannot reflect real-world model performances well.
In this paper, we summarize three major pitfalls:
(1) **Data preprocessing discrepancy**. If two EE
works conduct evaluations on the same dataset but adopt different preprocessing methods, their results are not directly comparable. Since EE datasets have complex data formats (involving multiple heterogeneous elements including event triggers, arguments, entities, temporal expressions, etc.), data preprocessing methods of existing works often disagree on some design choices, like whether to include multi-token triggers, which results in major data discrepancies. For instance, for the widely-used English subset of ACE 2005 (Walker et al., 2006), the preprocessing of Wadden et al. (2019) gets 5, 055 event triggers, but Wang et al. (2021) have 5, 349.
(2) **Output space discrepancy**. Different model paradigms have inconsistent output spaces, which makes the evaluation metrics of different-paradigm models often not calculated on the same bases.
For example, the phrase *Elon Musk* is one argument candidate in the output space of conventional classification-based methods, and it is regarded as one error case when the model misclassifies it. But other model paradigms, like the sequence labeling, have more free output formats and can make two independent predictions for the two tokens *Elon* and *Musk*, which will account for two error cases in the evaluation metric calculation. Larger output spaces of the new model paradigms also result in unclear mappings between predictions and annotations in some cases, which are often overlooked in EE evaluation implementations and lead to problematic results. These details are presented in § 3.3.
(3) **Absence of pipeline evaluation**. Recent works handling only the EAE subtask often evaluate the performances based on gold event triggers (Subburathinam et al., 2019; Xi et al., 2021; Ma et al.,
2022). In contrast, conventional EE works often conduct pipeline evaluation, i.e., evaluate EAE performances based on triggers predicted at the ED
stage. The absence of pipeline evaluation makes these EAE-only works hard to be directly compared with EE works. This has discouraged the research community from considering all the EE subareas in a holistic view. Moreover, only using gold triggers in evaluation cannot evaluate EAE models' resistance to the noise of predicted triggers, which is important in real-world application scenarios.
We conduct systematic meta-analyses of EE papers and empirical experiments, demonstrating the pitfalls' broad and significant influence. We suggest a series of remedies to avoid these pitfalls, including specifying data preprocessing methods, standardizing outputs, and providing pipeline evaluation results. To help conveniently achieve these remedies, we develop a consistent evaluation framework, OMNIEVENT, which contains implementations for data preprocessing and output standardization, and off-the-shelf predicted triggers on widelyused datasets for easier pipeline evaluation.
To summarize, our contributions are two-fold:
(1) We systematically analyze the inconspicuous pitfalls of EE evaluations and demonstrate their significant influence with meta-analyses and experiments. (2) We propose corresponding remedies to avoid the pitfalls and develop a consistent evaluation framework to help implement them.
## 2 Related Work
Traditional methods (Ji and Grishman, 2008; Gupta and Ji, 2009; Hong et al., 2011; Li et al., 2013)
rely on human-crafted features and rules to extract events. Most modern EE models automate feature learning with neural networks (Nguyen and Grishman, 2015; Nguyen et al., 2016; Nguyen and Grishman, 2018) and adopt different model paradigms to model the EE task. The most common **classification**-based methods view EE as classifying given trigger and argument candidates into different labels (Chen et al., 2015; Feng et al., 2016; Chen et al., 2017; Liu et al., 2018b; Wang et al.,
2019a; Lai et al., 2020; Wang et al., 2021, 2022).
Sequence labeling methods (Nguyen et al., 2016; Chen et al., 2018; Araki and Mitamura, 2018; Ding et al., 2019; Ma et al., 2020; Nguyen et al., 2021; Guzman-Nateras et al., 2022) do EE by labeling every word following a certain tagging schema such as BIO (Ramshaw and Marcus, 1995). Recently, some works (Du and Cardie, 2020b; Li et al.,
2020a; Liu et al., 2020a, 2021b; Wei et al., 2021; Sheng et al., 2021; Zhou et al., 2022) propose to cast the task formalization of EE into resource-rich machine reading comprehension tasks and adopt the **span prediction** paradigm to predict the starting and ending positions of event trigger and argument spans. With the development of generative pre-trained language models (Lewis et al., 2020; Raffel et al., 2020; Brown et al., 2020), there have been works (Lu et al., 2021; Xi et al., 2021; Li et al., 2021b, 2022a; Liu et al., 2022c; Huang et al., 2022; Du et al., 2022; Hsu et al., 2022; Zeng et al., 2022)
exploring the **conditional generation** paradigm to generate sequences indicating EE results.
A few previous works (Wadden et al., 2019; Lai et al., 2020; Wang et al., 2020, 2022) have noted that data preprocessing discrepancy may influence evaluation results, but they did not especially study its impact with in-depth analyses. To the best of our knowledge, we are the first to study all three kinds of pitfalls of EE evaluation and propose comprehensive remedies for them.
## 3 Pitfalls Of Event Extraction Evaluation
We first introduce our investigation setup for metaanalysis and empirical analysis (§ 3.1). Then we analyze the three pitfalls: data preprocessing discrepancy (§ 3.2), output space discrepancy (§ 3.3),
and absence of pipeline evaluation (§ 3.4).
## 3.1 Investigation Setup
We adopt the following two investigation methods to analyze the influence of the observed pitfalls.
Meta-Analysis To comprehensively understand the research status and investigate the potential influence of the evaluation pitfalls, we analyze a broad range of recent EE studies in the metaanalysis. Specifically, we manually retrieve all published papers concerning EE, ED, and EAE tasks at four prestigious venues from 2015 to 2022 via keyword1 matching and manual topic rechecking by the authors. The complete paper list is shown in appendix C, including 44 at ACL, 39 at EMNLP,
19 at NAACL, and 14 at COLING.
We conduct statistical analyses of these papers and their released codes (if any) from multiple perspectives. These statistics will be presented to demonstrate the existence and influence of the pitfalls in the following sections, respectively.
Empirical Analysis In addition to the metaanalysis, we conduct empirical experiments to quantitatively analyze the pitfalls' influence on EE
evaluation results. We reproduce several representative models covering all four model paradigms mentioned in § 2 to systematically study the influence. Specifically, the models contain: (1) **Classifcation** methods, including DMCNN (Chen et al.,
2015) , DMBERT (Wang et al., 2019a,b), and CLEVE (Wang et al., 2021). DMCNN and DMBERT adopt a dynamic multi-pooling operation over hidden representations of convolutional neural networks and BERT (Devlin et al., 2019), respectively. CLEVE is an event-aware pre-trained model enhanced with event-oriented contrastive pre-training. (2) **Sequence labeling** methods, including BiLSTM+CRF (Wang et al., 2020) and BERT+CRF (Wang et al., 2020), which adopt the conditional random field (Lafferty et al., 2001)
1We use *event* and *extraction* as keywords for searching.
| ED | EAE | | | | | |
|------------|-------|------|------|------|------|------|
| Metric | P | R | F1 | P | R | F1 |
| DMCNN | 65.0 | 69.7 | 67.2 | 45.3 | 41.6 | 43.2 |
| DMBERT | 72.1 | 77.1 | 74.5 | 50.5 | 60.0 | 54.8 |
| CLEVE | 76.4 | 80.4 | 78.3 | 56.9 | 65.9 | 61.0 |
| BiLSTM+CRF | 72.3 | 79.1 | 75.5 | 27.1 | 32.3 | 29.4 |
| BERT+CRF | 69.9 | 74.6 | 72.1 | 41.4 | 43.6 | 42.5 |
| EEQA | 65.3 | 74.5 | 69.5 | 49.7 | 45.4 | 47.4 |
| PAIE | N/A | N/A | N/A | 70.6 | 73.2 | 71.8 |
| Text2Event | 66.9 | 72.4 | 69.5 | 48.0 | 54.1 | 50.8 |
as the output layer to make structural predictions. (3) **Span prediction** methods, including EEQA (Du and Cardie, 2020b) converting EE
into a question-answering task, and PAIE (Ma et al., 2022), which is a prompt-tuning-based EAE
method. (4) **Conditional generation** method, including Text2Event (Lu et al., 2021), which is a sequence-to-structure generative EE method with constrained decoding and curriculum learning.
The models are reproduced based on the evaluation settings described in their original papers and released open-source codes (if any). From our meta-analysis, 70% of the EE papers adopt the English subset of ACE 2005 dataset (Walker et al.,
2006)
2in their experiments. Hence we also adopt this most widely-used dataset in our empirical experiments to analyze the pitfalls without loss of generality. The reproduction performances are shown in Table 1. Following the conventional practice, we report precision (P), recall (R), and the F1 score.
In the following analyses, we show the impact of three pitfalls by observing how the performances change after controlling the pitfalls' influence.
## 3.2 Data Preprocessing Discrepancy
Due to the inherent task complexity, EE datasets naturally involve multiple heterogeneous annotation elements. For example, besides event triggers and arguments, EE datasets often annotate entities, temporal expressions, and other spans as argument candidates. The complex data format makes the data preprocessing methods easily differ in many details, which makes the reported results on the same dataset not directly comparable. However, this pitfall has not received extensive attention.
To carefully demonstrate the differences brought by data preprocessing discrepancy, we conduct de-2For brevity, refer to as "ACE 2005" in the following.
| Paper% | #Token | #Trigger | #Argument | #Event Type | #Arg. Role | #Tri. Candidate | #Arg. Candidate | |
|-------------|----------|------------|-------------|---------------|--------------|-------------------|-------------------|---------|
| ACE-DYGIE | 14 | 305, 266 | 5, 055 | 6, 040 | 33 | 22 | 305, 266 | 34, 474 |
| ACE-OneIE | 19 | 310, 020 | 5, 311 | 8, 055 | 33 | 22 | 309, 709 | 54, 650 |
| ACE-Full | 4 | 300, 477 | 5, 349 | 9, 683 | 33 | 35 | 300, 165 | 59, 430 |
| Unspecified | 63 | - | - | - | - | - | - | - |
| ACE-DYGIE | ACE-OneIE | ACE-Full | |
|------------------|-------------|------------|---------|
| NLP Toolkit | spaCy | NLTK | CoreNLP |
| Entity Mention | head | head | full |
| Multi-token Tri. | ✕ | ✔ | ✔ |
| Temporal Exp. | ✕ | ✕ | ✔ |
| Value Exp. | ✕ | ✕ | ✔ |
| Pronoun | ✕ | ✔ | ✔ |
| ACE-DYGIE | ACE-OneIE | ACE-Full | | | | |
|-------------|----------------------------------------------|------------|------|------|------|-------|
| Metric | ∆ED F1 ∆EAE F1 ∆ED F1 ∆EAE F1 ∆ED F1 ∆EAE F1 | | | | | |
| DMCNN | −4.7 | −9.2 | −4.3 | −8.0 | − | − |
| DMBERT | −6.3 | −6.7 | −5.2 | −7.6 | − | − |
| CLEVE | −5.4 | −6.2 | −3.3 | −6.3 | − | − |
| BiLSTM+CRF | −3.8 | +3.1 | −4.1 | +3.2 | − | − |
| BERT+CRF | −4.2 | +2.4 | −4.2 | +3.4 | − | − |
| EEQA | − | − | −0.5 | +0.1 | +3.6 | −4.1 |
| PAIE | N/A | − | N/A | −0.7 | N/A | −15.2 |
| Text2Event | − | − | +2.5 | +3.0 | +4.7 | −1.0 |
tailed meta-analyses taking the most widely-used ACE 2005 as an example. From all the 116 surveyed papers, we find three repetitively used opensource preprocessing scripts: ACE-DYGIE (Wadden et al., 2019), ACE-OneIE (Lin et al., 2020), and ACE-Full (Wang et al., 2019b). In addition to these scripts, there are 6 other open-source preprocessing scripts that are only used once. The utilization rates and data statistics of the different preprocessing methods are shown in Table 2. From the statistics, we can observe that: (1) The data differences brought by preprocessing methods are significant. The differences mainly come from the different preprocessing implementation choices, as summarized in Table 3. For instance, ACE-DYGIE and ACE-OneIE ignore the annotated temporal expressions and values in ACE 2005, which results in 13 fewer argument roles compared to ACE-Full.
Intuitively, the significant data discrepancy may result in inconsistent evaluation results. (2) Each preprocessing script has a certain utilization rate and the majority (63%) papers do not specify their preprocessing methods. The high preprocessing inconsistency and Unspecified rate both show that our community has not fully recognized the significance of the discrepancies resulting from differences in data preprocessing.
To further empirically investigate the influence of preprocessing, we conduct experiments on ACE
2005. Table 4 shows the F1 differences keeping all settings unchanged except for the preprocessing scripts. We can observe that the influence of different preprocessing methods is significant and varies from different models. It indicates that the evaluation results on the same dataset are not necessarily comparable due to the unexpectedly large influence of different preprocessing details.
Moreover, besides ACE 2005, there are also data preprocessing discrepancies in other datasets. For example, in addition to the implementation details, the data split of the KBP dataset is not always consistent (Li et al., 2021a, 2022a), and some used LDC3 datasets are not freely available, such as LDC2015E29. Based on all the above analyses, we suggest the community pay more attention to data discrepancies caused by preprocessing, and we propose corresponding remedies in § 4.1.
## 3.3 Output Space Discrepancy
As shown in Figure 3, the diversity of adopted model paradigms in EE studies has substantially increased in recent years. Figure 2 illustrates the different paradigms' workflows in EAE scenario4.
The paradigms inherently have very different output spaces, which results in inconspicuous pitfalls
![4_image_1.png](4_image_1.png)
**Person**
![4_image_0.png](4_image_0.png)
Prediction Mechanisms Output Predictions
![4_image_2.png](4_image_2.png)
in the comparative evaluations across paradigms.
Inconsistent Output Spaces between Different Paradigms As shown in Figure 2, there are substantial differences between the model output spaces of different paradigms. CLS-paradigm models only output a unique label for each candidate in a pre-defined set. While models of SL and SP
paradigms can make predictions for any consecutive spans in the input sequence. The output space of CG-paradigm models is even larger, as their vanilla5 output sequences are completely free, e.g.,
they can even involve tokens unseen in the input.
The inconsistent output spaces make the evaluation metrics of different-paradigm models calculated on different bases and not directly comparable. For instance, when calculating the confusion matrices for the prediction *as Chief Executive of* in Fig-5Indicates excluding tricks like vocabulary constraint, etc.
ure 2, the CLS paradigm takes it as one true positive (TP) and two false positives (FP), while the remaining paradigms only count it as one FP. The CLS paradigm may also have an advantage in some cases since it is constrained by the pre-defined candidate sets and cannot make illegal predictions as other paradigms may have.
Unclear Mappings between Predictions and Annotations Implementing the mappings between model predictions and dataset annotations is a key component for evaluation. The larger output spaces of SL, SP, and CG paradigms often produce unclear mappings, which are easily neglected in the EE evaluation implementations and influence the final metrics. As shown in Figure 2 (bottom right),
we summarize three major unclear mapping issues: ⃝1 **Prediction span overlaps the gold span.**
A prediction span of non-CLS paradigm models may overlap but not strictly align with the annotated span, bringing in an unclear implementation choice. As in Figure 2, it is unclear whether the predicted role Position for the span *as Chief Executive of* should be regarded as a correct prediction for the contained annotated span *Chief Executive*.
⃝2 **Multiple predictions for one annotated span.**
If without special constraints, models of SP and CG paradigms may make multiple predictions for one span. Figure 2 presents two contradictory predictions (Company and Person) for the annotated span *Elon Musk*. To credit the correct one only or penalize both should lead to different evaluation
⚠️
⚠️ ⚠️
![5_image_1.png](5_image_1.png)
results. ⃝3 **Predictions without positions for nonunique spans.** Vanilla CG-paradigm models make predictions by generating contents without specifying their positions. When the predicted spans are non-unique in the inputs, it is unclear how to map them to annotated spans in different positions.
As in Figure 2, the CG model outputs two *Twitter* predictions, which can be mapped to two different input spans.
To quantitatively demonstrate the influence of output space discrepancy, we conduct empirical experiments. Specifically, we propose an output standardization method (details in § 4.2), which unify the output spaces of different paradigms and handle all the unclear mapping issues. We report the changes in metrics between the original evaluation implementations and the evaluation with our output standardization in Table 5. We can see the results change obviously, with the maximum increase and decrease of +2.8 in ED precision and −3.5 in EAE
recall, respectively. It indicates the output space discrepancy can lead to highly inconsistent evaluation results. Hence, we advocate for awareness of the output space discrepancy in evaluation implementations and suggest doing output standardization when comparing models using different paradigms.
## 3.4 Absence Of Pipeline Evaluation
The event extraction (EE) task is typically formalized as a two-stage pipeline, i.e., first event detection (ED) and then event argument extraction
(EAE). In real applications, EAE is based on ED
and only extracts arguments for triggers detected by the ED model. Therefore, the conventional evaluation of EAE is based on predicted triggers and considers ED prediction errors, which we call **pipeline**
evaluation. It assesses the overall performance of
![5_image_0.png](5_image_0.png)
an event extraction system and is consistent with real-world pipeline application scenarios.
However, as shown in Figure 4, more and more works have focused only on EAE in recent years.
For convenience and setting a unified evaluation base between the EAE-only works, 95.45% of them only evaluate EAE taking gold triggers as inputs. We dub this evaluation setting as **gold**
trigger evaluation. The conventional pipeline evaluation of EE works is absent in most EAE-only works, which poses two issues: (1) The absence of pipeline evaluation makes the results of EAE-only works hard to be directly cited and compared in EE studies. In the papers covered by our meta-analysis, there is nearly no direct comparison between EE methods and EAE-only methods. It indicates that the evaluation setting difference has created a gap between the two closely connected research tasks, which hinders the community from comprehensively understanding the research status.
(2) The gold trigger evaluation may not well reflect the real-world performance since it ignores the EAE models' resistance to trigger noise. In real-world applications, the input triggers for EAE
models are noisy predicted triggers. A good EAE
method should be resistant to trigger noise, e.g.,
not extracting arguments for false positive triggers.
The gold trigger evaluation neglects trigger noise.
To assess the potential influence of this pitfall, we compare experimental results under the gold trigger evaluation and pipeline evaluation of various models in Table 6. We can observe different trends from the results of gold trigger evaluation and pipeline evaluation. For example, although DMBERT performs much better than BERT+CRF under gold trigger evaluation, they perform nearly the same under pipeline evaluation (47.2 vs. 47.1).
It suggests that the absence of pipeline evalua-
| Metric | ED F1 | Gold Tri. | Pipeline |
|------------|---------|-------------|------------|
| EAE F1 | EAE F1 | | |
| DMCNN | 62.8 | 51.6 | 35.2 |
| DMBERT | 69.4 | 67.2 | 47.2 |
| CLEVE | 75.0 | 69.6 | 54.7 |
| BiLSTM+CRF | 72.4 | 45.3 | 34.9 |
| BERT+CRF | 69.2 | 64.3 | 47.1 |
| EEQA | 69.1 | 63.9 | 45.0 |
| PAIE | 75.0 | 73.2 | 56.7 |
tion may bring obvious result divergence, which is rarely noticed in existing works. Based on the above discussions, we suggest also conducting the pipeline evaluation in EAE works.
## 4 Consistent Evaluation Framework
The above analyses show that the hidden pitfalls substantially harm the consistency and validity of EE evaluation. We propose a series of remedies to avoid these pitfalls and develop a consistent evaluation framework, OMNIEVENT. OMNIEVENT helps to achieve the remedies and eases users of handling the inconspicuous preprocessing and evaluation details. It is publicly released and continually maintained to handle emerging evaluation pitfalls. The suggested remedies include specifying data preprocessing (§ 4.1), standardizing outputs (§ 4.2), and providing pipeline evaluation results (§ 4.3). We further re-evaluate various EE models using our framework and analyze the results in § 4.4.
## 4.1 Specify Data Preprocessing
As analyzed in § 3.2, preprocessing discrepancies have an obvious influence on evaluation results.
The research community should pay more attention to data preprocessing details and try to specify them. Specifically, we suggest future EE works adopt a consistent preprocessing method on the same dataset. Regarding the example in § 3.2, for the multiple ACE 2005 preprocessing scripts, we recommend ACE-Full since it retains the most comprehensive event annotations, e.g., multi-token triggers and the time-related argument roles, which are commonly useful in real-world applications. If a study has to use different preprocessing methods for special reasons, we suggest specifying the preprocessing method with reference to public codes.
However, there are no widely-used publicly available preprocessing scripts for many EE datasets, which makes many researchers have to re-develop their own preprocessing methods. In our consistent evaluation framework, we provide preprocessing scripts for various widely-used datasets, including ACE 2005 (Walker et al., 2006), TAC KBP
Event Nugget Data 2014-2016 (Ellis et al., 2014, 2015, 2016), TAC KBP 2017 (Getman et al., 2017),
RichERE (Song et al., 2015), MAVEN (Wang et al.,
2020), LEVEN (Yao et al., 2022), DuEE (Li et al.,
2020b), and FewFC (Zhou et al., 2021). We will continually add the support of more datasets, such as RAMS (Ebner et al., 2020) and WikiEvents (Li et al., 2021b), and we welcome the community to contribute scripts for more datasets.
## 4.2 Standardize Outputs
Based on the discussions about output space discrepancy in § 3.3, we propose and implement an output standardization method in our framework.
To mitigate the inconsistency of output spaces between paradigms, we project the outputs of nonCLS paradigm models onto the most strict CLSparadigm output space. Specifically, we follow strict boundary-matching rules to assign the nonCLS predictions to each trigger/argument candidate in pre-defined candidate sets of the CLS paradigm.
The final evaluation metrics are computed purely on the candidate sets, and those predictions that fail to be matched are discarded. The intuition behind this operation is that given the CLS-paradigm candidate sets are automatically constructed, the illegal predictions out of this scope can also be automatically filtered in real-world applications.
Regarding the unclear mappings between predictions and annotations, we consider the scenario of real-world applications and propose several deterministic mapping rules for consistent evaluations.
We respond to the issues mentioned in § 3.3 as follows. ⃝1 **Prediction span overlaps the gold**
span. We follow strict boundary-matching rules and discard such overlapping predictions. For example, the SL prediction of *as Chief Executive of* cannot strictly match any candidate in the candidate set of the CLS paradigm. Hence it is discarded after output standardization. ⃝2 **Multiple predictions for one annotated span.** If the outputs are with confidence scores, we choose the prediction
| Original Evaluation | Consistent Evaluation | | | | | | | | | | | |
|-----------------------|-------------------------|------|------|--------|--------|--------|------|------|------|------|------|------|
| ED | EAE | ED | EAE | | | | | | | | | |
| Metric | P | R | F1 | P | R | F1 | P | R | F1 | P | R | F1 |
| DMCNN | 75.6 | 63.6 | 69.1 | 62.2 | 46.9 | 53.5 | 65.0 | 69.7 | 67.2 | 45.3 | 41.6 | 43.2 |
| DMBERT | 77.6 | 71.8 | 74.6 | 58.8 | 55.8 | 57.2 | 72.1 | 77.1 | 74.5 | 50.5 | 60.0 | 54.8 |
| CLEVE | 78.1 | 81.5 | 79.8 | 55.4 | 68.0 | 61.1 | 76.4 | 80.4 | 78.3 | 56.9 | 65.9 | 61.0 |
| BiLSTM+CRF | 77.2 | 74.9 | 75.4 | 27.1 ∗ | 32.3 ∗ | 29.5 ∗ | 74.2 | 78.9 | 76.5 | 42.8 | 32.4 | 36.9 |
| BERT+CRF | 71.3 | 77.1 | 74.1 | 41.4 ∗ | 43.6 ∗ | 42.5 ∗ | 72.4 | 74.5 | 73.4 | 55.6 | 43.2 | 48.6 |
| EEQA | 71.1 | 73.7 | 72.4 | 56.9 | 49.8 | 53.1 | 70.5 | 77.3 | 73.6 | 65.8 | 25.5 | 36.4 |
| PAIE | N/A | N/A | N/A | 70.6 ∗ | 73.2 ∗ | 72.7 | N/A | N/A | N/A | 61.4 | 46.2 | 52.7 |
| Text2Event | 69.6 | 74.4 | 71.9 | 52.5 | 55.2 | 53.8 | 76.1 | 74.5 | 75.2 | 59.6 | 43.0 | 50.0 |
with the highest confidence as the final prediction, otherwise, we simply choose the first appearing prediction. The remaining predictions are discarded.
⃝3 **Predictions without positions for non-unique**
spans. We assign such predictions to the annotated spans simply by their appearing order in the output/input sequence to avoid information leakage. We encourage designing new models or postprocessing rules to add positional information for CG predictions so that this issue can be directly solved by strict boundary-matching.
## 4.3 Provide Pipeline Evaluation Results
The absence of pipeline evaluation (§ 3.4) creates a gap between EE and EAE works, and may not well reflect EAE models' performance in real-world scenarios. Therefore, in addition to the common gold trigger evaluation results, we suggest future EAEonly works also provide pipeline evaluation results.
However, there are two difficulties: (1) It is an extra overhead for the EAE-only works to implement an ED model and get predicted triggers on the datasets. (2) If two EAE models use different predicted triggers, their evaluation results are not directly comparable since the trigger quality influences EAE performance. To alleviate these difficulties, our consistent evaluation framework releases off-the-shelf predicted triggers for the widely-used EE datasets, which will help future EAE works conduct easy and consistent pipeline evaluations. The released predicted triggers are generated with existing top-performing ED models so that the obtained pipeline evaluation results shall help the community to understand the possible EE performance of combining top ED and EAE models.
## 4.4 Experimental Results
We re-evaluate various EE models with our consistent evaluation framework. The results are shown in Table 7, and we can observe that: (1) If we are not aware of the pitfalls of EE evaluation, we can only understand EE development status and compare competing models from the "Original Evaluation" results in Table 7. After eliminating the influence of the pitfalls with our framework, the consistent evaluation results change a lot in both absolute performance levels and relative model rankings.
This comprehensively demonstrates the influence of the three identified evaluation pitfalls on EE research and highlights the importance of awareness of these pitfalls. Our framework can help avoid the pitfalls and save efforts in handling intensive evaluation implementation details. (2) Although the changes in F1 scores are minor for some models (e.g., CLEVE), their precision and recall scores vary significantly. In these cases, consistent evaluation is also necessary since real-world applications may have different precision and recall preferences.
## 5 Conclusion And Future Work
In this paper, we identify three pitfalls of event extraction evaluation, which are data preprocessing discrepancy, output space discrepancy, and absence of pipeline evaluation. Meta-analyses and empirical experiments present a huge impact of these pitfalls, which urges the attention of our research community. To avoid the pitfalls, we suggest a series of remedies, including specifying data preprocessing, standardizing outputs, and providing pipeline evaluation results. We develop a consistent evaluation framework OMNIEVENT, to help future works implement these remedies. In the future, we will continually maintain it to well handle more emerging EE datasets, model paradigms, and other possible hidden evaluation pitfalls.
## Limitations
The major limitations of our work are three-fold:
(1) In the empirical experiments, we only train and evaluate models on English datasets. As the analyzed pitfalls are essentially language-independent, we believe the empirical conclusions could generalize to other languages. The developed consistent evaluation framework now includes multiple English and Chinese datasets, and we will extend it to support more languages in the future. (2) The three pitfalls analyzed in this paper are identified from our practical experiences and may not cover all the pitfalls of EE evaluation. We encourage the community to pay more attention to finding other possible hidden pitfalls of EE evaluation. We will also continually maintain the proposed consistent evaluation framework to support mitigating the influence of newly-found pitfalls. (3) Our meta-analysis only covers papers published at ACL,
EMNLP, NAACL, and COLING on mainstream EE research since 2015. Although we believe that we can obtain representative observations from the 116 surveyed papers, some EE works published at other venues and at earlier times are missed.
## Ethical Considerations
We discuss the ethical considerations and broader impact of this work here: (1) **Intellectual property**. The copyright of ACE 2005 belongs to LDC6.
We access it through our LDC membership and strictly adhere to its license. We believe the established ACE 2005 dataset is desensitized. In our consistent evaluation framework, we will only provide preprocessing scripts rather than preprocessed datasets for those datasets whose licenses do not permit redistribution. The ACE-DYGIE preprocessing script7and the used code repositories for DMCNN8, DMBERT8, BiLSTM+CRF8, BERT+CRF8, EEQA9, and Text2Event10 are released under MIT
license11. These are all public research resources.
6https://www.ldc.upenn.edu/ 7https://github.com/dwadden/dygiepp 8https://github.com/THU-KEG/MAVEN-dataset 9https://github.com/xinyadu/eeqa 10https://github.com/luyaojie/Text2Event 11https://opensource.org/licenses/MIT
We use them for the research purpose in this work, which is consistent with their intended use. (2) **Intended use**. Our consistent evaluation framework implements the suggested remedies to avoid the identified pitfalls in EE evaluation. Researchers are supposed to use this framework to conduct consistent evaluations for comparing various competing EE models. (3) **Misuse risks**. The results reported in this paper and the evaluation results produced by our consistent evaluation framework **should not**
be used for offensive arguments or interpreted as implying misconduct of other works. The analyzed pitfalls in this work are inconspicuous and very easy to be accidentally overlooked. Hence the community is generally unaware of them or underestimates their influence. The contribution of our work lies in raising awareness of the pitfalls and helping to avoid them in future works. (4) **Accessibility**.
Many widely-used datasets (such as ACE 2005, KBP, etc.) are not freely available to everyone. The financial fairness issue may influence the broader usage of the data for EE research.
## References
David Ahn. 2006. The stages of event extraction. In Proceedings of ACL Workshop on Annotating and Reasoning about Time and Events, pages 1–8.
Jun Araki and Teruko Mitamura. 2018. Open-domain event detection using distant supervision. In *Proceedings of COLING*, pages 878–891.
Giannis Bekoulis, Johannes Deleu, Thomas Demeester, and Chris Develder. 2019. Sub-event detection from twitter streams as a sequence labeling problem. In Proceedings of NAACL-HLT, pages 745–750.
Ofer Bronstein, Ido Dagan, Qi Li, Heng Ji, and Anette Frank. 2015. Seed-based event trigger labeling: How far can event descriptions get us? In *Proceedings of* ACL-IJCNLP, pages 372–376.
Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. In *Proceedings of NeurIPS*, volume 33, pages 1877–1901.
Hu Cao, Jingye Li, Fangfang Su, Fei Li, Hao Fei, Shengqiong Wu, Bobo Li, Liang Zhao, and Donghong Ji. 2022. OneEE: A one-stage framework for fast overlapping and nested event extraction. In Proceedings of COLING, pages 1953–1964.
Pengfei Cao, Yubo Chen, Jun Zhao, and Taifeng Wang.
2020. Incremental event detection via knowledge consolidation networks. In *Proceedings of EMNLP*,
pages 707–717.
Yee Seng Chan, Joshua Fasching, Haoling Qiu, and Bonan Min. 2019. Rapid customization for event extraction. In *Proceedings of ACL: System Demonstrations*, pages 31–36.
Jiawei Chen, Hongyu Lin, Xianpei Han, and Le Sun.
2021. Honey or poison? Solving the trigger curse in few-shot event detection via causal intervention. In Proceedings of EMNLP, pages 8078–8088.
Yubo Chen, Shulin Liu, Xiang Zhang, Kang Liu, and Jun Zhao. 2017. Automatically Labeled Data Generation for Large Scale Event Extraction. In *Proceedings of ACL*, pages 409–419.
Yubo Chen, Liheng Xu, Kang Liu, Daojian Zeng, and Jun Zhao. 2015. Event extraction via dynamic multipooling convolutional neural networks. In *Proceedings of ACL-IJCNLP*, pages 167–176.
Yubo Chen, Hang Yang, Kang Liu, Jun Zhao, and Yantao Jia. 2018. Collective event detection via a hierarchical and bias tagging networks with gated multi-level attention mechanisms. In Proceedings of EMNLP, pages 1267–1276.
Xin Cong, Shiyao Cui, Bowen Yu, Tingwen Liu, Wang Yubin, and Bin Wang. 2021. Few-Shot Event Detection with Prototypical Amortized Conditional Random Field. In *Findings of ACL-IJCNLP*, pages 28–
40.
Shiyao Cui, Bowen Yu, Tingwen Liu, Zhenyu Zhang, Xuebin Wang, and Jinqiao Shi. 2020. Edge-enhanced graph convolution networks for event detection with syntactic relation. In *Findings of EMNLP*, pages 2329–2339.
Shumin Deng, Ningyu Zhang, Luoqiu Li, Chen Hui, Tou Huaixiao, Mosha Chen, Fei Huang, and Huajun Chen. 2021. OntoED: Low-resource event detection with ontology embedding. In *Proceedings of ACLIJCNLP*, pages 2828–2839.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. In *Proceedings of NAACL-HLT*, pages 4171–4186.
Ning Ding, Ziran Li, Zhiyuan Liu, Haitao Zheng, and Zibo Lin. 2019. Event detection with trigger-aware lattice neural network. In *Proceedings of EMNLPIJCNLP*, pages 347–356.
Xinya Du and Claire Cardie. 2020a. Document-level event role filler extraction using multi-granularity contextualized encoding. In *Proceedings of ACL*,
pages 8010–8020.
Xinya Du and Claire Cardie. 2020b. Event extraction by answering (almost) natural questions. In Proceedings of EMNLP, pages 671–683.
Xinya Du, Sha Li, and Heng Ji. 2022. Dynamic global memory for document-level argument extraction. In Proceedings of ACL, pages 5264–5275.
Seth Ebner, Patrick Xia, Ryan Culkin, Kyle Rawlins, and Benjamin Van Durme. 2020. Multi-sentence argument linking. In *Proceedings of ACL*, pages 8057–8077.
Joe Ellis, Jeremy Getman, Dana Fore, Neil Kuster, Zhiyi Song, Ann Bies, and Stephanie M Strassel. 2015. Overview of linguistic resources for the TAC KBP
2015 evaluations: Methodologies and results. In TAC.
Joe Ellis, Jeremy Getman, Dana Fore, Neil Kuster, Zhiyi Song, Ann Bies, and Stephanie M Strassel. 2016.
Overview of Linguistic Resources for the TAC KBP
2016 Evaluations: Methodologies and Results. In TAC.
Joe Ellis, Jeremy Getman, and Stephanie M Strassel.
2014. Overview of linguistic resources for the TAC
KBP 2014 evaluations: Planning, execution, and results. In TAC.
Kurt Junshean Espinosa, Makoto Miwa, and Sophia Ananiadou. 2019. A search-based neural model for biomedical nested and overlapping event detection.
In *Proceedings of EMNLP-IJCNLP*, pages 3679–
3686.
Xiaocheng Feng, Lifu Huang, Duyu Tang, Heng Ji, Bing Qin, and Ting Liu. 2016. A language-independent neural network for event detection. In *Proceedings* of ACL, pages 66–71.
Tao Ge, Lei Cui, Baobao Chang, Zhifang Sui, and Ming Zhou. 2016. Event detection with burst information networks. In *Proceedings of COLING*, pages 3276–
3286.
Jeremy Getman, Joe Ellis, Zhiyi Song, Jennifer Tracey, and Stephanie Strassel. 2017. Overview of linguistic resources for the tac kbp 2017 evaluations: Methodologies and results. In TAC.
Reza Ghaeini, Xiaoli Fern, Liang Huang, and Prasad Tadepalli. 2016. Event nugget detection with forward-backward recurrent neural networks. In *Proceedings of ACL*, pages 369–373.
Goran Glavaš and Jan Šnajder. 2014. Event graphs for information retrieval and multi-document summarization. *Expert systems with applications*, 41(15):6904–
6916.
Prashant Gupta and Heng Ji. 2009. Predicting Unknown Time Arguments based on Cross-Event Propagation.
In *Proceedings of ACL-IJCNLP*, pages 369–372.
Luis Guzman-Nateras, Minh Van Nguyen, and Thien Nguyen. 2022. Cross-lingual event detection via optimized adversarial training. In Proceedings of NAACL-HLT, pages 5588–5599.
Frederik Hogenboom, Flavius Frasincar, Uzay Kaymak, Franciska De Jong, and Emiel Caron. 2016. A survey of event extraction methods from text for decision support systems. *Decision Support Systems*, 85:12–
22.
Yu Hong, Jianfeng Zhang, Bin Ma, Jianmin Yao, Guodong Zhou, and Qiaoming Zhu. 2011. Using cross-entity inference to improve event extraction. In Proceedings of ACL-HLT, pages 1127–1136.
Andrew Hsi, Yiming Yang, Jaime Carbonell, and Ruochen Xu. 2016. Leveraging multilingual training for limited resource event extraction. In Proceedings of COLING, pages 1201–1210.
I-Hung Hsu, Kuan-Hao Huang, Elizabeth Boschee, Scott Miller, Prem Natarajan, Kai-Wei Chang, and Nanyun Peng. 2022. DEGREE: A data-efficient generation-based event extraction model. In *Proceedings of NAACL-HLT*, pages 1890–1908.
Kuan-Hao Huang, I-Hung Hsu, Prem Natarajan, KaiWei Chang, and Nanyun Peng. 2022. Multilingual generative language models for zero-shot crosslingual event argument extraction. In *Proceedings of* ACL, pages 4633–4646.
Kung-Hsiang Huang, Mu Yang, and Nanyun Peng.
2020a. Biomedical event extraction with hierarchical knowledge graphs. In *Findings of EMNLP*, pages 1277–1285.
Lifu Huang, Taylor Cassidy, Xiaocheng Feng, Heng Ji, Clare R. Voss, Jiawei Han, and Avirup Sil. 2016. Liberal Event Extraction and Event Schema Induction.
In *Proceedings of ACL*, pages 258–268.
Lifu Huang and Heng Ji. 2020. Semi-supervised New Event Type Induction and Event Detection. In *Proceedings of EMNLP*, pages 718–724.
Peixin Huang, Xiang Zhao, Ryuichi Takanobu, Zhen Tan, and Weidong Xiao. 2020b. Joint event extraction with hierarchical policy network. In *Proceedings* of COLING, pages 2653–2664.
Yusheng Huang and Weijia Jia. 2021. Exploring sentence community for document-level event extraction.
In *Findings of EMNLP*, pages 340–351.
Ander Intxaurrondo, Eneko Agirre, Oier Lopez de Lacalle, and Mihai Surdeanu. 2015. Diamonds in the rough: Event extraction from imperfect microblog data. In *Proceedings of NAACL-HLT*, pages 641–
650.
Abhyuday N Jagannatha and Hong Yu. 2016. Bidirectional RNN for medical event detection in electronic health records. In *Proceedings of NAACL-HLT*,
pages 473–482.
Heng Ji and Ralph Grishman. 2008. Refining event extraction through cross-document inference. In *Proceedings of ACL*, pages 254–262.
Heng Ji and Ralph Grishman. 2011. Knowledge Base Population: Successful Approaches and Challenges.
In *Proceedings of ACL*, pages 1148–1158.
Alex Judea and Michael Strube. 2016. Incremental global event extraction. In *Proceedings of COLING*, pages 2279–2289.
John D. Lafferty, Andrew McCallum, and Fernando C. N. Pereira. 2001. Conditional random fields:
Probabilistic models for segmenting and labeling sequence data. In *Proceedings of ICML*, pages 282–
289.
Viet Lai, Franck Dernoncourt, and Thien Huu Nguyen.
2021. Learning prototype representations across fewshot tasks for event detection. In Proceedings of EMNLP, pages 5270–5277.
Viet Dac Lai, Tuan Ngo Nguyen, and Thien Huu Nguyen. 2020. Event Detection: Gate Diversity and Syntactic Importance Scores for Graph Convolution Neural Networks. In *Proceedings of EMNLP*, pages 5405–5411.
Kenton Lee, Yoav Artzi, Yejin Choi, and Luke Zettlemoyer. 2015. Event detection and factuality assessment with non-expert supervision. In *Proceedings of* EMNLP, pages 1643–1648.
Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020.
BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In *Proceedings of ACL*, pages 7871–
7880.
Diya Li, Lifu Huang, Heng Ji, and Jiawei Han. 2019.
Biomedical event extraction based on knowledgedriven tree-LSTM. In *Proceedings of NAACL-HLT*,
pages 1421–1430.
Fayuan Li, Weihua Peng, Yuguang Chen, Quan Wang, Lu Pan, Yajuan Lyu, and Yong Zhu. 2020a. Event extraction as multi-turn question answering. In *Findings of EMNLP*, pages 829–838.
Haochen Li, Tong Mo, Hongcheng Fan, Jingkun Wang, Jiaxi Wang, Fuhao Zhang, and Weiping Li. 2022a.
KiPT: Knowledge-injected prompt tuning for event detection. In *Proceedings of COLING*, pages 1943–
1952.
Qi Li, Heng Ji, and Liang Huang. 2013. Joint event extraction via structured prediction with global features.
In *Proceedings of ACL*, pages 73–82.
Rui Li, Wenlin Zhao, Cheng Yang, and Sen Su. 2021a.
Treasures outside contexts: Improving event detection via global statistics. In *Proceedings of EMNLP*,
pages 2625–2635.
Sha Li, Heng Ji, and Jiawei Han. 2021b. Documentlevel event argument extraction by conditional generation. In *Proceedings of NAACL-HLT*, pages 894–
908.
Xinyu Li, Fayuan Li, Lu Pan, Yuguang Chen, Weihua Peng, Quan Wang, Yajuan Lyu, and Yong Zhu.
2020b. Duee: A large-scale dataset for chinese event extraction in real-world scenarios. In *Proceedings of* NLPCC, volume 12431 of Lecture Notes in Computer Science, pages 534–545.
Zhongqiu Li, Yu Hong, Jie Wang, Shiming He, Jianmin Yao, and Guodong Zhou. 2022b. Unregulated Chinese-to-English data expansion does NOT work for neural event detection. In *Proceedings of COLING*, pages 2633–2638.
Hongyu Lin, Yaojie Lu, Xianpei Han, and Le Sun. 2019.
Cost-sensitive regularization for label confusionaware event detection. In *Proceedings of ACL*, pages 5278–5283.
Ying Lin, Heng Ji, Fei Huang, and Lingfei Wu. 2020.
A joint neural model for information extraction with global features. In *Proceedings of ACL*, pages 7999–
8009.
Anan Liu, Ning Xu, and Haozhe Liu. 2021a. Selfattention graph residual convolutional networks for event detection with dependency relations. In *Findings of EMNLP*, pages 302–311.
Jian Liu, Yubo Chen, Kang Liu, Wei Bi, and Xiaojiang Liu. 2020a. Event Extraction as Machine Reading Comprehension. In *Proceedings of EMNLP*, pages 1641–1651.
Jian Liu, Yubo Chen, Kang Liu, Yantao Jia, and Zhicheng Sheng. 2020b. How does context matter? On the robustness of event detection with context-selective mask generalization. In *Findings of* EMNLP, pages 2523–2532.
Jian Liu, Yubo Chen, Kang Liu, and Jun Zhao. 2019a.
Neural cross-lingual event detection with minimal parallel resources. In *Proceedings of EMNLPIJCNLP*, pages 738–748.
Jian Liu, Yufeng Chen, and Jinan Xu. 2021b. Machine reading comprehension as data augmentation: A case study on implicit event argument extraction. In *Proceedings of EMNLP*, pages 2716–2725.
Jian Liu, Yufeng Chen, and Jinan Xu. 2022a. Saliency as evidence: Event detection with trigger saliency attribution. In *Proceedings of ACL*, pages 4573–4585.
Minqian Liu, Shiyu Chang, and Lifu Huang. 2022b.
Incremental prompting: Episodic memory prompt for lifelong event detection. In *Proceedings of COLING*,
pages 2157–2165.
Shaobo Liu, Rui Cheng, Xiaoming Yu, and Xueqi Cheng. 2018a. Exploiting contextual information via dynamic memory network for event detection. In Proceedings of EMNLP, pages 1030–1035.
Shulin Liu, Yubo Chen, Shizhu He, Kang Liu, and Jun Zhao. 2016. Leveraging FrameNet to improve automatic event detection. In *Proceedings of ACL*, pages 2134–2143.
Shulin Liu, Yubo Chen, Kang Liu, and Jun Zhao. 2017.
Exploiting Argument Information to Improve Event Detection via Supervised Attention Mechanisms. In Proceedings of ACL, pages 1789–1798.
Shulin Liu, Yang Li, Feng Zhang, Tao Yang, and Xinpeng Zhou. 2019b. Event detection without triggers.
In *Proceedings of NAACL-HLT*, pages 735–744.
Xiao Liu, Heyan Huang, Ge Shi, and Bo Wang. 2022c.
Dynamic prefix-tuning for generative template-based event extraction. In *Proceedings of ACL*, pages 5216–
5228.
Xiao Liu, Zhunchen Luo, and Heyan Huang. 2018b.
Jointly multiple events extraction via attention-based graph information aggregation. In *Proceedings of* EMNLP, pages 1247–1256.
Dongfang Lou, Zhilin Liao, Shumin Deng, Ningyu Zhang, and Huajun Chen. 2021. MLBiNet: A crosssentence collective event detection network. In *Proceedings of ACL-IJCNLP*, pages 4829–4839.
Weiyi Lu and Thien Huu Nguyen. 2018. Similar but not the same: Word sense disambiguation improves event detection via neural representation matching.
In *Proceedings of EMNLP*, pages 4822–4828.
Yaojie Lu, Hongyu Lin, Xianpei Han, and Le Sun. 2019.
Distilling discrimination and generalization knowledge for event detection via delta-representation learning. In *Proceedings of ACL*, pages 4366–4376.
Yaojie Lu, Hongyu Lin, Jin Xu, Xianpei Han, Jialong Tang, Annan Li, Le Sun, Meng Liao, and Shaoyi Chen. 2021. Text2Event: Controllable sequence-tostructure generation for end-to-end event extraction.
In *Proceedings of ACL-IJCNLP*, pages 2795–2806.
Qing Lyu, Hongming Zhang, Elior Sulem, and Dan Roth. 2021. Zero-shot event extraction via transfer learning: Challenges and insights. In Proceedings of ACL-IJCNLP, pages 322–332.
Jie Ma, Shuai Wang, Rishita Anubhai, Miguel Ballesteros, and Yaser Al-Onaizan. 2020. Resourceenhanced neural model for event argument extraction.
In *Findings of EMNLP*, pages 3554–3559.
Yubo Ma, Zehao Wang, Yixin Cao, Mukai Li, Meiqi Chen, Kun Wang, and Jing Shao. 2022. Prompt for extraction? PAIE: Prompting argument interaction for event argument extraction. In *Proceedings of* ACL, pages 6759–6774.
Hieu Man Duc Trong, Duc Trong Le, Amir Pouran Ben Veyseh, Thuat Nguyen, and Thien Huu Nguyen.
2020. Introducing a new dataset for event detection in cybersecurity texts. In *Proceedings of EMNLP*,
pages 5381–5390.
Jiaxin Mi, Po Hu, and Peng Li. 2022. Event detection with dual relational graph attention networks. In Proceedings of COLING, pages 1979–1989.
Aakanksha Naik and Carolyn Rose. 2020. Towards open domain event trigger identification using adversarial domain adaptation. In *Proceedings of ACL*,
pages 7618–7624.
Nghia Ngo Trung, Duy Phung, and Thien Huu Nguyen.
2021. Unsupervised domain adaptation for event detection using domain-specific adapters. In *Findings* of ACL-IJCNLP, pages 4015–4025.
Minh Van Nguyen, Bonan Min, Franck Dernoncourt, and Thien Nguyen. 2022. Joint extraction of entities, relations, and events via modeling inter-instance and inter-label dependencies. In *Proceedings of NAACLHLT*, pages 4363–4374.
Minh Van Nguyen, Tuan Ngo Nguyen, Bonan Min, and Thien Huu Nguyen. 2021. Crosslingual transfer learning for relation and event extraction via word category and class alignments. In Proceedings of EMNLP, pages 5414–5426.
Thien Nguyen and Ralph Grishman. 2018. Graph convolutional networks with argument-aware pooling for event detection. In *Proceedings of AAAI*, pages 5900–5907.
Thien Huu Nguyen, Kyunghyun Cho, and Ralph Grishman. 2016. Joint event extraction via recurrent neural networks. In *Proceedings of NAACL-HLT*,
pages 300–309.
Thien Huu Nguyen and Ralph Grishman. 2015. Event Detection and Domain Adaptation with Convolutional Neural Networks. In *Proceedings of ACL*,
pages 365–371.
Thien Huu Nguyen and Ralph Grishman. 2016. Modeling skip-grams for event detection with convolutional neural networks. In *Proceedings of EMNLP*, pages 886–891.
Walker Orr, Prasad Tadepalli, and Xiaoli Fern. 2018.
Event detection with neural networks: A rigorous empirical evaluation. In *Proceedings of EMNLP*,
pages 999–1004.
Haoruo Peng, Yangqiu Song, and Dan Roth. 2016.
Event detection and co-reference with minimal supervision. In *Proceedings of EMNLP*, pages 392–402.
Amir Pouran Ben Veyseh, Viet Lai, Franck Dernoncourt, and Thien Huu Nguyen. 2021a. Unleash GPT2 power for event detection. In *Proceedings of ACLIJCNLP*, pages 6271–6282.
Amir Pouran Ben Veyseh, Minh Van Nguyen, Franck Dernoncourt, Bonan Min, and Thien Nguyen. 2022.
Document-level event argument extraction via optimal transport. In *Findings of ACL*, pages 1648–1658.
Amir Pouran Ben Veyseh, Minh Van Nguyen, Nghia Ngo Trung, Bonan Min, and Thien Huu Nguyen.
2021b. Modeling document-level context for event detection via important context selection. In *Proceedings of EMNLP*, pages 5403–5413.
Amir Pouran Ben Veyseh, Tuan Ngo Nguyen, and Thien Huu Nguyen. 2020. Graph Transformer Networks with Syntactic and Semantic Structures for Event Argument Extraction. In *Findings of EMNLP*, pages 3651–3661.
Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. *Journal of Machine Learning Research*,
21(140):1–67.
Alan Ramponi, Rob van der Goot, Rosario Lombardo, and Barbara Plank. 2020. Biomedical event extraction as sequence labeling. In *Proceedings of EMNLP*,
pages 5357–5367.
Lance Ramshaw and Mitch Marcus. 1995. Text chunking using transformation-based learning. In *Third* Workshop on Very Large Corpora.
Yubing Ren, Yanan Cao, Fang Fang, Ping Guo, Zheng Lin, Wei Ma, and Yi Liu. 2022. CLIO: Roleinteractive multi-event head attention network for document-level event extraction. In Proceedings of COLING, pages 2504–2514.
Oscar Sainz, Itziar Gonzalez-Dios, Oier Lopez de Lacalle, Bonan Min, and Eneko Agirre. 2022. Textual entailment for event argument extraction: Zero- and few-shot with multi-source learning. In *Findings of* NAACL-HLT, pages 2439–2455.
Lei Sha, Jing Liu, Chin-Yew Lin, Sujian Li, Baobao Chang, and Zhifang Sui. 2016. RBPB: Regularization-based pattern balancing method for event extraction. In *Proceedings of ACL*, pages 1224–
1234.
Shirong Shen, Guilin Qi, Zhen Li, Sheng Bi, and Lusheng Wang. 2020. Hierarchical Chinese legal event extraction via pedal attention mechanism. In Proceedings of COLING, pages 100–113.
Shirong Shen, Tongtong Wu, Guilin Qi, Yuan-Fang Li, Gholamreza Haffari, and Sheng Bi. 2021. Adaptive knowledge-enhanced Bayesian meta-learning for few-shot event detection. In *Findings of ACLIJCNLP*, pages 2417–2429.
Jiawei Sheng, Shu Guo, Bowen Yu, Qian Li, Yiming Hei, Lihong Wang, Tingwen Liu, and Hongbo Xu.
2021. CasEE: A joint learning framework with cascade decoding for overlapping event extraction. In Findings of ACL-IJCNLP, pages 164–174.
Matthew Sims, Jong Ho Park, and David Bamman. 2019.
Literary event detection. In *Proceedings of ACL*,
pages 3623–3634.
Zhiyi Song, Ann Bies, Stephanie Strassel, Tom Riese, Justin Mott, Joe Ellis, Jonathan Wright, Seth Kulick, Neville Ryant, and Xiaoyi Ma. 2015. From light to rich ere: Annotation of entities, relations, and events.
In *Proceedings of the 3rd Workshop on EVENTS: Definition, Detection, Coreference, and Representation*,
pages 89–98.
Ananya Subburathinam, Di Lu, Heng Ji, Jonathan May, Shih-Fu Chang, Avirup Sil, and Clare Voss.
2019. Cross-lingual structure transfer for relation
and event extraction. In *Proceedings of EMNLPIJCNLP*, pages 313–325.
Meihan Tong, Bin Xu, Shuai Wang, Yixin Cao, Lei Hou, Juanzi Li, and Jun Xie. 2020. Improving event detection via open-domain trigger knowledge. In Proceedings of ACL, pages 5887–5897.
David Wadden, Ulme Wennberg, Yi Luan, and Hannaneh Hajishirzi. 2019. Entity, Relation, and Event Extraction with Contextualized Span Representations.
In *Proceedings of EMNLP-IJCNLP*, pages 5784–
5789.
Christopher Walker, Stephanie Strassel, Julie Medero, and Kazuaki Maeda. 2006. ACE 2005 multilingual training corpus. *Linguistic Data Consortium*, 57.
Sijia Wang, Mo Yu, Shiyu Chang, Lichao Sun, and Lifu Huang. 2022. Query and extract: Refining event extraction as type-oriented binary decoding. In *Findings of ACL*, pages 169–182.
Xiaozhi Wang, Xu Han, Zhiyuan Liu, Maosong Sun, and Peng Li. 2019a. Adversarial Training for Weakly Supervised Event Detection. In *Proceedings of* NAACL-HLT, pages 998–1008.
Xiaozhi Wang, Ziqi Wang, Xu Han, Wangyi Jiang, Rong Han, Zhiyuan Liu, Juanzi Li, Peng Li, Yankai Lin, and Jie Zhou. 2020. MAVEN: A Massive General Domain Event Detection Dataset. In *Proceedings of* EMNLP, pages 1652–1671.
Xiaozhi Wang, Ziqi Wang, Xu Han, Zhiyuan Liu, Juanzi Li, Peng Li, Maosong Sun, Jie Zhou, and Xiang Ren. 2019b. HMEAE: Hierarchical Modular Event Argument Extraction. In *Proceedings of EMNLPIJCNLP*, pages 5777–5783.
Ziqi Wang, Xiaozhi Wang, Xu Han, Yankai Lin, Lei Hou, Zhiyuan Liu, Peng Li, Juanzi Li, and Jie Zhou.
2021. CLEVE: Contrastive Pre-training for Event Extraction. In *Proceedings of ACL-IJCNLP*, pages 6283–6297.
Kaiwen Wei, Xian Sun, Zequn Zhang, Jingyuan Zhang, Guo Zhi, and Li Jin. 2021. Trigger is not sufficient: Exploiting frame-aware knowledge for implicit event argument extraction. In *Proceedings of ACLIJCNLP*, pages 4672–4682.
Sam Wei, Igor Korostil, Joel Nothman, and Ben Hachey.
2017. English event detection with translated language features. In *Proceedings of ACL*, pages 293–
298.
Yinyi Wei, Shuaipeng Liu, Jianwei Lv, Xiangyu Xi, Hailei Yan, Wei Ye, Tong Mo, Fan Yang, and Guanglu Wan. 2022. DESED: Dialogue-based explanation for sentence-level event detection. In *Proceedings of* COLING, pages 2483–2493.
Dominik Wurzer, Victor Lavrenko, and Miles Osborne.
2015. Twitter-scale new event detection via k-term hashing. In *Proceedings of EMNLP*, pages 2584–
2589.
Xiangyu Xi, Wei Ye, Shikun Zhang, Quanxiu Wang, Huixing Jiang, and Wei Wu. 2021. Capturing event argument interaction via a bi-directional entity-level recurrent decoder. In *Proceedings of ACL-IJCNLP*,
pages 210–219.
Jianye Xie, Haotong Sun, Junsheng Zhou, Weiguang Qu, and Xinyu Dai. 2021. Event detection as graph parsing. In *Findings of ACL-IJCNLP*, pages 1630–
1640.
Runxin Xu, Tianyu Liu, Lei Li, and Baobao Chang.
2021. Document-level event extraction via heterogeneous graph-based interaction model with a tracker.
In *Proceedings of ACL-IJCNLP*, pages 3533–3546.
Runxin Xu, Peiyi Wang, Tianyu Liu, Shuang Zeng, Baobao Chang, and Zhifang Sui. 2022. A two-stream AMR-enhanced model for document-level event argument extraction. In *Proceedings of NAACL-HLT*,
pages 5025–5036.
Semih Yagcioglu, Mehmet Saygin Seyfioglu, Begum Citamak, Batuhan Bardak, Seren Guldamlasioglu, Azmi Yuksel, and Emin Islam Tatli. 2019. Detecting cybersecurity events from noisy short text. In Proceedings of NAACL-HLT, pages 1366–1372.
Haoran Yan, Xiaolong Jin, Xiangbin Meng, Jiafeng Guo, and Xueqi Cheng. 2019. Event Detection with Multi-Order Graph Convolution and Aggregated Attention. In *Proceedings of EMNLP-IJCNLP*, pages 5766–5770.
Bishan Yang and Tom M. Mitchell. 2016. Joint extraction of events and entities within a document context.
In *Proceedings of NAACL-HLT*, pages 289–299.
Hang Yang, Dianbo Sui, Yubo Chen, Kang Liu, Jun Zhao, and Taifeng Wang. 2021. Document-level event extraction via parallel prediction networks. In Proceedings of ACL-IJCNLP, pages 6298–6308.
Sen Yang, Dawei Feng, Linbo Qiao, Zhigang Kan, and Dongsheng Li. 2019. Exploring pre-trained language models for event extraction and generation. In *Proceedings of ACL*, pages 5284–5294.
Feng Yao, Chaojun Xiao, Xiaozhi Wang, Zhiyuan Liu, Lei Hou, Cunchao Tu, Juanzi Li, Yun Liu, Weixing Shen, and Maosong Sun. 2022. LEVEN: A largescale chinese legal event detection dataset. In *Findings of ACL*, pages 183–201.
Pengfei Yu, Heng Ji, and Prem Natarajan. 2021. Lifelong event detection with knowledge transfer. In Proceedings of EMNLP, pages 5278–5290.
Qi Zeng, Qiusi Zhan, and Heng Ji. 2022. EA2E: Improving consistency with event awareness for documentlevel argument extraction. In *Findings of NAACL*,
pages 2649–2655.
Hongming Zhang, Xin Liu, Haojie Pan, Yangqiu Song, and Cane Wing-Ki Leung. 2020a. ASER: A largescale eventuality knowledge graph. In *Proceedings* of WWW, pages 201–211.
Hongming Zhang, Haoyu Wang, and Dan Roth. 2021.
Zero-shot Label-aware Event Trigger and Argument Classification. In *Findings of ACL-IJCNLP*, pages 1331–1340.
Senhui Zhang, Tao Ji, Wendi Ji, and Xiaoling Wang.
2022. Zero-shot event detection based on ordered contrastive learning and prompt-based prediction. In Findings of NAACL-HLT, pages 2572–2580.
Zhisong Zhang, Xiang Kong, Zhengzhong Liu, Xuezhe Ma, and Eduard Hovy. 2020b. A two-step approach for implicit event argument detection. In *Proceedings* of ACL, pages 7479–7485.
Zixuan Zhang and Heng Ji. 2021. Abstract Meaning Representation guided graph encoding and decoding for joint information extraction. In *Proceedings of* NAACL-HLT, pages 39–49.
Shun Zheng, Wei Cao, Wei Xu, and Jiang Bian. 2019.
Doc2EDAG: An end-to-end document-level framework for Chinese financial event extraction. In *Proceedings of EMNLP-IJCNLP*, pages 337–346.
Hanzhang Zhou and Kezhi Mao. 2022. Document-level event argument extraction by leveraging redundant information and closed boundary loss. In *Proceedings* of NAACL-HLT, pages 3041–3052.
Jie Zhou, Qi Zhang, Qin Chen, Qi Zhang, Liang He, and Xuanjing Huang. 2022. A multi-format transfer learning model for event argument extraction via variational information bottleneck. In *Proceedings* of COLING, pages 1990–2000.
Yang Zhou, Yubo Chen, Jun Zhao, Yin Wu, Jiexin Xu, and Jinlong Li. 2021. What the role is vs. what plays the role: Semi-supervised event argument extraction via dual question answering. In *Proceedings of AAAI*, volume 35, pages 14638–14646.
## Appendices A Experimental Details
The section introduces the experimental details in the paper, including the data preprocessing details (appendix A.1), the reproduction details
(appendix A.2), and the training details (appendix A.3).
## A.1 Data Preprocessing Details
The section introduces the details of the three data preprocessing scripts for ACE 2005: ACE-DYGIE,
ACE-OneIE, and ACE-Full.
ACE-DYGIE We adopt the released official codes12 provided by Wadden et al. (2019) as the ACE-DYGIE preprocessing script. Specifically, we adopt the widely-used "default-settings" in the codes to preprocess ACE 2005. ACE-DYGIE uses spaCy13 for sentence segmentation and tokenization. The version of spaCy is 2.0.18, and the used spaCy model is en_core_web_sm.
ACE-OneIE We adopt the released official codes14 provided by Lin et al. (2020) as the ACE-OneIE preprocessing script. ACE-OneIE uses NLTK15 for sentence segmentation and tokenization , and the version of NLTK is 3.5.
ACE-Full We adopt the released official codes16 provided by Wang et al. (2019b) as the ACE-Full preprocessing script. ACE-Full uses the Stanford CoreNLP17 toolkit for sentence segmentation and tokenization, and the version of CoreNLP is 4.4.0.
## A.2 Reproduction Details
In this section, we introduce the reproduction details of all the reproduced models and provide some explanations for the results' differences between our reproduction and the originally reported results.
All the reproduction experiments adopt their original evaluation settings, respectively. The number of parameters for each reproduced model is shown in Table 8.
| Model | #Paramter |
|------------|-------------|
| DMCNN | 2M |
| DMBERT | 110M |
| CLEVE | 354M |
| BiLSTM+CRF | 37M |
| BERT+CRF | 110M |
| EEQA | 110M |
| PAIE | 406M |
| Text2Event | 770M |
DMCNN Our DMCNN implementation is mainly based on the codes18 provided by Wang et al. (2020). The reproduced ED F1 score (67.2) is similar to the reported result (69.1) in the original paper (Chen et al., 2015) on the ACE 2005 dataset.
However, there is a gap between our reproduced and the originally reported EAE F1 scores (43.2 vs.
53.5). A possible reason is that Chen et al. (2015)
adopts a different EAE evaluation setting: Only the argument annotations of the predicted triggers are included in the metric calculation, while the argument annotations of the false negative trigger predictions are discarded. This setting is also adopted in some other early works like DMBERT (Wang et al., 2019b), and we call it "legacy setting". Compared to the common evaluation setting now, which includes all the argument annotations, the recall scores under the legacy setting are typically higher.
When re-evaluating our reproduced DMCNN under the legacy setting, the EAE F1 score (53.9) is consistent with the originally reported result (53.5).
DMBERT Our DMBERT implementation is mainly based on the codes18 provided by (Wang et al., 2020). The reproduced ED F1 score (74.5) is consistent with the originally reported result
(74.3) on the ACE 2005 dataset. However, similar to the DMCNN case introduced in the last paragraph, the reproduced EAE F1 score (54.8) is lower than the originally reported result (57.2 in Wang et al. (2019b)) due to the "legacy setting". When re-evaluating the reproduced DMBERT under the legacy setting, the EAE F1 score is 60.6.
CLEVE We download the pre-trained CLEVE
checkpoint19 and finetune it on ACE 2005. The reproduced F1 scores of ED (78.3) and EAE (61.0)
are basically consistent with the originally reported ED (79.8) and EAE (61.1) results.
18https://github.com/THU-KEG/MAVEN-dataset 19https://github.com/THU-KEG/CLEVE
BiLSTM+CRF We implement BiLSTM+CRF
based on the codes18 provided by Wang et al.
(2020). The reproduced ED F1 score (75.5) is similar to the reported result (75.4) in the original paper (Wang et al., 2020) on ACE 2005. As there is no work using BiLSTM+CRF to perform EAE,
we adopt all the settings used in ED and evaluate the EAE performance of BiLSTM+CRF.
BERT+CRF We implement BERT+CRF based on the codes18 provided by Wang et al. (2020). The reproduced ED F1 score (72.1) is similar to the reported result (74.1) in the original paper (Wang et al., 2020) on ACE 2005. As there is no work using BERT+CRF to perform EAE, we implement its EAE model following all the ED settings.
EEQA We implement EEQA (Du and Cardie, 2020b) based on the released official codes20.
When directly running the released code, we get the F1 score of 69.0 for ED and 47.3 for EAE,
which are consistent with our finally reproduced ED (69.5) and EAE (47.4) results. However, there is still a gap between the reproduced and the originally reported results, which is also mentioned in several GitHub issues21.
PAIE We implement PAIE (Ma et al., 2022)
based on the released official codes22 and evaluate it in different evaluation settings. The reproduced EAE F1 score (71.8) is basically consistent with that reported in the original paper (72.7).
Text2Event We adopt the released official codes23 to re-evaluate Text2Event (Lu et al., 2021)
in different settings. There are minor differences between the reproduced F1 results and the originally reported results (ED: 69.5 vs. 71.9, EAE:
50.8 vs. 53.8). We think the differences come from randomness. When only using the same random seed reported by the authors, the reproduction results are nearly the same as the original results.
## A.3 Training Details
We run three random trials for all the experiments using three different seeds (seed=0, seed=1, seed=2). The final reported results are the average results over the three random trials. All hyperparameters are the same as those used in the original papers. The experiments of CLEVE, PAIE, and Text2Event are run on Nvidia A100 GPUs, which consume about 600 GPU hours. The other experiments are run on Nvidia GeForce RTX 3090 GPUs, which consume about 100 GPU hours.
## B Additional Experimental Results
The section shows additional experimental results on different preprocessed ACE 2005 datasets.
Output Space Discrepancy Table 9 shows the metrics' differences with and without output standardization on the ACE-DYGIE and ACE-Full preprocessed datasets. We can observe that all evaluation metrics change obviously, which is consistent with the observations in § 3.3.
Absence of Pipeline Evaluation Table 10 shows the results using gold trigger evaluation and pipeline evaluation on the ACE-DYGIE and ACE-Full preprocessed datasets. We can observe that the phenomena are consistent with those in
§ 3.4.
Consistent Evaluation Framework Table 11 shows the results using our consistent evaluation on ACE-DYGIE, ACE-OneIE, and ACE-Full. We can observe that the phenomena on ACE-DYGIE and ACE-OneIE are consistent with those in § 4.4.
## C Papers For Meta-Analysis
The complete list of papers surveyed in our metaanalysis is shown in Table 12.
## D Authors' Contribution
Hao Peng, Feng Yao, and Kaisheng Zeng conducted the empirical experiments. Feng Yao conducted the meta-analyses. Xiaozhi Wang, Hao Peng, and Feng Yao wrote the paper. Xiaozhi Wang designed the project. Lei Hou, Juanzi Li, Zhiyuan Liu, and Weixing Shen advised the project. All authors participated in the discussion.
| ACE-DYGIE | ACE-Full | | | | | | | | | | | |
|-------------|------------|------|------|------|------|------|------|------|------|-------|-------|------|
| ED | EAE | ED | EAE | | | | | | | | | |
| Metric | ∆P | ∆R | ∆F1 | ∆P | ∆R | ∆F1 | ∆P | ∆R | ∆F1 | ∆P | ∆R | ∆F1 |
| BiLSTM+CRF | +0.4 | +0.0 | +0.2 | +6.7 | +0.2 | +3.7 | +1.9 | −0.2 | +1.0 | +15.7 | +0.1 | +7.4 |
| BERT+CRF | +0.6 | −0.2 | +0.2 | +5.2 | −0.1 | +2.9 | +2.5 | −0.2 | +1.3 | +14.1 | −0.4 | +6.1 |
| EEQA | +0.0 | +0.0 | +0.0 | −0.7 | −1.2 | −1.0 | −0.3 | +1.3 | +0.4 | +19.3 | −15.3 | −6.9 |
| PAIE | N/A | N/A | N/A | +4.4 | −0.5 | +1.9 | N/A | N/A | N/A | +21.9 | −1.6 | +8.4 |
| Text2Event | +0.3 | +0.0 | +0.1 | +0.5 | −2.5 | −0.9 | +2.0 | +0.0 | +1.0 | +6.2 | −3.7 | +0.1 |
| ACE-DYGIE | ACE-Full | | | | | |
|-------------|------------|-----------|----------|-------|-----------|----------|
| Metric | ED F1 | Gold Tri. | Pipeline | ED F1 | Gold Tri. | Pipeline |
| EAE F1 | EAE F1 | EAE F1 | EAE F1 | | | |
| DMCNN | 62.5 | 50.1 | 34.0 | 67.2 | 61.8 | 43.2 |
| DMBERT | 68.3 | 67.3 | 48.1 | 74.5 | 73.1 | 54.8 |
| CLEVE | 72.9 | 71.4 | 54.8 | 78.3 | 76.2 | 61.0 |
| BiLSTM+CRF | 72.0 | 45.2 | 36.2 | 76.5 | 46.2 | 36.9 |
| BERT+CRF | 68.1 | 64.1 | 47.8 | 73.4 | 64.5 | 48.6 |
| EEQA | 69.5 | 63.5 | 46.4 | 73.6 | 46.1 | 36.4 |
| PAIE | 72.9 | 73.8 | 56.5 | 78.3 | 65.0 | 52.7 |
| ED | EAE | | | | | |
|------------|-------------|-------------|-------------|-------------|-------------|-------------|
| Metric | P | R | F1 | P | R | F1 |
| ACE-DYGIE | | | | | | |
| DMCNN | 58.6 ± 2.28 | 67.0 ± 0.88 | 62.5 ± 1.08 | 38.6 ± 1.58 | 30.4 ± 0.99 | 34.0 ± 1.20 |
| DMBERT | 66.4 ± 0.69 | 70.2 ± 0.73 | 68.3 ± 0.43 | 45.6 ± 1.46 | 51.0 ± 0.87 | 48.1 ± 0.91 |
| CLEVE | 70.7 ± 0.87 | 75.3 ± 0.82 | 72.9 ± 0.53 | 52.2 ± 1.47 | 57.6 ± 1.40 | 54.8 ± 1.26 |
| BiLSTM+CRF | 68.5 ± 1.27 | 75.8 ± 2.28 | 72.0 ± 0.99 | 36.4 ± 1.21 | 36.1 ± 0.37 | 36.2 ± 0.55 |
| BERT+CRF | 64.0 ± 1.94 | 72.8 ± 1.57 | 68.1 ± 1.01 | 46.3 ± 1.35 | 49.5 ± 2.04 | 47.8 ± 0.70 |
| EEQA | 65.3 ± 3.46 | 74.5 ± 1.22 | 69.5 ± 1.41 | 49.0 ± 3.88 | 44.3 ± 1.30 | 46.4 ± 1.06 |
| PAIE | N/A | N/A | N/A | 56.5 ± 0.49 | 56.5 ± 1.28 | 56.5 ± 0.87 |
| Text2Event | 67.2 ± 0.82 | 72.4 ± 0.62 | 69.7 ± 0.72 | 48.5 ± 2.60 | 51.6 ± 1.04 | 50.0 ± 0.89 |
| ACE-OneIE | | | | | | |
| DMCNN | 61.5 ± 2.66 | 64.5 ± 2.86 | 62.8 ± 0.40 | 36.7 ± 2.48 | 34.1 ± 1.88 | 35.2 ± 0.22 |
| DMBERT | 64.4 ± 2.89 | 75.4 ± 3.21 | 69.4 ± 1.36 | 41.5 ± 1.84 | 54.7 ± 1.42 | 47.2 ± 0.99 |
| CLEVE | 72.3 ± 1.86 | 78.0 ± 0.91 | 75.0 ± 0.81 | 52.1 ± 1.99 | 57.6 ± 0.47 | 54.7 ± 1.31 |
| BiLSTM+CRF | 73.0 ± 1.55 | 71.8 ± 0.11 | 72.4 ± 0.82 | 37.0 ± 2.33 | 33.1 ± 1.01 | 34.9 ± 1.56 |
| BERT+CRF | 69.6 ± 4.08 | 69.2 ± 4.23 | 69.2 ± 1.18 | 48.9 ± 3.25 | 45.5 ± 2.75 | 47.1 ± 1.04 |
| EEQA | 66.7 ± 1.73 | 71.8 ± 2.51 | 69.1 ± 0.28 | 50.1 ± 1.73 | 41.0 ± 1.92 | 45.0 ± 0.70 |
| PAIE | N/A | N/A | N/A | 56.1 ± 0.30 | 57.4 ± 0.55 | 56.7 ± 0.29 |
| Text2Event | 71.4 ± 1.44 | 74.1 ± 1.77 | 72.7 ± 0.20 | 51.5 ± 1.46 | 51.6 ± 0.65 | 51.6 ± 0.99 |
| ACE-Full | | | | | | |
| DMCNN | 65.0 ± 3.33 | 69.7 ± 0.62 | 67.2 ± 1.53 | 45.3 ± 4.79 | 41.6 ± 1.93 | 43.2 ± 1.79 |
| DMBERT | 72.1 ± 0.80 | 77.1 ± 1.53 | 74.5 ± 0.85 | 50.5 ± 1.53 | 60.0 ± 1.82 | 54.8 ± 1.67 |
| CLEVE | 76.4 ± 2.49 | 80.4 ± 1.54 | 78.3 ± 2.03 | 56.9 ± 2.86 | 65.9 ± 2.06 | 61.0 ± 2.44 |
| BiLSTM+CRF | 74.2 ± 1.62 | 78.9 ± 0.45 | 76.5 ± 1.02 | 42.8 ± 1.20 | 32.4 ± 0.23 | 36.9 ± 0.60 |
| BERT+CRF | 72.4 ± 2.34 | 74.5 ± 1.23 | 73.4 ± 1.29 | 55.6 ± 1.51 | 43.2 ± 1.31 | 48.6 ± 0.96 |
| EEQA | 70.5 ± 2.93 | 77.3 ± 3.28 | 73.6 ± 0.38 | 65.8 ± 2.98 | 25.5 ± 4.68 | 36.4 ± 4.49 |
| PAIE | N/A | N/A | N/A | 61.4 ± 1.70 | 46.2 ± 0.64 | 52.7 ± 0.77 |
| Text2Event | 76.1 ± 0.25 | 74.5 ± 1.28 | 75.2 ± 0.68 | 59.6 ± 0.96 | 43.0 ± 1.49 | 50.0 ± 1.07 |
ACL Chen et al. (2015), Bronstein et al. (2015), Nguyen and Grishman (2015)
Sha et al. (2016), Huang et al. (2016) Ghaeini et al. (2016), Feng et al. (2016), Liu et al. (2016),
Wei et al. (2017), Liu et al. (2017), Chen et al. (2017),
Chan et al. (2019), Yang et al. (2019), Sims et al. (2019), Lu et al. (2019), Lin et al. (2019),
Lin et al. (2020), Naik and Rose (2020), Tong et al. (2020),
Du and Cardie (2020a), Zhang et al. (2020b),
Zhang et al. (2021), Lyu et al. (2021), Ngo Trung et al. (2021), Pouran Ben Veyseh et al. (2021a),
Lu et al. (2021), Deng et al. (2021), Lou et al. (2021), Cong et al. (2021), Xie et al. (2021), Wang et al. (2021), Sheng et al. (2021), Shen et al. (2021),
Xi et al. (2021), Wei et al. (2021), Yang et al. (2021), Xu et al. (2021),
Liu et al. (2022a), Wang et al. (2022), Liu et al. (2022c), Ma et al. (2022), Huang et al. (2022), Du et al. (2022), Pouran Ben Veyseh et al. (2022)
EMNLP
Wurzer et al. (2015), Lee et al. (2015),
Nguyen and Grishman (2016), Peng et al. (2016),
Lu and Nguyen (2018), Chen et al. (2018), Liu et al. (2018a), Orr et al. (2018), Liu et al. (2018b), Liu et al. (2019a), Ding et al. (2019), Yan et al. (2019), Wang et al. (2019b),
Espinosa et al. (2019), Wadden et al. (2019), Zheng et al. (2019), Subburathinam et al. (2019),
Du and Cardie (2020b), Huang and Ji (2020), Man Duc Trong et al. (2020), Cao et al. (2020), Liu et al. (2020b), Li et al. (2020a), Liu et al. (2020a), Lai et al. (2020), Cui et al. (2020),
Huang et al. (2020a), Ramponi et al. (2020), Ma et al. (2020), Pouran Ben Veyseh et al. (2020),
Li et al. (2021a), Liu et al. (2021a), Pouran Ben Veyseh et al. (2021b),Yu et al. (2021), Lai et al. (2021), Chen et al. (2021),Nguyen et al. (2021), Liu et al. (2021b), Huang and Jia (2021)
NAACL
Intxaurrondo et al. (2015),
Jagannatha and Yu (2016), Yang and Mitchell (2016), Nguyen et al. (2016),
Bekoulis et al. (2019), Liu et al. (2019b), Yagcioglu et al. (2019), Li et al. (2019), Wang et al. (2019a), Zhang and Ji (2021), Li et al. (2021b),
Zhang et al. (2022), Nguyen et al. (2022), Hsu et al. (2022), Guzman-Nateras et al. (2022),
Sainz et al. (2022), Zeng et al. (2022), Zhou and Mao (2022), Xu et al. (2022) COLING
Ge et al. (2016), Judea and Strube (2016), Hsi et al. (2016),
Araki and Mitamura (2018), Huang et al. (2020b), Shen et al. (2020), Li et al. (2022b), Ren et al. (2022), Wei et al. (2022), Liu et al. (2022b), Mi et al. (2022), Cao et al. (2022), Li et al. (2022a), Zhou et al. (2022)
Table 12: The complete list of papers for meta-analysis, categorized by venues and sorted by publication years.
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
✓ A1. Did you describe the limitations of your work?
In Section "Limitations"
✓ A2. Did you discuss any potential risks of your work?
In Section "Ethical Considerations"
✓ A3. Do the abstract and introduction summarize the paper's main claims?
In Section "Abstract" and "Introduction"
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✓ **Did You Use Or Create Scientific Artifacts?** In Section 3 And Section 4
✓ B1. Did you cite the creators of artifacts you used?
In Section 3 and appendix A
✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
In Section "Ethical Considerations"
✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
In Section "Ethical Considerations"
✓ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
In Section "Ethical Considerations"
✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
In Section 3 and Section "Ethical Considerations"
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
In Section 3
## C ✓ **Did You Run Computational Experiments?** In Section 3, Section 4, And Appendix B
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
In appendix A
The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance.
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
In appendix A
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
In appendix A and appendix B
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
In appendix A
D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
No response.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? No response.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
No response.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
No response.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
No response. |
sadler-etal-2023-yes | Yes, this Way! Learning to Ground Referring Expressions into Actions with Intra-episodic Feedback from Supportive Teachers | https://aclanthology.org/2023.findings-acl.587 | The ability to pick up on language signals in an ongoing interaction is crucial for future machine learning models to collaborate and interact with humans naturally. In this paper, we present an initial study that evaluates intra-episodic feedback given in a collaborative setting. We use a referential language game as a controllable example of a task-oriented collaborative joint activity. A teacher utters a referring expression generated by a well-known symbolic algorithm (the {``}Incremental Algorithm{''}) as an initial instruction and then monitors the follower{'}s actions to possibly intervene with intra-episodic feedback (which does not explicitly have to be requested). We frame this task as a reinforcement learning problem with sparse rewards and learn a follower policy for a heuristic teacher. Our results show that intra-episodic feedback allows the follower to generalize on aspects of scene complexity and performs better than providing only the initial statement. |
## Yes, This Way! Learning To Ground Referring Expressions Into Actions With Intra-Episodic Feedback From Supportive Teachers
Philipp Sadler1, **Sherzod Hakimov**1and **David Schlangen**1,2 1CoLabPotsdam / Computational Linguistics Department of Linguistics, University of Potsdam, Germany 2German Research Center for Artificial Intelligence (DFKI), Berlin, Germany [email protected]
## Abstract
![0_Image_0.Png](0_Image_0.Png)
The ability to pick up on language signals in an ongoing interaction is crucial for future machine learning models to collaborate and interact with humans naturally. In this paper, we present an initial study that evaluates intraepisodic feedback given in a collaborative setting. We use a referential language game as a controllable example of a task-oriented collaborative joint activity. A teacher utters a referring expression generated by a well-known symbolic algorithm (the "Incremental Algorithm")
as an initial instruction and then monitors the follower's actions to possibly intervene with intra-episodic feedback (which does not explicitly have to be requested). We frame this task as a reinforcement learning problem with sparse rewards and learn a follower policy for a heuristic teacher. Our results show that intra-episodic feedback allows the follower to generalize on aspects of scene complexity and performs better than providing only the initial statement.
## 1 Introduction
The communicative acts of humans in collaborative situations can be described as two parts of a joint act: signalling and recognizing. In such joint activities, these signals work as coordination devices to increment on the current common ground of the participants (Clark, 1996). The ability to act on these language signals is crucial for future machine learning models to naturally collaborate and interact with humans (Lemon, 2022; Fernández et al., 2011). Such a collaborative interaction with humans usually happens fluently, where one communicative act is performed after the other. The framework of reinforcement learning (RL) (Sutton and Barto, 2018) describes such mechanics where an agent is exposed in steps to observations of an environment with dynamic factors such as the position of objects or language expressions. The goal is that the agent learns to behave generally well in a particular environment solely based on the observations it makes and rewards it gets.
A key challenge here is the variability of expressions in language that can be said to the agent during an interaction. Even in relatively simple environments, there might arise an overwhelming amount of situations for an agent to handle (Chevalier-Boisvert et al., 2019). Recent work on collaborative agents focuses on large precollected datasets for imitation learning to learn agents in complex simulated visual environments
(Gao et al., 2022; Padmakumar et al., 2022; Pashevich et al., 2021) or frames the learning as a contextual bandit problem (Suhr and Artzi, 2022; Suhr et al., 2019). Nevertheless, other work has shown that intermediate language inputs are a valuable signal to improve the agent's learning performance in task-oriented visual environments (Co-Reyes et al.,
9228 2019; Mu et al., 2022).
In this paper, we present an initial study that evaluates a follower's learning success given a teacher's intra-episodic feedback in a collaborative setting.
We use a referential language game (in English) as a controllable example of a task-oriented collaborative joint activity (see Figure 1). In this game one player (the follower) is supposed to select a piece based on the another player's directives (the teacher). We assume a teacher that utters referring expressions as initial instructions and then responds to the follower's actions with intra-episodic feedback. We frame this as a RL problem with sparse rewards where the intermediate feedback is not part of the reward function but its potential usefulness is learnt by the follower alone.1
## 2 Related Work
Vision and language navigation. In vision and language navigation, an agent is given a natural language instruction which is to be understood to navigate to the correct goal location in a visually observed environment (Gu et al., 2022). The follower can usually ask an Oracle for further information, if necessary (Nguyen et al., 2019; Nguyen and III, 2019; Fried et al., 2018). We extend on this idea and aim for an ongoing interaction with corrections that loosens the turn-based paradigm by letting the Oracle choose when to speak as part of the environment. Hence, in our reference game, the language back-channel for the follower is cut, so that we force the follower to rely more on the visual observations for task success.
## Continual Learning From Human Feedback.
Suhr and Artzi (2022) let humans instruct the follower and then ask them to rate the agent's behaviour (thumbs up or down). This binary feedback is used for further training as the reward signal in a contextual bandit framework. They show that the agent improves over several interactions with humans. Similarly we evaluate the learning process in the context of RL because it imposes "weaker constraints on the regularity of the solution" (Nguyen et al., 2019), but take a broadly available, off-theshelf learning algorithm (Schulman et al., 2017)
to directly study the effects of different kinds of feedback. The feedback given to our agent is of natural language and not directly bound to the reward; the follower needs to learn the meaning of the language feedback itself.
Language-guided policy learning. ChevalierBoisvert et al. (2019) compared the sampling complexity of RL and imitation learning (IL) agents on various language-conditioned tasks. They proposed a 2-dimensional visual environment called Minigrid in which an agent is given a single mission statement that instructs the agent to achieve a specific state, e.g. "Take the red ball". In contrast to them we intentionally do not use IL approaches, because then the agent would have already learnt how to ground the language signals. We want to test if the agent can pick-up on the language from the interaction alone. For this, we similarly propose a diagnostic environment to directly control for the distributions of target objects (cf. skewed distribution of target objects in CVDN (Thomason et al., 2019)) and feedback signals.
Other work uses the *Minigrid* environment to propose a meta-training approach that improves the learning via natural language corrections, e.g.
"Pick up the green ball" (Co-Reyes et al., 2019).
The agent is given an episodic correction if a specific task cannot be solved. In this way, the agent must not only ground the mission statement but also ground the corrections into actions. Mu et al.
(2022) improve policy learning with intra-episodic natural language sub-goals e.g. "Pick up the ball".
These sub-goals are provided by a trained teacher policy when a previous sub-goal has been reached.
In contrast, we rather follow earlier work (Engonopoulos et al., 2013) on monitoring execution and use a heuristic teacher which provides intraepisodic language feedback whenever it appears feasible. The agent has to learn that certain pairs of feedback and behaviour at a specific time-step lead to the task's success and others to failure.
## 3 The Cogrip Environment
We use a Collaborative Game of Referential and Interactive language with Pentomino pieces as a controllable setting. A teacher instructs a follower to select a specific piece using a gripper. Both are constrained as follows: The teacher can provide utterances but cannot move the gripper. The follower can move the gripper but is not allowed to provide an utterance. This asymmetry in knowledge and skill forces them to work together and coordinate.
Zarrieß et al. (2016) found that this settings leads to diverse language use on the teacher's side.
![2_image_0.png](2_image_0.png)
## 3.1 Problem Formulation
The follower has to navigate a gripper to select a piece described by the teacher. We frame this task as a RL problem with sparse rewards. At each time-step t, given an observation ot ∈ O of the environment, the agent has to select an action at ∈ {LEFT, RIGHT, UP, DOWN, WAIT, GRIP}
such that the overall resulting sequence of actions (a0, ..., at*, ..., a*T ) maximizes the sparse reward R(oT ) = r. An episode ends when the GRIP action is chosen, and the gripper position gtis in the boundaries of a piece. An episode also ends when t reaches Tmax = 100. Following Chevalier-Boisvert et al. (2019), the reward function returns a basic reward minus the movement effort R = 1 − 0.9 ∗ (T /Tmax). We extend this formulation and give an additional bonus of +1 if the correct piece has been taken or a penalty of −1 when the wrong or no piece has been taken at all.
## 3.2 Environment
The environment exposes at each time-step t an observation otthat contains the gripper coordinates gt = (*x, y*), the initial referring expression lRE, the language feedback lFBt
(which might be empty)
and a partial view vt of the scene. While the scene as a whole is represented as a 2-dimensional image (with RGB colour channel), the partial view represents a 11 × 11-sized cut out, centered on the gripper position (see Figure 2). The teacher generates the initial and feedback statements.
## 3.3 Teacher
For the teacher, we assume a heuristic behaviour
(a fix policy) that has been shown to lead to collaborative success with humans (Götze et al., 2022)
and leave the complexity of learning in a multiagent setting (Gronauer and Diepold, 2022) for future work. The teacher produces an initial referring expression lRE = (w0*, ..., w*N ) where N
is the message length and wiis a word in the vocabulary. The production rule is implemented following the Incremental Algorithm (IA) (Dale and Reiter, 1995) that is given the symbolic representations of the pieces on the board (see Appendix A.1). The teacher provides a feedback message lFBt = (w0*, ..., w*N ) at a time-step t>0 when the gripper's position gt has exceeded a pre-defined distance threshold Ddist = 3 compared to the gripper's last position of feedback gFBlast or it is over a piece.
The generated feedback is of positive sentiment
("Yes this way/piece") when the gripper is then closer to or over the target piece and negative otherwise ("Not this direction/piece"). Alternatively, suppose the follower does not exceed the distance threshold after Dtime = 6 time-steps the feedback message is the same as the initial statement. Overall, the property values and sentence templates lead to a small vocabulary of 33 words.
## 3.4 Follower
The follower agent has to move the gripper and successfully grip a piece solely based on the observations. The observations ot = (vt, gt, lRE, lFBt)
are mapped to 128-dimensional features x˜t ∈ R
using the encoder model (see Figure 2). Following Chevalier-Boisvert et al. (2019), the word embeddings (which are learned from scratch) of the language inputs are fed through a Gated Recurrent Unit (GRU) (Cho et al., 2014) and then combined with the embedded visual features using a Featurewise Linear Modulation (FiLM) layer (Perez et al.,
2018). These language conditioned visual features are then max pooled, averaged and again averaged with the gripper position. Given the resulting features x˜t, we learn a parameterised policy π(˜xt; θ) ∼ atthat predicts a distribution over the action space. We use the Proximal Policy Optimization (PPO) (Schulman et al., 2017) implementation of *StableBaselines3* v1.6.2 (Raffin et al., 2021) to train the policy in our environment.
## 3.5 Tasks
The follower has to grip an intended target piece among several other pieces (the distractors). Thus a task is defined by the number of pieces, the target piece and the map size. The pieces for the tasks are instantiated from symbolic representations: a tuple of shape (9), color (6) and position (8) which leads to 432 possible piece symbols. For our experiments we use all of these symbols as targets, but split them into distinct sets (Appendidx A.4). Therefore the targets for testing tasks are distinct from the ones in the training tasks. We ensure the reproducibility of our experiments by constructing 3300 training, 300 validation, 720 testing tasks representing scenes with a map size of 20 × 20 and 4 or 8 pieces.
## 4 Experiments
In this section we explore the effects of the teacher's language and intra-episodic feedback on the follower's success and ask whether the follower generalizes on aspects of scene complexity.
## 4.1 Which Referential Language Is Most Beneficial For The Agent'S Learning Success?
As suggested by Madureira and Schlangen (2020)
we explore the question of which language is most effective. The IA constructs the initial reference by following a preference order over object properties (Krahmer et al., 2012). We hypothesize that a particular order might be more or less suitable depending on the task. Thus we conduct a series of experiments *without* the feedback signal where the preference order is varied as the permutation of color, shape and position. Our results indicate that such orders perform better that prioritize to mention positional attributes as distinguishing factors of the target piece (see Table 1). This is reasonable as the directional hint reduces the agent's burden for broader exploration. The follower is able to pick up early on these positional clues and performs overall better during training (see Figure 3).
## 4.2 **What Is The Agent'S Performance Gain With** Intra-Episodic Feedback In Our Setting?
We conduct the same experiments as above *with* intra-episodic language feedback to measure its
![3_image_0.png](3_image_0.png)
effect on the follower's success rate. Our results show that the follower achieves higher success rates with intra-episodic feedback among all preference orders (see Table 1). We also notice that the gain is higher for the low-performing preference orders.
This shows that the intra-episodic feedback is a valuable signal for the follower to overcome missing directives in the initial referring expressions.
The agent can learn strategies incorporating the feedback signals. This is an interesting finding because language feedback is not part of the reward function and could be empty.
## 4.3 Does Intra-Episodic Feedback Help The Agent To Generalize On Scene Complexity?
As a proxy for generalization capabilities, we take the best performing follower and raise the complexity of the *testing* scenes along two dimensions (i)
we increase the map size to 30 × 30 and (ii) put up to 18 pieces on the board. In addition, we hold out 72 combinations of piece shapes and colors that have never been seen during training. Our results show that the agent trained with intra-episodic feedback is able to perform better (i) on the larger map size, (ii) the higher number of pieces and (iii) the new target pieces compared to the one without (see Table 2).
## 5 Conclusion
In this work, we studied the effects of a teacher's language and intermediate interventions (the feedback) towards a learner's success and whether the learner generalizes on aspects of scene complexity.
Our results show that there is a most beneficial language for the teacher. Its intra-episodic feedback allows the learner to learn faster and generalize better than without intermediate help. An exciting direction for further work is to show the benefits of language feedback for other reinforcement learning problems, to overcome the limits of the heuristic teacher strategy and to reduce the need for feedback after successful training.
## 6 Limitations
Limits on visual variability and naturalness.
The Pentomino domain can only serve as an abstraction for referring expression generations in visual domains. The amount of objects is limited to 9 different shapes and the number of colors is reduced to 6 as well. The positions are chosen to be discrete and absolute while real-world references might include spatial relations. Furthermore, the pieces show no texture or naturalness, but are drawn with a solid color fill. We choose this simplified domain to focus on the interaction between the follower and the teacher and left the evaluation of the proposed models on more realistic looking scenes for further work. Nevertheless, we think our approach can also be applied to photo-realistic environments (Ramakrishnan et al., 2021; Kolve et al., 2017). Limits on variability of the referring expressions.
We only explored expressions that are generate by the Incremental Algorithm. Moreover, we choose a fix property value order (color is mentioned before shape is mentioned before position) for the realisation of the template's surface structure and left the exploration for a higher variability to further work.
Limits on variability of the feedback signal. In this work we used a heuristic teacher with a fixed behavior to provide the intermediate feedback to the follower. We choose this Oracle speaker for better control over the experiments and to focus on the research questions of which feedback is most helpful and how it should be presented (contain which information). We are aware that in natural interaction the teacher's responses might be more dynamic and can be potentially learnt in a much more complex multi-agent RL settings which would go beyond our focused contribution here. Still this is an interesting prospect for future research.
## 7 Ethics Statement
For now, we see no immediate threats regarding this work, because the experiments are performed in a controlled setting of an abstract domain. But since this research has collaborative agents in prospect people might use more advanced stages of this technique to train agents on possibly other tasks. Thus we encourage everyone to apply such a technology only for good use and to avoid harmful applications.
## Acknowledgements
We want to thank the anonymous reviewers for their comments. This work was funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) - 423217434 ("RECOLAGE") grant.
## References
Maxime Chevalier-Boisvert, Dzmitry Bahdanau, Salem Lahlou, Lucas Willems, Chitwan Saharia, Thien Huu Nguyen, and Yoshua Bengio. 2019. Babyai: A platform to study the sample efficiency of grounded language learning. In *7th International Conference on* Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019. OpenReview.net.
Kyunghyun Cho, Bart van Merriënboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using RNN encoder–decoder for statistical machine translation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1724–
1734, Doha, Qatar. Association for Computational Linguistics.
Herbert H. Clark. 1996. *Using Language*. 'Using' Linguistic Books. Cambridge University Press.
John D. Co-Reyes, Abhishek Gupta, Suvansh Sanjeev, Nick Altieri, Jacob Andreas, John DeNero, Pieter Abbeel, and Sergey Levine. 2019. Guiding policies with language via meta-learning. In *7th International* Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019. OpenReview.net.
Robert Dale and Ehud Reiter. 1995. Computational interpretations of the gricean maxims in the generation of referring expressions. *Cogn. Sci.*, 19(2):233–263.
Nikos Engonopoulos, Martin Villalba, Ivan Titov, and Alexander Koller. 2013. Predicting the resolution of referring expressions from user behavior. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, EMNLP 2013, 18-21 October 2013, Grand Hyatt Seattle, Seattle, Washington, USA, A meeting of SIGDAT, a Special Interest Group of the ACL, pages 1354–1359. ACL.
Raquel Fernández, Staffan Larsson, Robin Cooper, Jonathan Ginzburg, and David Schlangen. 2011. Reciprocal Learning via Dialogue Interaction: Challenges and Prospects. In Proceedings of the IJCAI
2011 Workshop on Agents Learning Interactively from Human Teachers (ALIHT 2011).
Daniel Fried, Ronghang Hu, Volkan Cirik, Anna Rohrbach, Jacob Andreas, Louis-Philippe Morency, Taylor Berg-Kirkpatrick, Kate Saenko, Dan Klein, and Trevor Darrell. 2018. Speaker-follower models for vision-and-language navigation. In *Advances* in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems 2018, NeurIPS 2018, December 3-8, 2018, Montréal, Canada, pages 3318–3329.
Xiaofeng Gao, Qiaozi Gao, Ran Gong, Kaixiang Lin, Govind Thattai, and Gaurav S. Sukhatme. 2022. Dialfred: Dialogue-enabled agents for embodied instruction following. *IEEE Robotics Autom. Lett.*,
7(4):10049–10056.
Sven Gronauer and Klaus Diepold. 2022. Multi-agent deep reinforcement learning: a survey. Artif. Intell.
Rev., 55(2):895–943.
Jing Gu, Eliana Stefani, Qi Wu, Jesse Thomason, and Xin Wang. 2022. Vision-and-language navigation: A survey of tasks, methods, and future directions.
In *Proceedings of the 60th Annual Meeting of the*
Association for Computational Linguistics (Volume 1: Long Papers), ACL 2022, Dublin, Ireland, May 22-27, 2022, pages 7606–7623. Association for Computational Linguistics.
Jana Götze, Karla Friedrichs, and David Schlangen.
2022. Interactive and Cooperative Delivery of Referring Expressions: A Comparison of Three Algorithms. In *Proceedings of the 26th Workshop on the* Semantics and Pragmatics of Dialogue - Full Papers, Virtually and at Dublin, Ireland. SEMDIAL.
Eric Kolve, Roozbeh Mottaghi, Daniel Gordon, Yuke Zhu, Abhinav Gupta, and Ali Farhadi. 2017. AI2-
THOR: an interactive 3d environment for visual AI.
CoRR, abs/1712.05474.
Emiel Krahmer, Ruud Koolen, and Mariët Theune. 2012.
Is it that difficult to find a good preference order for the incremental algorithm? *Cogn. Sci.*, 36(5):837–
841.
Oliver Lemon. 2022. Conversational grounding as natural language supervision - the need for divergent agent data. In *ACL Workshop on Learning with Natural Language Supervision*.
Brielen Madureira and David Schlangen. 2020. An overview of natural language state representation for reinforcement learning. In Proceedings of the ICML
Workshop on Language in Reinforcement Learning.
Jesse Mu, Victor Zhong, Roberta Raileanu, Minqi Jiang, Noah D. Goodman, Tim Rocktäschel, and Edward Grefenstette. 2022. Improving intrinsic exploration with language abstractions. *CoRR*, abs/2202.08938.
Khanh Nguyen, Debadeepta Dey, Chris Brockett, and Bill Dolan. 2019. Vision-based navigation with language-based assistance via imitation learning with indirect intervention. In IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2019, Long Beach, CA, USA, June 16-20, 2019, pages 12527–12537. Computer Vision Foundation / IEEE.
Khanh Nguyen and Hal Daumé III. 2019. Help, anna! visual navigation with natural multimodal assistance via retrospective curiosity-encouraging imitation learning. In *Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference* on Natural Language Processing, EMNLP-IJCNLP
2019, Hong Kong, China, November 3-7, 2019, pages 684–695. Association for Computational Linguistics.
Aishwarya Padmakumar, Jesse Thomason, Ayush Shrivastava, Patrick Lange, Anjali Narayan-Chen, Spandana Gella, Robinson Piramuthu, Gökhan Tür, and Dilek Hakkani-Tür. 2022. Teach: Task-driven embodied agents that chat. In *Thirty-Sixth AAAI Conference on Artificial Intelligence, AAAI 2022, ThirtyFourth Conference on Innovative Applications of Artificial Intelligence, IAAI 2022, The Twelveth Symposium on Educational Advances in Artificial Intelligence, EAAI 2022 Virtual Event, February 22*
- March 1, 2022, pages 2017–2025. AAAI Press.
Alexander Pashevich, Cordelia Schmid, and Chen Sun.
2021. Episodic transformer for vision-and-language navigation. In *2021 IEEE/CVF International Conference on Computer Vision, ICCV 2021, Montreal, QC,*
Canada, October 10-17, 2021, pages 15922–15932.
IEEE.
Ethan Perez, Florian Strub, Harm de Vries, Vincent Dumoulin, and Aaron C. Courville. 2018. Film: Visual reasoning with a general conditioning layer. In *Proceedings of the Thirty-Second AAAI Conference on* Artificial Intelligence, (AAAI-18), the 30th innovative Applications of Artificial Intelligence (IAAI-18), and the 8th AAAI Symposium on Educational Advances in Artificial Intelligence (EAAI-18), New Orleans, Louisiana, USA, February 2-7, 2018, pages 3942–
3951. AAAI Press.
Antonin Raffin, Ashley Hill, Adam Gleave, Anssi Kanervisto, Maximilian Ernestus, and Noah Dormann. 2021. Stable-baselines3: Reliable reinforcement learning implementations. Journal of Machine Learning Research, 22(268):1–8.
Santhosh Kumar Ramakrishnan, Aaron Gokaslan, Erik Wijmans, Oleksandr Maksymets, Alexander Clegg, John Turner, Eric Undersander, Wojciech Galuba, Andrew Westbury, Angel X. Chang, Manolis Savva, Yili Zhao, and Dhruv Batra. 2021. Habitat-matterport 3d dataset (HM3D): 1000 large-scale 3d environments for embodied AI. In *Proceedings of the Neural Information Processing Systems Track on Datasets and* Benchmarks 1, NeurIPS Datasets and Benchmarks 2021, December 2021, virtual.
John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. 2017. Proximal policy optimization algorithms. *CoRR*, abs/1707.06347.
Alane Suhr and Yoav Artzi. 2022. Continual learning for instruction following from realtime feedback.
CoRR, abs/2212.09710.
Alane Suhr, Claudia Yan, Jacob Schluger, Stanley Yu, Hadi Khader, Marwa Mouallem, Iris Zhang, and Yoav Artzi. 2019. Executing instructions in situated collaborative interactions. In *Proceedings of* the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, EMNLP-IJCNLP 2019, Hong Kong, China, November 3-7, 2019, pages 2119–2130. Association for Computational Linguistics.
Richard S. Sutton and Andrew G. Barto. 2018. *Reinforcement Learning: An Introduction*, second edition.
The MIT Press.
Jesse Thomason, Michael Murray, Maya Cakmak, and Luke Zettlemoyer. 2019. Vision-and-dialog navigation. *CoRR*, abs/1907.04957.
Kees van Deemter. 2016. *Computational Models of* Referring, chapter 4.6. The MIT Press.
Sina Zarrieß, Julian Hough, Casey Kennington, Ramesh Manuvinakurike, David DeVault, Raquel Fernández, and David Schlangen. 2016. PentoRef: A Corpus of Spoken References in Task-oriented Dialogues. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16),
pages 125–131, Portorož, Slovenia. European Language Resources Association (ELRA).
## A Appendix A.1 Teacher Details Hyperparameters
- Ddist = 3
- Dtime = 6
- preference_order
The distances between two coordinates (p1, p2) are calculated as the euclidean distance.
The Incremental Algorithm (IA) The Algorithm 1, in the formulation of (Dale and Reiter, 1995), is supposed to find the properties that uniquely identify an object among others given a preference over properties. To accomplish this the algorithm is given the property values P of distractors in M and of a referent r. Then the algorithm excludes distractors in several iterations until either M is empty or every property of r has been tested. During the exclusion process the algorithm computes the set of distractors that do not share a given property with the referent and stores the property in D. These properties in D are the ones that distinguish the referent from the others and thus will be returned.
The algorithm has a meta-parameter O, indicating the *preference order*, which determines the order in which the properties of the referent are tested against the distractors. In our domain, for example, when *color* is the most preferred property, the algorithm might return BLUE, if this property already excludes all distractors. When *shape* is the preferred property and all distractors do not share the shape T with the referent, T would be returned. Hence even when the referent and distractor pieces are the same, different preference orders might lead to different expressions.
Algorithm 1 The IA on symbolic properties as
based on the formulation by van Deemter (2016)
Require: A set of distractors M, a set of property
values P of a referent r and a linear preference
order O over the property values P
1: *D ← ∅*
2: for P in O(P) do
$$\begin{array}{r l}{T}&{{}\operatorname{im}{\mathcal{O}}(P)\,{\mathbf{d}}{\boldsymbol{\theta}}}\\ {{\mathcal{E}}\leftarrow\{m\in M:\neg P(m)\}}\\ {{\mathbf{if}}\,{\mathcal{E}}\neq\varnothing\,{\mathbf{then}}}\\ {{\mathrm{~Add~}}P{\mathrm{~to~}}{\mathcal{D}}}\\ {{\mathrm{~Remove~}}{\mathcal{E}}{\mathrm{~from~}}M}\end{array}$$
7: **return** D
Referring Expression Templates There are 3 expression templates that are used when only a single property value of the target piece is returned by the Incremental Algorithm (IA):
- *Take the [color] piece*
- *Take the [shape]* - *Take the piece at [position]*
Then there are 3 expression templates that are selected when two properties are returned:
- *Take the [color] [shape]* - *Take the [color] piece at [position]*
- *Take the [shape] at [position]*
And finally there is one expression templates that lists all property values to identify a target piece:
- *Take the [color] [shape] at [position]*
Feedback Expression Templates We use two templates to give positive or negative feedback on the direction of the follower
## - Yes This Way
- *Not this way* And we give a similar feedback when the follower is locating the gripper over a piece
- *Yes this piece* - *Not this piece* The vocabulary Overall, the property values and sentence templates lead to a small vocabulary of 33 words:
- 9 shapes: F, N, P, T, U, W, X, Y, Z
- 6 colors: red, yellow, green, blue, purple, brown
- 6 position words: left, right, top, bottom, center (which are combined to e.g., right center or top left)
- 8 template words: take, the, piece, at, yes, no, this, way
- 4 special words: <s>, <e>, <pad>, <unk>
The maximal sentence length is 11.
## A.2 Follower Details
Agent Parameters: 9, 456
| word_embedding_dim | 128 |
|-----------------------|-------|
| feature_embedding_dim | 128 |
| actor_layers | 2 |
| actor_dims | 128 |
| vf_layers | 2 |
| vf_dims | 128 |
The max-pooling layer additionally downsamples the language conditionaed visual features from 11 × 11 × 128 to 1 × 1 × 128 dimensions. For this we use the nn.AdaptiveMaxPool2d((1, 1))
layer from PyTorch v1.11.0. In addition, before we average the gripper coordinates features and the resulting language conditioned visual features, we apply a layer normalization (eps = 1e-5) on them.
Architecture Search We performed a little architecture search where we evaluated two methods for visual encoding (pixels, symbols), four methods for language encoding (word embeddings with GRU,one-hot word embeddings with GRU, one-hot sentence embeddings, pre-trained sentence embeddings) and two methods for the fusion (concatenate, FiLM). We found learnt word embeddings and FiLM perform best in regard of training speed and success rate. The visual encodings showed similar performance but we prefer the pixel encoder because it makes less assumptions about the world.
Learning Algorithm We apply a learning rate schedule that decreases the learning rate during training according to the training progress (based on the number of time steps) with p ∈ [0, 1], but the learning rate is given a lower bound αmin so that it never reaches zero: αt = max(p · αinit, αmin)
## A.3 Environment Details
Board The internal representation of the visual state is a 2-dimensional grid that spans W ×H tiles where W and H are defined by the map size. A
tile is either empty or holds an identifier for a piece
(the tile is then occupied). The pieces are defined by their colour, shape and coordinates and occupy five adjacent tiles (within a virtual box of 5 × 5 tiles). The pieces are not allowed to overlap with another piece's tiles. For a higher visual variation, we also apply rotations to pieces, but we ignore the rotation for expression generation, though this could be an extension of the task.
| lr_init | 2.5e-4 |
|----------------|----------|
| lr_min | 2.5e-5 |
| num_epochs | 8 |
| buffer_per_env | 1024 |
| clip_range | 0.2 |
| clip_range_vf | 0.2 |
| ent_coef | 0.01 |
| vf_coef | 0.5 |
| target_kl | 0.015 |
Name HEX RGB
red #ff0000 (255, 0, 0)
yellow #ffff00 (255, 255, 0) green #008000 (0, 128, 0)
blue #0000ff (0, 0, 255) purple #800080 (128, 0, 128)
brown #8b4513 (139, 69, 19)
Gripper The gripper can only move one position at a step and can move over pieces, but is not allowed to leave the boundaries of the board.
The gripper coordinates {(*x, y*) : x ∈ [0, W], y ∈
[0, H]} are projected to {(*x, y*) : *x, y* ∈ [−1, +1]}
so that the coordinate in the center is represented with (0, 0). This provides the agent with the necessary information about its positions on the overall board as its view field is shrinked to 11 × 11 tiles.
In addition, to provide the agent with a notion of velocity, the environment keeps track of the last two gripper positions and applies a grey with decreasing intensity to these positions on the board:
- $\operatorname{color}_{gt}=(200,200,200)$ - $\operatorname{color}_{gt-1}=(150,150,150)$ - $\operatorname{color}_{gt-2}=(100,100,100)$.
## A.4 Task Details
We created training, validation, test and holdout splits of target piece symbols (a combination of shape, color and position) for the task creation (see Table 6). We split these possible target piece symbols so that each subset still contains all colors, shapes and positions, but different combinations of them. For example, the training set might contain a "red F" but this is never seen at the bottom left. Though this will be seen during validation or testing. An exception is the holdout split where we hold out a color for each shape. This means that for example a "green T" is never seen during training, but a "green F" or a "blue T".
# of Tasks
![9_image_0.png](9_image_0.png)
Table 6: The target piece symbols (TPS) distributed over the task splits with different map sizes (Size) and number of pieces (N) on the board. The total possible number of target piece symbols is 9 · 6 · 8 = 432.
To create a task we first place the target piece on a board with the wanted map size. Then we sample uniform random from all possible pieces and place them until the wanted number of pieces is reached.
If a piece cannot be placed 100 times, then we resample a piece and try again. The coordinates are chosen at random uniform from the coordinates that fall into an area of the symbolic description.
We never set a piece into the center, because that is the location where the gripper is initially located.
## A.5 Experiment Details
We trained the agents on a single GeForce GTX
1080 Ti (11GB) where each of them consumed about 1GB of GPU memory. The training spanned 10.24 million time steps executed with 4 environments simultaneously (and a batch size of 64). The training took between 9.24 and 12.32 hours (11.86 hours on average). The random seed was set to 49184 for all experiments. We performed an evaluation run on the validation tasks after every 50 rollouts (51, 200 timesteps) and saved the best performing agent according to the mean reward.
Pr.Or. Step in K w/ FB Step in K w/o FB
C-S-P 8,601 8,806 ![9_image_3.png](9_image_3.png)
Table 7: The timesteps of the best model checkpoints.
As the evaluation criteria on the testings tasks we chose success rate which indicates the relative number of episodes (in a rollout or in a test split)
where the agent selected the correct piece:
$$\mathrm{mSR}={\frac{\sum^{N}s_{i}}{N}}{\mathrm{~where~}}s_{i}={\begin{cases}1,\\ 0,\end{cases}}$$
(1, for correct piece
$${\mathrm{for~correct~pieces}}$$ otherwise
## B Additional Results
In addition, we notice that the feedback has a positive effect on early success rates during *training* when we compare training runs of the same preference order groups with and without feedback (see 4). The intra-episodic feedback largely improves the early success rates of agents with teachers of preference orders **P (SCP, CSP) as well as those with preference orders *P* (SPC, CPS). There is also a noticable but lower effect on the preference orders P** (PSC, PCS) that perform already well early without the intra-episodic feedback. Though the latter seem to be confused by the feedback initially (until 10% of the training steps). The benefit of intra-episodic feedback is starting to decrease in later time steps, because the agent without that additional signal catch up on the success rates. Still these findings show that intra-episodic feedback is helpful to improve the learning in early stages.
![9_image_1.png](9_image_1.png)
## C Misc
![9_Image_2.Png](9_Image_2.Png)
Robot image in Figure 1 adjusted from https://commons.wikimedia.org/wiki/File:
Cartoon_Robot.svg. That file was made available under the Creative Commons CC0 1.0 Universal Public Domain Dedication.
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
✓ A1. Did you describe the limitations of your work?
6
✓ A2. Did you discuss any potential risks of your work?
7
✓ A3. Do the abstract and introduction summarize the paper's main claims?
1
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✓ **Did You Use Or Create Scientific Artifacts?** A.4
B1. Did you cite the creators of artifacts you used?
Not applicable. Left blank.
B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
Not applicable. Left blank.
B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
Not applicable. Left blank.
B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
Not applicable. Left blank.
B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
Not applicable. Left blank.
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
A.4
## C ✓ **Did You Run Computational Experiments?** 4
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
A.2 A.5 The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance.
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
A.2 A.5
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
4 A.5
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
3.4 A.2 D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
No response.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? No response.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
No response.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
No response.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
No response. |
rajaraman-etal-2023-investigating | Investigating Transformer-Guided Chaining for Interpretable Natural Logic Reasoning | https://aclanthology.org/2023.findings-acl.588 | Natural logic reasoning has received increasing attention lately, with several datasets and neural models proposed, though with limited success. More recently, a new class of works have emerged adopting a Neuro-Symbolic approach, called transformer guided chaining, whereby the idea is to iteratively perform 1-step neural inferences and chain together the results to generate a multi-step reasoning trace. Several works have adapted variants of this central idea and reported significantly high accuracies compared to vanilla LLM{'}s. In this paper, we perform a critical empirical investigation of the chaining approach on a multi-hop First-Order Logic (FOL) reasoning benchmark. In particular, we develop a reference implementation, called Chainformer, and conduct several experiments to analyze the accuracy, generalization, interpretability, and performance over FOLs. Our findings highlight key strengths and possible current limitations and suggest potential areas for future research in logic reasoning. | # Investigating Transformer Guided Chaining For Interpretable Natural Logic Reasoning
Rajaraman Kanagasabai, Saravanan Rajamanickam and Shi Wei Agency for Science, Technology and Research (A*STAR)
Institute for Infocomm Research Singapore 138632
{kanagasa, saravanan_rajamanickam, shi_wei}@i2r.a-star.edu.sg
## Abstract
Natural logic reasoning has received increasing attention lately, with several datasets and neural models proposed, though with limited success. More recently, a new class of works have emerged adopting a Neuro-Symbolic approach, called *transformer guided chaining*, whereby the idea is to iteratively perform 1step neural inferences and chain together the results to generate a multi-step reasoning trace. Several works have adapted variants of this central idea and reported significantly high accuracies compared to vanilla LLM's. In this paper, we perform a critical empirical investigation of the chaining approach on a multi-hop First-Order Logic (FOL)
reasoning benchmark. In particular, we develop a reference implementation, called Chainformer, and conduct several experiments to analyze the accuracy, generalization, interpretability, and performance over FOLs. Our findings highlight key strengths and possible current limitations and suggest potential areas for future research in logic reasoning.
## 1 Introduction
We consider deductive reasoning over Natural Logic (MacCartney and Manning, 2014; Moss, 2010), i.e., reasoning over statements expressed in language. Natural logic reasoning has received increasing attention lately, with several datasets (Yu et al., 2020; Liu et al., 2020; Clark et al., 2020; Dalvi et.al. 2021) and neural models proposed (Huang et.al. 2021, Jiao et.al. 2022, Clark et al.,
2020; Saha et.al. 2020; Wang et.al. 2021; Xu et.al. 2022; Pi et.al. 2022).
Most of the traditional neural approaches tackled multi-step reasoning as a single pass 'all-at-once' inference. For reasoning problems that are inherently multi-step, it is more natural to consider a symbolic machinery in tandem with the neural model. Taking inspiration from this philosophy, a new class of works have emerged recently by combining neural models (popularly using transformers (Vaswani et.al. 2017)) with symbolic chaining. The central idea is to iteratively perform 1-step neural inferences and chain together the results to generate a multi-step reasoning trace. ProofWriter (Tafjord et.al. 2021) was one of the first to explore this idea, and demonstrate >95% multi-hop reasoning accuracy on several synthetic datasets. (Picco et.al. 2021) and (Bostrom 2022)
reported similar results. Recently, several works (Qu et.al. 2022; Yang et.al. 2022; Tafjord et.al. 2022; Ghosal et.al. 2022; Ribeiro et.al. 2022; Hong et.al. 2022) applied variants of this approach on EntailmentBank (Dalvi et.al. 2021) and showed superior performance. The iterative approach is attractive because i) it is *faithful* in that it naturally reflects the internal reasoning process, and is inherently interpretable, ii) it has been shown to be easily adapted for multiple choice Q&A (Shi et.al. 2021) and open-ended Q&A (Tafjord 2022), besides Natural Language Inference (NLI), iii) it enables teachable reasoning (Dalvi et.al. 2022).
While the above results are promising, we argue that an unbiased third-party investigation is important to facilitate a better understanding of the strengths and weaknesses. This is the main goal of this paper. Towards this, we develop a reference implementation, called *Chainformer* that captures the core idea behind the chaining approach, and benchmark on a multi-hop FOL reasoning task using a recently proposed diagnostic dataset, called LogicNLI (Tian et.al. 2021). The dataset is composed of a rich class of FOLs that go beyond conjunctive implications and is non-trivial with a reported human reasoning accuracy of 77.5% (Tian et.al. 2021).
| Entailment | 𝑃 ⊢ h ˄ 𝑃 ⊬ ¬h |
|---------------|--------------------|
| Contradiction | 𝑃 ⊬ h ˄ 𝑃 ⊢ ¬h |
| Neutral | 𝑃 ⊬ h ˄ 𝑃 ⊬ ¬h |
| Paradox | 𝑃 ⊢ h ˄ 𝑃 ⊢ ¬h |
Table 1. Inference Relations between P and h.
We conduct several experiments to analyze the performance in terms of accuracy, generalization, interpretability, and expressiveness over FOLs. Our key findings are: 1) human level multi-step reasoning performance is achieved (84.5% machine vs 77.5% human), with a minimalist transformer guided chaining implementation, and even with a base model (80.4% base vs 84.5%
large). However, this requires the 1-step inferences be carefully trained for high accuracy; 2) the inferred reasoning chains are correct 78% of the time but could be more than twice longer than the optimal chains; 3) FOLs with simple conjunctions and existential quantifiers are easier to handle, whereas FOLs with equivalence are harder especially with universal quantifiers and disjunctions. Our results highlight the key strengths of the transformer-guided chaining approach and faithful reasoning in general, and suggest possible weaknesses that could motivate future research in multi-hop reasoning.
In related work, (Yu et al., 2020; Liu et al., 2020; Dalvi et.al. 2021; Tian et.al. 2021) have performed diagnostic studies on popular language models and pointed out limitations in logic reasoning capabilities. (Li et.al. 2022) investigated NLU
datasets to measure correlation with logic reasoning as a key skill. Our focus is different, and we aim to specifically analyze the iterative reasoning strategy for multi-hop logic reasoning, and hence is novel.
## 2 Problem Definition
We consider the NLI setting (Bowman 2015; Storks et.al. 2019). Let F = {f1,f2,··· ,fn}, be n simple sentences, called *Facts*; R = {r1,r2,··· ,rm}, a set of m compound sentences, called *Rules*. Then, given the tuple P = (F, R), called the *Premise*, and a statement h, called the *Hypothesis*; the inference problem is to determine i) the inference relation of h, and ii) a reasoning chain X, where =
{1,2, … , , … ,
} is a sequence such that =
(,
) , where ∈ and is a set of intermediate facts, with members not necessarily from F.
The inference relations can be *entailment*,
contradiction, *neutral*, or *paradox*, as defined in Table 1, where ⊢ is the entailment operator.
It is easy to see that the complexity of the problem varies based on the constraints imposed on F, R, X and the target inference labels of h. For example, RuleTaker (Clark et.al. 2020.) considers h to be 'true or 'false'. Additionally, R is restricted to be implication rules with conjunctions and negations. ProofWriter (Tafjord et.al. 2021) adopts a similar setting but allows h also to be undetermined ('neutral'),
In this paper we consider a more general NLI
problem following (Tian et.al 2021), where i) R is expressed using a rich class of FOLs with universal
∀ and existential ∃ quantifiers, logic connectives such as disjunctions ∨, implications →,
equivalence ≡ and negations ¬..ii) h can take any of the 4 inference labels (Table 1). Figure A-1 in the Appendix presents a sample problem instance.
## 3 Logic Reasoning Method
Logic reasoning using chaining strategy can be implemented in several ways, e.g. with fact selection (Bostrom et.al. 2022), rule selection
(Sanyal et.al. 2022), inference verification (Tafjord et.al. 2022), etc. We aim to adopt a minimalist implementation, as we believe it facilitates better examination of the strengths and weaknesses of the central methodology,.
We consider the Forward Chaining algorithm from Sec 9.3.2 of (Russell et.al. 2010), which is known to be sound and complete for a rich class of FOLs. Basically, the algorithm starts with the known facts and applies rules whose preconditions are satisfied, to infer new facts repeatedly until the hypothesis can be verified. To extend to natural language, our idea is to employ a transformer model to do fact unification and rule inference, and a second transformer to verify the given hypothesis against the currently known facts.
Rule Inference: In this step, given the current known facts and a rule, the rule preconditions are matched through unification to check for a rule match. If the latter succeeds, new facts are inferred
(intermediate facts); otherwise, no facts are generated and the control moves to the next rule. We model this as an abstractive Q&A task, with the current facts as the 'context', the chosen rule as the 'question' and the inferred facts as the desired 'answer'. A T5 transformer model (Raffel et.al.
2020) is employed for this purpose. In particular, the processed input to the model is 'question: <rule> context: <known facts>' and the output is
'inferred facts' if the rule can be triggered and 'none' otherwise.
Facts **Checking**: This step verifies the given hypothesis against the currently known facts based on Table 1. In our implementation, we accomplish this by formulating a 2-class NLI task, for inferring
′ ⊢ h and ′ ⊢ ¬h, where F' is the currently known facts.
Assemble Chain Additionally, for interpretability, we store the rule and the intermediate facts, every time a rule is satisfied. If the hypothesis is successfully verified, then the stored rules and facts are assembled to form a reasoning chain and returned.
An outline of the complete algorithm (Figure A-2), and an illustration (Figure A-4) are presented in in the Appendix, along with the training details.
## 4 Experiments And Results
We perform several experiments using the multihop FOL reasoning dataset LogicNLI, (Tian et.al 2021). The dataset includes 16K/2K/2K train/dev/test instances, with each instance consisting of over 12 facts and 12 rules, along with labeled statements and reasoning chains (called proof paths). A sample instance is in Figure A-4.
The results are presented in the following subsections. In all the tabulated results, the performance metrics are averaged over 10 runs and quoted in % for easier interpretation, unless stated otherwise. Details about the implementation, hyper-parameter settings and machine configuration are provided in Appendix A.
## 4.1 Comparison With Baselines (Table 2)
Firstly, we compare the performance in terms of accuracy against the baseline language models BERT (Delvin et.al. 2019), RoBERTa (Liu et.al.
2019), and XLNet (Yang et.al. 2019). Additionally, we considered a naïve algorithm, called NaiveFactsChecker, that does facts checking as in Sec 3 but without rule inference.
| Models | Accuracy (%) | |
|--------------------------------------------------|----------------|------|
| Dev | Test | |
| Random | 25.0 | |
| Human | 77.5 | |
| BERT-base | 30.1 | 29.5 |
| RoBERTa-base | 59.5 | 58.0 |
| BERT-large+MLP | 57.0 | 55.9 |
| RoBERTa-large+MLP | 65.0 | 68.3 |
| XLNet + MLP layer | 64.0 | 65.4 |
| NaiveFactsChecker | 50.1 | 51.2 |
| Chainformer + t5-base | 78.1 | 80.4 |
| Chainformer + t5-large | 80.2 | 84.5 |
| Table 2: Comparison of Accuracy against Baseline | | |
Table 2: Comparison of Accuracy against Baseline
models on Dev/Test
We observe that *NaiveFactsChecker* achieved
~50% (2x more than *Random*), suggesting that about 50% of the hypotheses in LogicNLI may be verifiable from the given facts alone. All LM baselines, barring *BERT-base*, performed better, with *RoBERTa-large+MLP* the best model. In comparison, *Chainformer* significantly outperformed all baselines and even exceeded human performance. This is surprising given that our implementation was minimalist without other functionalities often used in the published approaches. We argue that the results highlight the strength of iterative LM-guided reasoning over 'all-at-once' approach. Furthermore, the t5-base model version was comparable in performance to the t5-large version, which gives promise for lowcompute possibilities in implementing logic reasoning.
## 4.2 Detailed Performance Analysis
Here we investigate Chainformer approach in more detail to derive further insights.
## 4.2.1 Generalization (Figure 1)
To analyze the generalization ability of our
![3_image_1.png](3_image_1.png) approach, we varied training instances from 2400 (25%), 4800 (50%), 7200 (75%) and 9600 (100%)
and measured performance of i) 1-step inference, and ii) final reasoning (Figure 1).
We observe an almost linear improvement, indicating good generalization.
## 4.2.2 Performance Over Fols (Table 3, 4 & 5)
For our next, experiment, we studied the ability in reasoning over various FOL classes. LogicNLI contains 23 FOL classes in total, and we first analyzed Chainformer to determine the respective inference accuracies. A summary of the results is presented in Table 3, 4 and 5. Details about the individual classes and the respective accuracies are provided in the Appendix.
We notice that FOLs with logical equivalence are harder than implication, rather unsurprisingly, and the easiest with neither of them (Table 4). Similarly, disjunctions are harder than conjunctions (Table 5). Universal quantifiers are harder than no quantifiers, but existential quantifiers are easier in comparison (Table 3). A possible explanation is that neural unification is easier when matching any one relevant fact is sufficient rather than requiring the same for all relevant facts. However, this depends on the modeling and implementation specifics. It might be possible to alter the behavior with approaches, e.g. using different angle than 'Abstractive Q&A' (Section 3), but this needs more research.
![3_image_0.png](3_image_0.png)
Table 3 Accuracy over FOLs w.r.to Quantifiers
| Analysis of Connectives I → ≡ None | | | |
|--------------------------------------|------|------|------|
| # of FOL lasses | 15 | 6 | 2 |
| Accuracy | 81.2 | 74.5 | 92.3 |
Table 4 Analysis of Accuracy over FOLs w.r.to Implication → and Equivalence ≡
| Analysis of Connectives II ∧ ∨ None | | | |
|---------------------------------------|------|------|------|
| # of FOL Classes | 15 | 5 | 5 |
| Accuracy | 77.2 | 70.3 | 85.7 |
Table 5 Analysis of Accuracy over FOLs w.r.to Conjunctions ∧ and Disjunctions ∨
## 4.2.3 Interpretability (Table 6 & 7)
We next analyzed interpretability of the predicted reasoning chains, by asking two questions i) *Is the* chain *correct?* and ii) Is the chain optimal compared to the ground truth.
Towards this, we define two metrics viz.
correctness and *minimality*. A chain is deemed correct if and only if every chain fragment corresponds to a valid entailment. Minimality is defined as the ratio of the length of the target chain over the length of the predicted chain. Note that a chain may be incorrect even if one step corresponds to an invalid entailment. Thus, we may have situations where the hypothesis is successfully inferred but the chain is incorrect. Such chains are called *partially correct*.
As an exhaustive analysis of all chains is arduous, we sampled 200 'entailment' and 200
'contradiction' instances from the predicted chains, as a preliminary evaluation, and tasked a student (not part of the project) to manually label the validity of every chain fragment. The labels were later verified via a random check by two project members to remove incorrect entries. The results are presented in Table 6.
On average, we observed that 78.8% of the chains were fully correct (Table 6), providing support for chaining as a faithful reasoning approach. In fact, about 10% of the chains were partially correct and only 11.2% were incorrect.
To analyze minimality, we extracted the correct chains and computed the minimality score against the gold standard chains. An overall average score of 0.42 was observed (Table 7), implying that the correctly predicted chains could be 2.3 times longer than the optimal ones.
| Label | Correctness | |
|-------------------|---------------|------|
| Correct | 73.5 | |
| Incorrect | 12.8 | |
| Partially correct | 13.7 | |
| Entailment | Correct | 84.1 |
| Incorrect | 9.8 | |
| Partially correct | 6.1 | |
| Contradiction | | |
Table 6: Correctness of Predicted Chains
| Label | Minimality Score |
|---------------------------------------|--------------------|
| Entailment | 0.44 |
| Contradiction | 0.40 |
| Table 7 Minimality of Verified Chains | |
## 5 Discussion And Conclusions
We considered the recently emerging neurosymbolic approach for addressing multi-step natural logic reasoning, called the transformer guided chaining. The approach adopts an iterative reasoning strategy in contrast to the traditional neural approaches that tackle multi-step reasoning as a single pass 'all-at-once' inference. The iterative approach is attractive as it offers several advantages such as i) it is *faithful* in that it naturally reflects the internal reasoning process, ii) it is inherently interpretable, iii) it can be applied to multiple choice Q&A and open-ended Q&A,
besides Natural Language Inference.
We performed a detailed empirical investigation of this approach, using a challenging FOL
reasoning dataset. Our key findings are: 1) human level performance is achieved on multi-hop FOL
reasoning task with a minimalist implementation
(80.4% machine vs 77.5% human), and even with a base model (80.4% base vs 84.5% large). This provides support for the potential of chaining strategy and encourages possible applications on real life texts; 2) FOLs with simple conjunctions and existential quantifiers are easier to handle, whereas FOLs with equivalence are harder especially with universal quantifiers and disjunctions, suggesting scope for further research; 3) the predicted reasoning chains are correct 78%
of the time, but could be more than twice longer than the optimal chains. The latter implies that two or more *correct* reasoning chains are possible, and iterative reasoning strategy might return one of them (though sub-optimal). This underscores the importance of human validation in interpretability evaluation, as automating it, say by scoring exact match, is likely to underestimate the true performance, A key observation is that the approach hinges on how accurately the 1-step inferences can be performed, as small errors can propagate over multiple iterations and get magnified. For example, if the rule inference step results in false positives/negatives, it is unclear how the chaining performance will be impacted. In addition, if facts are incomplete or even inconsistent, how effective will the reasoning be? These are interesting research questions for further investigation.
(Ghosal et.al. 2022; Dalvi et.al. 2022) are steps along this direction.
On another direction, most of the chaining-based works have considered mainly 'entailment' as the inference relation. To handle real-life texts, it is important to go beyond simple entailment relations, and consider more sophisticated ones, e.g. necessity, possibility and rebuttal
(MacCartney and Manning, 2014; Huang et.al. 2022). To, cover such relations, new models and approaches are required, and they could facilitate enhancing the scope of current faithful reasoning approaches towards addressing advanced multihop reasoning scenarios.
## 5.1 Limitations
Our work is one of the first to perform a detailed empirical investigation of transformer guided chaining but is clearly preliminary. The following are some key limitations: - Evaluation of Interpretability: A fair evaluation of interpretability is not straightforward. In this paper, we reported results from a preliminary study with limited human labor.
- Analysis of negations: LogicNLI dataset uses negations in the facts, rules and statements but it is difficult to disentangle them for a fair investigation. Hence, we were unable to rigorously analyze the ability in handling negations.
- Evaluation on Real-life data: Our reported work focused on a synthetic dataset. For a more rigorous evaluation, it is imperative to consider more datasets including real-life ones.
## References
Kaj Bostrom, Zayne Sprague, Swarat Chaudhuri, and Greg Durrett. 2022. Natural language deduction through search over statement compositions. arXiv preprint arXiv:2201.06028.
Samuel Bowman, Gabor Angeli, Christopher Potts, and Christopher D Manning. 2015. A large annotated corpus for learning natural language inference. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 632–642, 2015 Peter Clark, Oyvind Tafjord, and Kyle Richardson.
2020. Transformers as soft reasoners over language.
In Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence, IJCAI20, pages 3882–3890. International Joint Conferences on Artificial Intelligence Organization. Main track Bhavana Dalvi, Peter Jansen, Oyvind Tafjord, Zhengnan Xie, Hannah Smith, Leighanna Pipatanangkura, and Peter Clark. 2021. Explaining answers with entailment trees. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 7358–7370, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Bhavana Dalvi, , Oyvind Tafjord, and Peter Clark.
2022. "Towards teachable reasoning systems."
arXiv preprint arXiv:2204.13074.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: pre-training of deep bidirectional transformers for language understanding. In NAACL-HLT.
Deepanway Ghosal, Aditya, S. and Choudhury, M.,
2022. Generating Intermediate Steps for NLI with Next-Step Supervision. arXiv preprint arXiv:2208.14641.
Chadi Helwe, Chloé Clavel, and Fabian M Suchanek.
2021. Reasoning with transformer-based models: Deep learning, but shallow reasoning. In 3rd Conference on Automated Knowledge Base Construction.
Chadi Helwe, Chloé Clavel, and Fabian Suchanek.
2022. LogiTorch: A PyTorch-based library for logical reasoning on natural language. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing: System Demonstrations.
Ruixin Hong, Hongming Zhang, Xintong Yu, and Changshui Zhang. 2022. METGEN: A ModuleBased Entailment Tree Generation Framework for Answer Explanation. In Findings of the Association for Computational Linguistics: NAACL 2022, pages 1887–1905, Seattle, United States.
Association for Computational Linguistics Yinya Huang, Meng Fang, Yu Cao, Liwei Wang, Xiaodan Liang. 2021. DAGN: Discourse-aware graph network for logical reasoning. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 5848-5855. Association for Computational Linguistics. https://doi.org/10.18653/v1/2021.naaclmain.467.
Yinya Huang, Zhang Hongming, Hong Ruixin, Liang Xiaodan, Zhang Changshui, and Yu Dong. 2022.
MetaLogic: Logical Reasoning Explanations with Fine-Grained Structure. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, Ivan Habernal, Henning Wachsmuth, Iryna Gurevych, and Benno Stein. 2018. The argument reasoning comprehension task: Identification and reconstruction of implicit warrants. *In Proceedings* of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1930–1940, New Orleans, Louisiana. Association for Computational Linguistics Alon Jacovi and Yoav Goldberg. 2020. Towards faithfully interpretable NLP systems: How should we define and evaluate faithfulness? In *Proceedings of* the 58th Annual Meeting of the Association for Computational Linguistics, pages 4198–4205, Online. Association for Computational Linguistics.
Fangkai Jiao, Yangyang Guo, Xuemeng Song, and Liqiang Nie. 2022. MERIt: Meta-Path Guided Contrastive Learning for Logical Reasoning. In Findings of the Association for Computational Linguistics: ACL 2022, pages 3496–3509, Dublin, Ireland. Association for Computational Linguistics.
https://doi.org/10.18653/v1/2022.findings-acl.276.
Nora Kassner, Benno Krojer, and Hinrich Schütze.
2020. Are Pretrained Language Models Symbolic Reasoners over Knowledge? In Proceedings of the 24th Conference on Computational Natural Language Learning, pages 552–564, Online.
Association for Computational Linguistics.
Jian Liu, Leyang Cui, Hanmeng Liu, Dandan Huang, Yile Wang, and Yue Zhang. 2020. LogiQA: A
challenge dataset for machine reading comprehension with logical reasoning. In Proceedings of the Twenty-Ninth International Joint
Conference on Artificial Intelligence, pages 36223628. https://doi.org/10.24963/ijcai.2020/501.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov.
2019. Roberta: A robustly optimized BERT
pretraining approach. CoRR.
Yitian Li, Jidong Tian, Wenqing Chen, Caoyun Fan, Hao He, and Yaohui Jin. 2022. To What Extent Do Natural Language Understanding Datasets Correlate to Logical Reasoning? A Method for Diagnosing Logical Reasoning. In Proceedings of the 29th International Conference on Computational Linguistics, pages 1708–1717, Gyeongju, Republic of Korea. International Committee on Computational Linguistics.
Qing Lyu, Marianna Apidianaki, Chris Callison-Burch.
2022. Towards Faithful Model Explanation in NLP:
A Survey. *arXiv preprint arXiv:2209.11326*.
Bill MacCartney and Chris Manning. 2014. Natural logic and natural language inference. Computing Meaning, 47:129–147, 2014.
Bill MacCartney and Christopher D Manning. 2009.
An extended model of natural logic*. In Proceedings* of the eighth international conference on computational semantics, pp. 140–156. Association for Computational Linguistics.
Lawrence S Moss. 2010. Natural logic and semantics.
In Logic, Language and Meaning, pages 84–93. Springer.
Siru Ouyang, Zhuosheng Zhang, Hai Zhao. 2021. Factdriven logical reasoning. Computing Research Repository, arXiv:2105.10334.
Gabriele Picco, Thanh Lam Hoang, Marco Luca Sbodio, and Vanessa Lopez. 2021. Neural Unification for Logic Reasoning over Natural Language. In Findings of the Association for Computational Linguistics: EMNLP 2021, pages 3939–3950, Punta Cana, Dominican Republic. Association for Computational Linguistics.
Xinyu Pi, Wanjun Zhong, Yan Gao, Nan Duan, JianGuang Lou. 2022. LogiGAN: Learning logical reasoning via adversarial pre-training. In Proceedings of the 36th *Conference on Neural* Information Processing Systems.
Hanhao Qu, Yu Cao, Jun Gao, Liang Ding, and Ruifeng Xu. 2022. Interpretable Proof Generation via Iterative Backward Reasoning. In *Proceedings of the* 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 2968–2981, Seattle, United States. Association for Computational Linguistics.
Danilo Neves Ribeiro, Shen Wang, Xiaofei Ma, Rui Dong, Xiaokai Wei, Henghui Zhu, Xinchi Chen, Peng Xu, Zhiheng Huang, Andrew Arnold, and Dan Roth. 2022. Entailment Tree Explanations via Iterative Retrieval-Generation Reasoner. In Findings of the Association for Computational Linguistics: NAACL 2022, pages 465–475, Seattle, United States. Association for Computational Linguistics.
Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer. In Journal of Machine Learning Research.
Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100,000+ questions for machine comprehension of text. *In Proceedings* of the 2016 Conference on Empirical Methods in Natu- ral Language Processing, pages 2383–2392, Austin, Texas. Association for Computational Linguistics.
Pranav Rajpurkar, Robin Jia, and Percy Liang. 2018.
Know what you don't know: Unanswerable questions for SQuAD. *In Proceedings of the 56th Annual* Meeting of the Association for Computational Linguistics.
Sanjay Subramanian, Ben Bogin, Nitish Gupta, Tomer Wolfson, Sameer Singh, Jonathan Berant, and Matt Gardner. 2020. Obtaining faithful interpretations from compositional neural networks. In ACL.
Jihao Shi, Xiao Ding, Li Du, Ting Liu, and Bing Qin.
2021. Neural Natural Logic Inference for Interpretable Question Answering. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 3673–3684, Online and Punta Cana, Dominican Republic.
Association for Computational Linguistics.
Soumya Sanyal, Harman Singh, and Xiang Ren. 2022.
Fairr: Faithful and robust deductive reasoning over natural language. In Annual Meeting ofthe Association for Computational Linguistics (ACL).
Soumya Sanyal, Zeyi Liao, Xiang Ren. 2022.
ROBUSTLR: A diagnostic benchmark for evaluating logical robustness of deductive reasoners. Computing Research Repository, arXiv:2205.12598.
Shane Storks, Qiaozi Gao and Joyce Yue Chai. 2019.
Recent advances in natural language inference: A
survey of benchmarks, resources, and approaches.
arXiv preprint arXiv:1904.01172.
Asher Stern, Shachar Mirkin, Eyal Shnarch, Lili Kotlerman, Ido Dagan, Amnon Lotan and Jonathan Berant. 2011. Knowledge and Tree-Edits in Learnable Entailment Proofs. In TAC. 2011.
Stuart Russell and Peter Norvig. 2010. Artificial intelligence: a modern approach, 3 edition. Prentice Hall.
Swarnadeep Saha, Prateek Yadav, and Mohit Bansal.
2021. multiPRover: Generating multiple proofs for improved interpretability in rule reasoning. In NAACL.
Swarnadeep Saha, Sayan Ghosh, Shashank Srivastava, and Mohit Bansal. 2020. PRover: Proof generation for interpretable reasoning over rules. In *Proceedings* of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 122–136, Online. Association for Computational Linguistics.
Nan Shao, Yiming Cui, Ting Liu, Shijin Wang, and Guoping Hu. 2020. Is Graph Structure Necessary for Multi-hop Question Answering? In Proceedings of the 2020 Conference on Empirical Methods in Natural Language *Processing (EMNLP)*, pages.
7187–7192, Online. Association for Computational Linguistics.
Changzhi Sun, Xinbo Zhang, Jiangjie Chen, Chun Gan, Yuanbin Wu, Jiaze Chen, Hao Zhou, and Lei Li.
2021. Probabilistic graph reasoning for natural proof generation. In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, pages 3140–3151, Online. Association for Computational Linguistics.
Oyvind Tafjord, Bhavana Dalvi, and Peter Clark. 2021.
ProofWriter: Generating implications, proofs, and abductive statements over natural language. In Findings of the Association for Computational Linguistics: *ACL-IJCNLP 2021*, pages 3621–3634, Online. Association for Computational Linguistics.
Oyvind Tafjord, Bhavana Dalvi, and Peter Clark. 2022.
Entailer: Answering Questions with Faithful and Truthful Chains of Reasoning. arXiv preprint arXiv:2210.12217.
Alon Talmor, Oyvind Tafjord, Peter Clark, Yoav Goldberg, and Jonathan Berant. 2020. Leap-of-thought:
Teaching pre-trained models to systematically reason over implicit knowledge. In Advances in Neural Information Processing Systems 33, NeurIPS 2020.
Jidong Tian, Yitian Li, Wenqing Chen, Liqiang Xiao, Hao He, Yaohui Jin. 2021. Diagnosing the first-order logical reasoning ability through logicNLI. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 3738-3747, Online and Punta Cana, Dominican Republic. https://doi.org/10.18653/v1/2021.emnlpmain.303.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In NeurIPS.
Adina Williams, Nikita Nangia, and Samuel Bowman.
2018. A broad-coverage challenge corpus for sentence understanding through inference. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1112– 1122, New Orleans, Louisiana. Association for Computational Linguistics.
Siyuan Wang, Wanjun Zhong, Duyu Tang, Zhongyu Wei, Zhihao Fan, Daxin Jiang, Ming Zhou, and Nan Duan. 2022. Logic-driven context extension and data augmentation for logical reasoning of text. In Findings of the Association for Computational Linguistics: ACL 2022, pages 1619-1629, Dublin, Ireland. Association for Computational Linguistics.
https://doi.org/10.18653/v1/2022.findings-acl.127.
Siyuan Wang, Zhongkun Liu, Wanjun Zhong, Ming Zhou, Zhongyu Wei, Zhumin Chen, and Nan Duan.
2021a. From LSAT: The progress and challenges of complex reasoning. arXiv preprint arXiv:2108.00648.
Siyuan Wang, Zhongkun Liu, Wanjun Zhong, Ming Zhou, Zhongyu Wei, Zhumin Chen, Nan Duan. 2022. From LAST: The progress and challenges of complex reasoning. In IEEE/ACM Transactions on Audio, Speech, and Language Processing, pages 2201-2216.
https://doi.org/10.1109/TASLP.2022.3164218.
Leon Weber, Pasquale Minervini, Jannes Münchmeyer, Ulf Leser, and Tim Rocktäschel. 2019. Nlprolog:
Reasoning with weak unification for question answering in natural language. *In Proceedings of the* 57th Annual Meeting of the Association for Computational Linguistics, pages 6151–6161.
Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Brian Ichter, Fei Xia, Ed Chi, Quoc Le, Denny Zhou. 2022. Chain of thought prompting elicits reasoning in large language models. *arXiv* preprint arXiv:2201.11903.
Fangzhi Xu, Jun Liu, Qika Lin, Yudai Pan, Lingling Zhang. 2022. Logiformer: A two-branch grph transformer network for interpretable logical reasoning. In Proceedings of the 45th *International* ACM SIGIR Conference on Research and Development in Information Retrieval, pages 10551065. https://doi.org/10.1145/3477495.3532016.
Kaiyu Yang, Jia Deng, Danqi Chen. 2022. Generating Natural Language Proofs with Verifier-Guided Search. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing.
Zhilin Yang, Zihang Dai, Yiming Yang, Jaime G.
Carbonell, Ruslan Salakhutdinov, and Quoc V. Le.
2019. Xlnet: Generalized autoregressive pretraining for language understanding. In NeurIPS.
Weihao Yu, Zihang Jiang, Yanfei Dong, Jiashi Feng.
2020. ReClor: A reading comprehension dataset requiring logical reasoning. In Proceedings of 8th International Conference on Learning Representations, Addis Ababa, Ethiopis.
## A Model Training And Parameters Baseline Models
Initially, we performed evaluation on LogicNLI
dataset (Tian, J. 2021). LogicNLI dataset contains different section: facts, rules statements and labels.
We have used train/dev/test 16000/2000/2000 examples for our models. For baseline experiments, we have re-implemented the finetuned BERT (Devlin et al., 2019) and RoBERTa
(Liu et al., 2019) base version and used [CLS] facts rules [SEP] statement [SEP] as input to the transformers to predict the logical relation. BERT
uses 12-layer, 768-hidden, 12-heads, 110M parameters for base version and RoBERTa uses 123M parameters.
Our models are trained end-to-end using AdamW optimizer with the decay rate of 0.9. In addition, we have experimented with different learning rates to understand if there is any change in performance. However, learning rate of 5e-6 shows a steady linear increase with the specified decay rate for RoBERTa model. Hence, we have retained the similar hyper-parameters as mentioned in the LogicNLI dataset (Tian, J. 2021), for our BERT and RoBERTa base version. RoBERTa performs better than BERT base and shows 59% on the validation set and 57% on the test set.
The hyper-parameters are listed in the Table A1.
Parameter s BERT RoBERTa XLNET
| Parameter | BERT | RoBERTa | XLNET |
|--------------|--------|-----------|---------|
| s batch size | 16 | 16 | 16 |
| lr | 5e−6 | 5e−6 | 5e−6 |
| decay rate | 0.9 | 0.9 | 0.9 |
| l2 coeff. | 1e−5 | 1e−5 | 1e−5 |
| early stop | 5 | 5 | 5 |
| epochs | 20 | 20 | 20 |
| optimizer | AdamW | AdamW | AdamW |
Table A-1 Hyperparemeters for Experiments
## Logic Reasoning Model
Rule Inference We apply T5 (Raffel et al., 2020)
as the encoder-decoder model to generate new facts given the input facts and rule. Given labeled reasoning chains in the LogicNLI dataset (Tian et al., 2021), it is not straight forward to train the model as they provide only 'positive' examples.We build our training set as follows. Given a training instance, we use the logic representation of the facts and rules and apply every rule expression on the fact expressions to generate 1-step inference with an off-the-shelf logic reasoner. The inferred facts are converted to natural language using a simple rule-based technique. The natural language versions of the source rules and facts are extracted from the dataset, and a training set is prepared using the processed input as '*question: <rule> context:*
<known facts>' and the output 'inferred *facts*', if the rule can be triggered and 'None' otherwise.
During training, we set number of beams as 50 and number of returned sequences as 5. We randomly split the 9600 instances into 80% training and 20% test for 5 times and report the average performance. We measure accuracy using the exact matching ratio.
As the input size depends on the facts, which may grow over multiple iterations, there is an impact on the token size limitations. We analyze the instances and find that the average size of each instance 191.758 tokens (Min: 171; Max: 240). Our current T5 model can handle sequences with up to 512 tokens. Assuming the worst case (Max size 240; 4 tokens/fact), the chaining process can go up to depth=68, before the limit is reached. We argue that this is sufficiently large for LogicNLI.dataset.
For inference/real world examples, working with documents greater than 512 size, we can chunk the document (facts/rules) and use Roberta to encode each chunk accordingly.
Facts Checking We adopt RoBERTa (Liu et al.,
2019) base version and used [CLS] facts [SEP]
hypothesis [SEP] as input to the transformers to predict the inference relation. The hyperparameters are as in the Table A-1.
![9_image_0.png](9_image_0.png) ![9_image_4.png](9_image_4.png)
![9_image_3.png](9_image_3.png)
![9_image_5.png](9_image_5.png) ![9_image_6.png](9_image_6.png)
Figure A-1: Sample Instance for Illustration Sample Instance for Illustration, dataset showing facts, rules, a statement, proofs, the path and the label.
Machine Configuration For baseline models, initially we have used NVIDIA-GeForce RTX 2080 series with eight cores of GPU machines for all our experiments.
Later, in order to train t5 large models, we have used NVIDIA-GeForce Tesla V100 series SXM232GB with 5 cores of GPU machines. Models were trained for 3-5 hours for training and reasoning.
## B Supplementary Material Sample Instance For Illustration
Figure A-1 presents a sample instance for illustration.
## Algorithm Pseudocode
Figure A-2 provides the full pseudocode of our algorithm outlined in Section 3.
## Illustration Of Output
Figure A-3 presents an illustration of the output by algorithm Chainformer.
## Additional Experiments Analysis Of Inference Relations (Table A-2)
Here, we present the detailed reasoning performance for the four labels. 'Entailment' and 'Contradiction' performance were similar. 'Paradox' was the toughest (F1=74.4) among all. It had a high precision but low recall, as two reasoning chains are required for its classification.
In contrast, 'neutral' had a lower precision but higher recall since most of the missed hypotheses will be labeled thus.
![9_image_1.png](9_image_1.png)
![9_image_2.png](9_image_2.png)
Figure A-2 Algorithm Chainformer Pesudocode
| Labels | Test | | |
|---------------|--------|------|------|
| P | R | F1 | |
| Contradiction | 82.7 | 81.6 | 82.1 |
| Entailment | 81.3 | 82.0 | 81.6 |
| Neutral | 75.0 | 92.6 | 82.9 |
| Paradox | 86.0 | 65.6 | 74.4 |
Table A-2: Analysis of Inference Relations
## Performance Over Fols
LogicNLI dataset tags over 23 classes of FOLs.
Each class is named using an abbreviation of the rule members as below. Given a rule, we denote the FOL connectives viz. conjunction ∧ (C),
disjunction ∨ (D), implication → (I), equation ≡
(Q), universal quantifier ∀ (A), and existential quantifier ∃ (E), with a letter (bracketed), and concatenate their respective lettersin the order they appear in the rule. For example, Rule 4 in Figure 4 would belong to the class 'ACQ'. The accuracy results of all classes are presented in Table A-3.
![10_image_0.png](10_image_0.png)
![10_image_1.png](10_image_1.png)
Analysis of FOLs with Conjunction **(Table A-4)**
We also analyzed the accuracy over FOLs with conjunctions in implication rule (before and after
→) and similarly in rules with equivalence. Results imply that conjunctions in the consequent are harder for implications. In case of equivalence, it is even harder possibly the implication works both ways.
Sample Instance from LogicNLI
Figure A-4 presents a sample instance from LogicNLI as an illustration.
| FOL Class | Accuracy |
|-------------|------------|
| CIC | 100.0% |
| CQC | 100.0% |
| EDI | 97.3% |
| EIC | 97.1% |
| EI | 92.9% |
| CE | 92.9% |
| EC | 91.7% |
| ADI | 91.4% |
| ECI | 90.9% |
| CQ | 90.0% |
| ACI | 90.0% |
| AQ | 89.3% |
| EDIC | 88.9% |
| I | 85.2% |
| Q | 81.5% |
| ECIC | 80.0% |
| AI | 80.0% |
| AIC | 75.6% |
|-------|---------|
| ACIC | 75.0% |
| DI | 73.9% |
| ACQ | 44.8% |
| ACQC | 41.7% |
| ADIC | 0.0% |
![11_image_0.png](11_image_0.png)
![11_image_1.png](11_image_1.png)
Table A-4. Analysis of Conjunctions Facts: (F1) Pierce is not crazy.(F2) Norman is breakable.(F3) Travis is not terrible.(F4) Alfred is terrible.(F5) Norman is crazy.(F6) Norman is difficult.(F7) Pierce is difficult.(F8) Kerry is cautious.(F9) Pierce is not cautious.(F10) Alfred is not breakable.(F11) Travis is not breakable.(F12) Kerry is careful. Rules: ((R1) Norris being not cautious implies that Norman is not crazy. (R2) If someone is not difficult or he is terrible, then he is cautious. (R3) Pierce being breakable implies that Norman is difficult. (R4) If Travis is crazy, then Alfred is cautious and Pierce is not difficult. (R5) As long as someone is either not difficult or breakable, he is terrible and not careful. (R6) As long as someone is difficult, he is not crazy and breakable. (R7) If there is at least one people who is not breakable, then Kerry is careful. (R8) If Kerry is not careful and Alfred is cautious, then Pierce is not difficult. (R9) If there is someone who is both crazy and terrible, then Alfred is careful. (R10) If someone is not cautious, then he is crazy. (R11) If someone is terrible or not cautious, then he is difficult. (R12) If there is at least one people who is both not breakable and crazy, then Kerry is careful. Statement: Pierce is not careful.
Label: entailment Reasoning **Path:** [[FACT(7)--> RULE (6)]--> RULE(5)]
Statement: Norman is careful. Label: contradiction
![11_image_2.png](11_image_2.png)
Reasoning **Path:** FACT(2) --> RULE (5)
![11_image_3.png](11_image_3.png)
Figure A-4: An instance from LogicNLI,dataset showing facts, rules, Statements, reasoning paths and labels.
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
✓ A1. Did you describe the limitations of your work?
Section 5.1
✗ A2. Did you discuss any potential risks of your work?
No potential risks anticipated.
✓ A3. Do the abstract and introduction summarize the paper's main claims?
Section 1
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B **Did You Use Or Create Scientific Artifacts?**
Not applicable. Left blank.
B1. Did you cite the creators of artifacts you used?
No response.
B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
No response.
B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
No response.
B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
No response.
B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
No response.
B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
No response.
## C ✓ **Did You Run Computational Experiments?** Section 4
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
Appendix A
The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance.
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Appendix A
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
Section 4
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
Appendix A
D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
No response.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? No response.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
No response.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
No response.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
No response. |
lai-etal-2023-multilingual | Multilingual Multi-Figurative Language Detection | https://aclanthology.org/2023.findings-acl.589 | Figures of speech help people express abstract concepts and evoke stronger emotions than literal expressions, thereby making texts more creative and engaging. Due to its pervasive and fundamental character, figurative language understanding has been addressed in Natural Language Processing, but it{'}s highly understudied in a multilingual setting and when considering more than one figure of speech at the same time. To bridge this gap, we introduce multilingual multi-figurative language modelling, and provide a benchmark for sentence-level figurative language detection, covering three common figures of speech and seven languages. Specifically, we develop a framework for figurative language detection based on template-based prompt learning. In so doing, we unify multiple detection tasks that are interrelated across multiple figures of speech and languages, without requiring task- or language-specific modules. Experimental results show that our framework outperforms several strong baselines and may serve as a blueprint for the joint modelling of other interrelated tasks. | # Multilingual Multi-Figurative Language Detection
Huiyuan Lai, Antonio Toral, Malvina Nissim CLCG, University of Groningen / The Netherlands
{h.lai, a.toral.ruiz, m.nissim}@rug.nl
## Abstract
Figures of speech help people express abstract concepts and evoke stronger emotions than literal expressions, thereby making texts more creative and engaging. Due to its pervasive and fundamental character, figurative language understanding has been addressed in Natural Language Processing, but it's highly understudied in a multilingual setting and when considering more than one figure of speech at the same time.
To bridge this gap, we introduce *multilingual* multi-figurative language modelling, and provide a benchmark for sentence-level figurative language detection, covering three common figures of speech and seven languages. Specifically, we develop a framework for figurative language detection based on template-based prompt learning. In so doing, we unify multiple detection tasks that are interrelated across multiple figures of speech and languages, without requiring task- or language-specific modules.
Experimental results show that our framework outperforms several strong baselines and may serve as a blueprint for the joint modelling of other interrelated tasks.
## 1 Introduction
Figurative language is ubiquitous in human language, allows us to convey abstract concepts and emotions, and has been embedded intimately in our cultures and behaviors (Roberts and Kreuz, 1994; Harmon, 2015). In the hyperbolic sentence "My heart failed a few times while waiting for the result.", the expression "*my heart failed a few times*"
is not a literal heart-stop, it exaggerates the mood of when waiting for a possibly important result, thereby vividly showing anxiety.
Recent years have seen a lot of interest in figurative language processing in the NLP community, including the successful organization of dedicated workshops (Beigman Klebanov et al., 2018; Klebanov et al., 2020; Ghosh et al., 2022). There are many works focusing on figurative language
![0_image_0.png](0_image_0.png)
detection, mostly in English, including hyperbole (Troiano et al., 2018), metonymy (Nissim and Markert, 2003), metaphor (Tsvetkov et al., 2014),
idiom (Liu and Hwa, 2018) and sarcasm (Hazarika et al., 2018). In addition, researchers have started start to pay attention to figurative language detection in a multilingual scenario (Tsvetkov et al.,
2014; Tedeschi et al., 2022; Aghazadeh et al., 2022; Tayyar Madabushi et al., 2022), where models can exploit cross-lingual knowledge transfer (Conneau and Lample, 2019). Nonetheless, detection tasks for different figures of speech are usually studied independently of each other, which leads to having to train separate models for each figure of speech.
However, different figures of speech are often related to each other, and therefore models can thus potentially benefit from cross-figurative knowledge transfer, as empirically shown by Lai and Nissim
(2022) in a monolingual setting for English.
In this paper we investigate how these related detection tasks can be connected and modelled jointly in a multilingual way (see Table 1). To do so, we propose a multitask framework to model multilingual multi-figurative language detection at the sentence level. As shown in Figure 1, our goal is to connect the detection tasks from different languages and different figures of speech, resulting in a unified model which can benefit from cross9254
| Reference | Description | M-Lang | M-Fig | M-Task | Level |
|--------------------------------|--------------------------------------------------|----------|---------|----------|----------|
| Troiano et al. (2018) | Hyperbole detection in English | ✗ | ✗ | ✗ | Sentence |
| Tedeschi et al. (2022) | Multilingual idiom detection | ✓ | ✗ | ✗ | Word |
| Tayyar Madabushi et al. (2022) | Multilingual idiom detection | ✓ | ✗ | ✗ | Sentence |
| Aghazadeh et al. (2022) | Multilingual metaphor detection | ✓ | ✗ | ✗ | Sentence |
| Lai and Nissim (2022) | Multi-figurative language generation in English | ✗ | ✓ | ✓ | Sentence |
| Our work | Multilingual multi-figurative language detection | ✓ | ✓ | ✓ | Sentence |
lingual and cross-figurative knowledge transfer.
Generally, a multi-task framework consists of shared modules and task-specific modules. With the development of pre-trained language models
(PLMs), prompt learning offers the opportunity to model multiple tasks in a framework that does not require task-dependent parameters (Radford et al.,
2019; Brown et al., 2020; Fu et al., 2022; Mishra et al., 2022). With this method, task-specific language instructions are predefined and used to guide the model to handle different tasks.
In practice, we first formalize the figurative language detection task as a text-to-text generation problem, where the input is the source sentence while the target is a textual label (e.g. "literal" or
"idiomatic"). This method thus enables us to train our models in a sequence-to-sequence (seq2seq)
fashion. We then prepend the prompt template to source sentences from various tasks when feeding them into the model. This connects multiple figures of speech and languages in a unified framework, also leading to a better understanding of how to jointly model tasks related to each other. We perform extensive experiments on three figures of speech: hyperbole, idiom, and metaphor, involving seven languages (English:EN, Chinese:ZH, German:DE, Spanish:ES, Italian:IT, Farsi:FA, and Russian:RU).
Our main contributions are as follows: (i) We introduce the novel task of multilingual multifigurative language detection, wherein we explore the potential of joint modelling. (ii) We introduce a multitask and multilingual framework based on prompt learning, which unifies interrelated detection tasks without task- nor language-specific modules. (iii) We evaluate the model's generalization capabilities across a range of figures of speech and languages: extensive experiments are run for different settings, including in-language, zero-shot, cross-lingual, and cross-figurative to show how the unified framework performs. (iv) Our framework may serve as a blueprint for joint modelling of other interrelated tasks, such as the detection of hate speech (Waseem and Hovy, 2016), offensive and abusive language (Caselli et al., 2020), toxicity (Pavlopoulos et al., 2020), as well as fake news and AI-generated content (Zellers et al., 2019).
We have released our code and all preprocessed dataset.1
## 2 Related Work
We briefly introduce the background of figurative language detection, from feature engineering to neural-based approaches, as well as prompt-based learning with PLMs.
## 2.1 Figurative Language Detection
This task often involves word-level and sentencelevel detection. Word-level detection is concerned with identifying the exact words within the context of a sentence used with a figurative meaning.
Sentence-level detection, as a binary classification problem, requires to automatically detect whether a given sentence is literal or not.
Feature Engineering Traditionally, researchers have investigated hand-engineered features to understand figurative usages. These features are primarily concerned with linguistic aspects, including word imageability (Broadwell et al., 2013; Troiano et al., 2018), word unexpectedness (Troiano et al.,
2018), syntactic head-modifier relations (Nissim and Markert, 2003), abstractness and semantic supersenses (Tsvetkov et al., 2014), property norms (Bulat et al., 2017), pragmatic phenomena (Karoui et al., 2017), together with other aspects such as sentiment (Troiano et al., 2018; Rohanian et al., 2018) and sensoriality (Tekiroglu et al. ˘ ,
2015). These features rely heavily on manual extraction and are very much task-dependent. Exploiting verb and noun clustering (Shutova et al.,
1https://github.com/laihuiyuan/MMFLD
2010) and bag-of-words approaches (Köper and Schulte im Walde, 2016) are common automated methods to reduce manual work.
Neural-Based Approaches In the last decade, researchers have moved from feature engineering to neural-based modelling, using LSTM- (Wu et al., 2018; Gao et al., 2018; Mao et al., 2019; Kong et al., 2020) and CNN-based approaches (Wu et al., 2018; Kong et al., 2020) for figurative language detection.
Most recently, PLMs have been used for this task, usually yielding new state-of-the-art results (Su et al., 2020; Choi et al., 2021; Zeng and Bhat, 2021; Tedeschi et al., 2022). Similar to other NLP
tasks, researchers have also moved towards multilingual detection (Tsvetkov et al., 2014; Tedeschi et al., 2022; Tayyar Madabushi et al., 2022; Aghazadeh et al., 2022), especially thanks to crosslingual knowledge transfer via multilingual PLMs.
All these works focus on single figures of speech, i.e. detecting whether a sentence (or each word in a sentence) contains a given figure of speech or it is literal. We take here the first step towards multilingual multi-figurative language modelling to introduce a unified framework for multiple languages and multiple figures of speech, focusing on sentence-level detection.
## 2.2 Pre-Training And Prompt Learning
Over the past few years, PLMs have brought NLP to a new era (Devlin et al., 2019; Radford et al., 2018, 2019; Brown et al., 2020). PLMs are pre-trained on massive textual data in a selfsupervised manner, and then fine-tuned on downstream tasks with task-specific training objectives.
This paradigm, however, has to be adapted to different target tasks, where the task-specific objectives are different from the pre-training one, and the introduction of additional parameters such as a PLM-based classifier is at times necessary.
Prompt learning, a new learning paradigm based on PLMs, aims to make better use of pre-trained knowledge by reformulating tasks to be close to the pre-training objectives (Liu et al., 2022). Specifically, this is a method of leveraging PLMs by prepending task-specific prompts to the original input when feeding it into PLMs. One way to do this is with manually designed templates as task instructions (Radford et al., 2019; Raffel et al., 2020);
another one is to use continuous prompts that optimize a sequence of continuous task-specific vectors (Lester et al., 2021; Li and Liang, 2021). More
Form Lang Train Valid Test
Hyperbole EN 3,352 100 300
ZH 3,760 600 1,000
Idiom
EN 18,676 1,470 200 [41/159]
DE 14,952 1,670 200 [19/181]
ES 12,238 1,706 199 [66/133]
IT 15,804 1,732 200 [48/152]
Metaphor
EN 12,238 4,014 4,014
ES 12,238 2,236 4,474
FA 12,238 1,802 3,604
RU 12,238 1,748 3,498
recently, Fu et al. (2022) have introduced an mT5based framework to learn a unified semantic space blurring the boundaries of 6 NLP tasks with the prompting method, which we adopt in this work.
Here, we investigate how a small PLM such as mt5 can be used in the multilingual multitask prompting framework, also to better understand how interrelated tasks can benefit from such a scheme.
Compared to very large models like GPT-3
(Brown et al., 2020), smaller models have the significant advantage of lower hardware requirements, making it easier to customize them quickly and cheaply for specific tasks, to implement modelling ideas iteratively, and for other researchers to reproduce experiments, too. Using a small PLM could however be very challenging when modelling more unrelated NLP tasks than those addressed in previous and in the current work, so this is something to bear in mind for future extensions.
## 3 Tasks And Datasets 3.1 Task Formulation
We focus on figurative language detection at sentence-level, which can be viewed as a binary classification task that requires identifying whether a given sentence is literal or non-literal (e.g. idiomatic). To unify multiple figurative language detection tasks in different languages, we reformulate them as a text-to-text generation problem, where our model will generate the textual label for each given sentence. For instance, given a sample s from a detection task T*idiomatic* ∈ T, where T = {Thyperbole, Tidiom, T*metaphor*} is the task set we consider, the model aims to output the text label y ∈ {Literal, Idiomatic}.
9256
![3_image_0.png](3_image_0.png)
## 3.2 Datasets
We use five existing figurative language datasets for our experiments, which cover three figures of speech and seven languages. Table 2 shows the dataset statistics for the various languages in each figure of speech.
Hyperbole HYPO (Troiano et al., 2018) is an English dataset containing 709 hyperbolic sentences with their corresponding non-hyperbolic versions.
HYPO-Red (Tian et al., 2021) is another dataset that includes literal and hyperbolic texts. We combine these two datasets for the English hyperbole detection task. HYPO-cn (Kong et al., 2020) is a Chinese hyperbole detection dataset. Since both English and Chinese hyperbole datasets are rather small compared to the sizes of the training datasets for the other figures of speech, we upsample them by random instance replication obtaining training sets of 10,000 samples.
Idiom ID10M (Tedeschi et al., 2022) is a multilingual idiom dataset, containing automaticallycreated (silver) training and validation data in 10 languages and manually-created (gold) test sets in 4 languages: English, German, Italian, Spanish. This dataset is designed for word-level idiom detection; we convert it to sentence-level labels and use the four languages with gold data.
Metaphor LCC (Mohler et al., 2016) is a multilingual metaphor dataset derived from web-crawled data in four languages: English, Spanish, Russian, and Farsi. It provides metaphoricity ratings for within-sentence word pairs on a four-point scale, including 0 as no, 1 as weak, 2 as Conventional, and 3 as clear metaphor. We use the data preprocessed by Aghazadeh et al. (2022).
## 4 Multilingual Multi-Figurative Model
We propose a multitask and multilingual framework based on template-based prompt learning for figurative language detection.
## 4.1 Multitask Prompt Training
We use mT5 (Xue et al., 2021) as our backbone and jointly model multiple detection tasks, with the ultimate goal of having one single model that can handle the detection of multiple figures of speech in multiple languages. The overall framework is illustrated in Figure 2. Given a sample x from the t th task Tt, it is first combined with the predefined prompt template pt and then fed into model M, which is expected to produce the label y: M(*x, p*t) = y
′.
We minimize the negative log-likelihood of the sequences of the model's outputs, the loss function being formulated as:
$$L_{\theta}=-\sum\log(\mathbf{y}\mid\mathbf{x};\theta)\qquad\qquad(1)$$
where θ are the parameters of mT5. x and y represents the sequences of the given sentence x and its text label y, respectively. We use the multilingual and multi-figurative samples from dataset T to finetune mT5, adapting it to the figurative language detection tasks.
We design the prompt templates based on our intuition of how we would ask a human annotator to complete the figurative language detection task.
In our main framework, we use a cross-lingual template setting whose templates for all tasks are in English. We will assess the impact of different prompts settings, including template and language
(see Sec 5.4).
## 4.2 Generalization
We investigate the generalization ability of our proposed framework in cross-figurative and crosslingual scenarios, where the training and test data come from different figure-of-speech or different languages.
Cross-Figurative Knowledge Inspired by Lai and Nissim (2022), we evaluate our framework in terms of cross-figurative knowledge transfer. The hypothesis is that different figures of speech might share some figurative features, and that a text may contain different figures of speech simultaneously, possibly triggered by different textual portions, so that a single framework might warrant a large knowledge gain through transfer from one figure of speech to another.
Using a multitask framework to jointly model multiple figurative language detection tasks, the cross-figurative generalization ability is expected to improve performance across different tasks compared to single figurative language modelling.
Cross-Lingual Knowledge Multilingual PLMs are pre-trained on texts from multiple languages in a self-supervised way, which enables different languages to be represented in a single space. Therefore, words and phrases that are similar across languages will be close to each other. We extend and evaluate the cross-lingual generalization in metaphor carried out by Aghazadeh et al. (2022)
to a setting with multiple figures of speech. The hypothesis is that if the knowledge of figurative language is transferable across languages, then the model Mlm would be able to have a good generalization in language ln based on what it has learned in language lm: Mlm(*x, p*t) = y
′, for (x, y) ∈ Ttin language ln. Furthermore, cross-lingual knowledge transfer can further improve model performance when doing multilingual modelling.
However, cultural differences often have a great influence on the usage of figurative language. Idioms are culture-/language-specific, for example, with established meanings over a long period of usage in a specific cultural background (Nunberg et al., 1994). Therefore, we expect that the model will have different performances in cross-lingual generalization for different figures of speech depending on how culturally-related the languages involved are.
## 5 Experiments 5.1 Setup
We use mT5-base (580M parameters) to evaluate our framework. All experiments are implemented atop Transformers (Wolf et al., 2020). We train our models with batch size 32, using the Adam optimiser (Kingma and Ba, 2015) with a polynomial learning rate decay. We set a linear warmup of 1,000 steps for a maximum learning rate of 1e-4 and a maximum decay of 10,000 steps for a minimum learning rate of 5e-5. We evaluate checkpoints every 1,000 steps, and use early stopping (patience 5) if validation performance does not improve. Following Aghazadeh et al. (2022), we report performance as detection accuracy for all experiments.
In Section 5.4, we include an additional analysis for the unbalanced idiom datasets.
## 5.2 Model Settings
Since we take the first step towards the joint modelling for multilingual multi-figurative language detection, we conduct extensive experiments with different architectures and settings, leading to five sets of models. Additionally, we obtain zero-shot results in non-English languages by utilizing Englishonly variants of the same sets of models.
- **Baseline** Following Tayyar Madabushi et al.
(2022), we train a binary detection classifier for each figure of speech and language by fine-tuning multilingual BERT (Devlin et al., 2019). Our work is similar to one previous work on sentencelevel metaphor detection, which is carried out by Aghazadeh et al. (2022) in a multilingual setting. However, they assume that the phrase in the sentence to be classified as metaphoric or not is already known in advance, while our models do not use such information. Therefore, we do not consider it as a baseline model.
- **Vanilla mT5** Similar to mBERT, we fine-tune mT5 on specific figures of speech for each language but in a seq2seq fashion.
- **Prompt mT5** We fine-tune mT5 with the prompt template in a seq2seq way for each figure of speech in each language with the aforementioned sets of models.
| Model | Hyperbole | Idiom | Metaphor | | | | | | | |
|------------------------------------------------|-------------|---------|------------|-------|-------|-------|-------|-------|-------|-------|
| EN | ZH | EN | DE | ES | IT | EN | ES | FA | RU | |
| Main results Baseline | 72.33 | 80.40 | 79.00 | 72.50 | 66.33 | 70.50 | 81.37 | 80.11 | 74.83 | 79.93 |
| Vanilla mT5 | 72.67 | 71.40 | 79.50 | 74.50 | 64.82 | 76.00 | 82.64 | 82.32 | 77.33 | 82.25 |
| + multitask | 72.67 | 81.40 | 62.00 | 74.50 | 56.78 | 72.00 | 81.86 | 81.20 | 77.61 | 83.76 |
| Prompt mT5 | 81.00 | 81.60 | 79.50 | 75.00 | 68.34 | 75.00 | 83.43 | 82.66 | 76.64 | 83.39 |
| + multitask | 82.00 | 82.60 | 86.00 | 79.00 | 67.84 | 76.00 | 83.06 | 83.10 | 78.14 | 83.16 |
| (Zero-shot) with EN model Baseline 72.33 69.60 | 79.00 | 62.00 | 61.81 | 60.00 | 81.37 | 71.70 | 61.29 | 69.01 | | |
| Vanilla mT5 | 72.67 | 70.20 | 79.50 | 53.00 | 64.32 | 70.50 | 82.64 | 75.10 | 68.70 | 76.10 |
| + multitask | 65.67 | 64.90 | 72.50 | 52.50 | 37.69 | 63.50 | 82.41 | 71.86 | 66.84 | 73.61 |
| Prompt mT5 | 81.00 | 74.00 | 79.50 | 59.00 | 69.85 | 76.50 | 83.43 | 75.95 | 70.17 | 76.39 |
| + multitask | 82.33 | 76.10 | 81.50 | 65.60 | 66.83 | 79.50 | 81.27 | 74.99 | 68.70 | 75.93 |
- **+ multitask** These are multilingual multitask models. We fine-tune mT5-based models with their corresponding single-task training methods using all data from T.
- **Zero-shot with EN model** Based on the above models, but we train them on English data only and test them on non-English languages.
## 5.3 Results
Table 3 reports results on three figurative language detection tasks in seven languages.
Main Results We see that Vanila mT5 performs better than mBERT on most tasks, except ZH hyperbole and ES idiom. When Vanila mT5 is used for multitask training, unsurprisingly, its performance drops in many tasks. One straight reason is that it is challenging to model multiple tasks at once.
The other possible reason is that a text may contain features of multiple figures of speech at the same time, but there is not enough evidence to guide the model to perform a specific task. In other words, the model may correctly predict the figurative form for a given text, but it does not match the label of the target task.
When looking at Prompt mT5, we see that the model with prompt training brings improvement for most tasks compared to Vanila mT5. This shows the effectiveness of the prompt, which instructs the model to perform the target task. Prompt mT5 with multi-task training has the best performances on most tasks: (i) it shows a steady improvement in hyperbole detection; (ii) in idiom detection performances are boosted for EN, DE, and IT though the ES score is lower compared to Prompt mT5;
(iii) for metaphor detection it achieves the highest accuracy in ES and FA but slightly underperforms in EN and RU compared to Prompt mT5.
Zero-Shot For zero-shot results on non-EN languages using EN models, we see similar trends to the main results (see Table 3, second block).
Vanilla mT5 has overall better performances than its multitask counterpart and mBERT. We observe that Prompt mT5-based models have a clear edge in this setting, with the highest accuracy for all tasks and languages obtained by one of them. EN models yield the highest accuracy scores in EN hyperbole and metaphor detection, and even in idiom detection of ES and IT with zero-shot. The main reason for this is most likely that the idiom training and validation data is created automatically, leading to a non-test set of inferior quality and reduced performance on the test set compared to the validation set (see Sec 5.4). Overall, a zero-shot approach for figurative language detection when lacking highquality resources in the target language seems a highly reliable strategy.
## 5.4 Analysis And Dissussion
Error Analysis. Table 4 presents the results of our main model (Prompt mT5 + multitask) on validation and test sets. The performances on the test sets are comparable to the validation sets for hyperbole and metaphor, while the idiom task stands out:
for EN idiom detection, test accuracy is higher than validation accuracy, while we observe the opposite
![6_image_0.png](6_image_0.png)
Form Lang Valid Test Lang Valid Test
Hyperbole EN 87.00 82.00 ZH 83.00 82.60
Idiom EN 70.07 86.00 DE 97.01 79.00
ES 91.68 67.84 IT 94.40 76.00
Metaphor EN 83.06 83.06 ES 83.54 83.10
FA 78.30 78.14 RU 82.78 83.16
Table 4: Results (accuracy) of our main model (Prompt mT5 + multitask) on validation and test sets.
Lang Valid Test
Literal Idiomatic Literal Idiomatic
EN 34.83 49.12 48.78 62.26 DE 2.16 97.61 63.16 65.75
ES 18.17 97.19 13.64 30.83
IT 5.88 98.28 85.42 68.42
in languages other than English, with validation scores above 90% and test scores below 80%.
To analyze this behaviour of the idiom task, in Table 5 we report the ratio of idiomatic expressions contained in sentences of the validation/test sets that also appear in idiomatic sentences of the training sets. The distribution of the EN data is relatively balanced for both validation and test. For other languages, most of the expressions in the idiomatic sentences of the validation set are already present in the training set, but this is not the case for the literal sentences. Regarding test sets, the ratio of DE and IT are very high for both literal and idiomatic sentences, but very low for ES, which poses a significant challenge to the model.
We also group both the predictions and the labels to produce the confusion matrices for the main and EN models in Figure 3. For the EN task, we see that scores on the main diagonal are higher than those on the secondary diagonal, except for the EN model in the test set (39.02 vs 60.98). This is commonly observed in binary classification experiments. We have different observations on in-language training and zero-shot in other languages: (i) from (a), we see that the in-language model performs very well on the validation set for literal and idiomatic sentences, while it overpredicts literal sentences on the test set; (ii) in contrast, EN models in (b) perform better in the test sets and they overpredict idiomatic sentences on the validation set.
Generally, a sentence might not be idiomatic as a whole although it contains idioms. Such sentences will be labelled as non-literal in the automatic dataset creation. Based on the above observations, we see that the distribution of the automatically created training and validation data is quite different from the manually created test set, and the quality of the former is much lower than that of the latter. The nature of training data actually affects the stability of the model on different tasks. For instance, this even leads to better performances using EN models (zero-shot) on ES and IT than using in-language-trained models (see Table 3).
Cross-Figurative Knowledge Transfer To further investigate cross-figurative knowledge transfer, we sample different figures of speech for two languages from our dataset and compare singleto multi-figurative language modelling. Table 6 shows the results for EN and ES. For EN, com-
![7_image_0.png](7_image_0.png)
| Model | Lang | Hyperbole | Idiom | Metaphor |
|-------------|--------|-------------|---------|------------|
| Prompt mT5 | EN | 81.00 | 79.50 | 83.43 |
| + multitask | 82.33 | 81.50 | 81.27 | |
| Prompt mT5 | ES | - | 68.34 | 82.66 |
| + multitask | - | 70.35 | 82.14 | |
pared to single figurative form models, we see that multitask modelling yields further improvements in hyperbole and idiom but hurts metaphor. Similarly, when combining information on both idioms and metaphors for ES, extra information about idioms hurts metaphor detection slightly, while extra information about metaphors helps idioms. We suggest two main reasons for these observations:
(i) performance improvements in hyperbole and idioms are enhanced by the transfer of knowledge from metaphors; (ii) The low-quality idiom training data, as discussed earlier in this section, negatively impacts the accuracy of metaphor detection.
While incorporating information from hyperbole data could potentially be beneficial, the limited amount of such data might not be enough to bring any benefit.
Cross-Lingual Knowledge Transfer We use the model trained in one language to run the zero-shot experiments on the other languages and model all languages jointly for each figure of speech. Figure 4 shows the results for cross-lingual experiments. Zero-shot has moderate detection accuracy on hyperbole and metaphor with scores greater than 68% for all languages, confirming that figurative knowledge is transferable across languages.
![7_image_2.png](7_image_2.png)
EN ZH EN DE ES IT EN ES FA RU
![7_image_1.png](7_image_1.png)
(a) Results between templates A and B.
EN ZH EN DE ES IT EN ES FA RU
(b) Results between templates A and C.
EN ZH EN DE ES IT EN ES FA RU
(c) Results between cross-lingual and in-lingual templates (A
VS D).
In idiom detection, it is unsurprising to see that zero-shot performs poorly, e.g. the accuracy of the DE model on EN idiom detection is only 29.5%
considering that cultural specificities of idioms might hamper cross-lingual generalization more than for other figures of speech. Still, multilingual modelling brings performance improvements on most tasks in different languages, including idioms.
Overall, figurative language detection can benefit from multilingual modelling, and the zero-shot technique can be used for hyperbole and metaphor detection when lacking resources in the target language, but not, in most cases, for idiom detection.
Impact of Prompt Although prompt learning has been shown an effective method for many NLP
| Task Lang | Prompt Lang | # | Prompt Template |
|-------------|-------------------------------------------------------------------------------------------|--------------------------------------------------------------------------------------------|-------------------------------------------------------|
| A | Which figure of speech does this text contain? (A) Literal. (B) [TASK]. | Text: [IT-text] | | |
| B | Is there a(n) [TASK] in this text? | Text: [IT-text] | | |
| Italian | English | C | Does this text contain a(n) [TASK]? | Text: [IT-text] |
| Italian | D | Quale figura retorica contiene questo testo? (A) Letterale. (B) [TASK]. | Testo: [IT-text] | |
tasks, it usually requires extensive prompt engineering on template design as it is sensitive to different tasks. Following Fu et al. (2022), we assess the prompt effect with different templates and languages. In Table 7, we show a set of crosslingual and in-lingual prompt templates. In the cross-lingual prompt setting, all templates are written in English, here we experiment with two other templates (B and C) besides the one used in our main results (A). In the in-lingual prompt setting, instead, the language of the template is consistent with the task language. Template D in Table 7, for example, is an in-lingual prompt translated from A
and used for Italian tasks.
Figure 5 shows the relative performance differences, where we subtract the performance of the model with other prompts from the model with prompt A. In the cross-lingual setting, we see that the performances of models with different prompt templates are very close, with an accuracy difference of less than 5 percentage points on all tasks except for EN and DE idiom using template A and C. Interestingly, the English template does not hurt performances in other languages (Figure 5(c)).
These results suggest that a model based on prompt learning for multilingual multi-figurative language detection is not particularly sensitive to different templates.
## 6 Conclusions
We introduced a multilingual multi-figurative language understanding benchmark that focuses on sentence-level figurative language detection, involving three common figures of speech and seven languages. Based on prompt learning, we proposed a framework to unify the interrelated detection tasks across multiple figures of speech and languages using a PLM, while having no task- or language-specific modules. We further analyzed the generalization of the model across different figures of speech and languages.
Our unified model benefits from cross-lingual and cross-figurative knowledge transfer in sentencelevel detection. It is natural to explore fine-grained detection at the word-level in future work, as well as language generation in multilingual and multifigurative scenarios. This approach can also serve as a blueprint for the joint modelling of other interrelated tasks.
## 7 Limitations And Impact
While introducing a framework which deals with multiple languages and multiple figures of speech, this work is still only dealing with three figures of speech and seven languages. Many more phenomena and languages can still bring substantial challenges and insights if considered (once the data availability bottleneck is addressed). Also, we deal with figurative language as labelled at the sentence level, but the word level is also not only interesting but important for broader natural language understanding and could yield different insights than those observed in the present work.
We only mention in passing the influence that different cultural contexts have on figurative usages, and we make some observations on idioms, but this aspect would require a much bigger unpacking. We actually believe that (failure) of cross-lingual computational models can be an excellent diagnostic tool towards a finer-grained analysis of the interplay between culture(s) and figurative language.
We propose a successful method based on prompt learning and present experiments using a specific pre-trained model. Choosing different
(and possibly larger) models and investigating even more than what we already do in this paper the influence of specific prompts would also be necessary to further generalise the efficacy of our approach.
Finally, as with most language technology, the limitations of our approach, also in terms of accuracy (especially for some phenomena and some languages), could lead to substantial inaccuracies which could be propagated in further processing.
Considering that figures of speech are associated with emotional language, a word of warning is necessary regarding the direct deployment of our models. We do hope that writing about risks explicitly and also raising awareness of this possibility in the general public are ways to contain the effects of potential harmful consquences. We are open to any discussion and suggestions to minimise such risks.
## Acknowledgments
This work was partly funded by the China Scholarship Council (CSC). The anonymous reviewers of ACL 2023 provided us with useful comments which contributed to improving this paper and its presentation, so we're grateful to them. We would also like to thank the Center for Information Technology of the University of Groningen for their support and for providing access to the Peregrine high performance computing cluster.
## References
Ehsan Aghazadeh, Mohsen Fayyaz, and Yadollah Yaghoobzadeh. 2022. Metaphors in pre-trained language models: Probing and generalization across datasets and languages. In *Proceedings of the 60th* Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2037–
2050, Dublin, Ireland. Association for Computational Linguistics.
Beata Beigman Klebanov, Ekaterina Shutova, Patricia Lichtenstein, Smaranda Muresan, and Chee Wee, editors. 2018. *Proceedings of the Workshop on Figurative Language Processing*. Association for Computational Linguistics, New Orleans, Louisiana.
George Broadwell, Umit Boz, Ignacio Cases, Tomek Strzalkowski, Laurie Feldman, Sarah Taylor, Samira Shaikh, Ting Liu, Kit Cho, and Nick Webb. 2013.
Using imageability and topic chaining to locate metaphors in linguistic corpora. volume 7812, pages 102–110.
Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel Ziegler, Jeffrey Wu, Clemens Winter, Chris Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020.
Language models are few-shot learners. In *Advances in Neural Information Processing Systems*,
volume 33, pages 1877–1901. Curran Associates, Inc.
Luana Bulat, Stephen Clark, and Ekaterina Shutova.
2017. Modelling metaphor with attribute-based semantics. In *Proceedings of the 15th Conference of*
the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers, pages 523–528, Valencia, Spain. Association for Computational Linguistics.
Tommaso Caselli, Valerio Basile, Jelena Mitrovic, Inga ´
Kartoziya, and Michael Granitzer. 2020. I feel offended, don't be abusive! implicit/explicit messages in offensive and abusive language. In Proceedings of the Twelfth Language Resources and Evaluation Conference, pages 6193–6202, Marseille, France. European Language Resources Association.
Minjin Choi, Sunkyung Lee, Eunseong Choi, Heesoo Park, Junhyuk Lee, Dongwon Lee, and Jongwuk Lee.
2021. MelBERT: Metaphor detection via contextualized late interaction using metaphorical identification theories. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1763–1773, Online. Association for Computational Linguistics.
Alexis Conneau and Guillaume Lample. 2019. Crosslingual language model pretraining. In *Advances in* Neural Information Processing Systems, volume 32.
Curran Associates, Inc.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In *Proceedings of the 2019 Conference of* the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics.
Jinlan Fu, See-Kiong Ng, and Pengfei Liu. 2022. Polyglot prompt: Multilingual multitask promptraining.
In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing.
Ge Gao, Eunsol Choi, Yejin Choi, and Luke Zettlemoyer. 2018. Neural metaphor detection in context.
In *Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing*, pages 607–613, Brussels, Belgium. Association for Computational Linguistics.
Debanjan Ghosh, Beata Beigman Klebanov, Smaranda Muresan, Anna Feldman, Soujanya Poria, and Tuhin Chakrabarty, editors. 2022. *Proceedings of the* 3rd Workshop on Figurative Language Processing
(FLP). Association for Computational Linguistics, Abu Dhabi, United Arab Emirates (Hybrid).
Sarah Harmon. 2015. Figure8: A novel system for generating and evaluating figurative language. In Proceedings of the Sixth International Conference on Computational Creativity, pages 71–77.
Devamanyu Hazarika, Soujanya Poria, Sruthi Gorantla, Erik Cambria, Roger Zimmermann, and Rada Mihalcea. 2018. CASCADE: Contextual sarcasm detection
in online discussion forums. In Proceedings of the 27th International Conference on Computational Linguistics, pages 1837–1848, Santa Fe, New Mexico, USA. Association for Computational Linguistics.
Jihen Karoui, Farah Benamara, Véronique Moriceau, Viviana Patti, Cristina Bosco, and Nathalie AussenacGilles. 2017. Exploring the impact of pragmatic phenomena on irony detection in tweets: A multilingual corpus study. In *Proceedings of the 15th* Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers, pages 262–272, Valencia, Spain. Association for Computational Linguistics.
Diederik P. Kingma and Jimmy Ba. 2015. Adam: A
method for stochastic optimization. In *International* Conference on Learning Representations.
Beata Beigman Klebanov, Ekaterina Shutova, Patricia Lichtenstein, Smaranda Muresan, Chee Wee, Anna Feldman, and Debanjan Ghosh, editors. 2020. *Proceedings of the Second Workshop on Figurative Language Processing*. Association for Computational Linguistics, Online.
Li Kong, Chuanyi Li, Jidong Ge, Bin Luo, and Vincent Ng. 2020. Identifying exaggerated language. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP),
pages 7024–7034, Online. Association for Computational Linguistics.
Maximilian Köper and Sabine Schulte im Walde. 2016.
Distinguishing literal and non-literal usage of German particle verbs. In *Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies*, pages 353–362, San Diego, California. Association for Computational Linguistics.
Huiyuan Lai and Malvina Nissim. 2022. Multifigurative language generation. In Proceedings of the 29th International Conference on Computational Linguistics, pages 5939–5954, Gyeongju, Republic of Korea. International Committee on Computational Linguistics.
Brian Lester, Rami Al-Rfou, and Noah Constant. 2021.
The power of scale for parameter-efficient prompt tuning. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 3045–3059, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Xiang Lisa Li and Percy Liang. 2021. Prefix-tuning:
Optimizing continuous prompts for generation. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 4582–
4597, Online. Association for Computational Linguistics.
Changsheng Liu and Rebecca Hwa. 2018. Heuristically informed unsupervised idiom usage recognition.
In *Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing*, pages 1723–1731, Brussels, Belgium. Association for Computational Linguistics.
Pengfei Liu, Weizhe Yuan, Jinlan Fu, Zhengbao Jiang, Hiroaki Hayashi, and Graham Neubig. 2022. Pretrain, prompt, and predict: A systematic survey of prompting methods in natural language processing.
ACM Comput. Surv. Just Accepted.
Rui Mao, Chenghua Lin, and Frank Guerin. 2019. Endto-end sequential metaphor identification inspired by linguistic theories. In *Proceedings of the 57th Annual Meeting of the Association for Computational* Linguistics, pages 3888–3898, Florence, Italy. Association for Computational Linguistics.
Swaroop Mishra, Daniel Khashabi, Chitta Baral, and Hannaneh Hajishirzi. 2022. Cross-task generalization via natural language crowdsourcing instructions.
In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 3470–3487, Dublin, Ireland.
Association for Computational Linguistics.
Michael Mohler, Mary Brunson, Bryan Rink, and Marc Tomlinson. 2016. Introducing the LCC metaphor datasets. In Proceedings of the Tenth International Conference on Language Resources and Evaluation
(LREC'16), pages 4221–4227, Portorož, Slovenia.
European Language Resources Association (ELRA).
Malvina Nissim and Katja Markert. 2003. Syntactic features and word similarity for supervised metonymy resolution. In *Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics*, pages 56–63, Sapporo, Japan. Association for Computational Linguistics.
Geoffrey Nunberg, Ivan A Sag, and Thomas Wasow.
1994. Idioms. *Language*, 70:491–538.
John Pavlopoulos, Jeffrey Sorensen, Lucas Dixon, Nithum Thain, and Ion Androutsopoulos. 2020. Toxicity detection: Does context really matter? In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*, pages 4296–
4305, Online. Association for Computational Linguistics.
Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018. Improving language understanding by generative pre-training.
Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners.
Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. *Journal of Machine Learning Research*, 21(140):1–67.
Richard M. Roberts and Roger J. Kreuz. 1994. Why do people use figurative language? Psychological Science, 5(3):159–163.
Omid Rohanian, Shiva Taslimipoor, Richard Evans, and Ruslan Mitkov. 2018. WLV at SemEval-2018 task 3: Dissecting tweets in search of irony. In *Proceedings of the 12th International Workshop on Semantic* Evaluation, pages 553–559, New Orleans, Louisiana.
Association for Computational Linguistics.
Ekaterina Shutova, Lin Sun, and Anna Korhonen. 2010.
Metaphor identification using verb and noun clustering. In *Proceedings of the 23rd International Conference on Computational Linguistics (Coling 2010)*,
pages 1002–1010, Beijing, China. Coling 2010 Organizing Committee.
Chuandong Su, Fumiyo Fukumoto, Xiaoxi Huang, Jiyi Li, Rongbo Wang, and Zhiqun Chen. 2020. DeepMet:
A reading comprehension paradigm for token-level metaphor detection. In *Proceedings of the Second* Workshop on Figurative Language Processing, pages 30–39, Online. Association for Computational Linguistics.
Harish Tayyar Madabushi, Edward Gow-Smith, Marcos Garcia, Carolina Scarton, Marco Idiart, and Aline Villavicencio. 2022. SemEval-2022 task 2: Multilingual idiomaticity detection and sentence embedding.
In *Proceedings of the 16th International Workshop* on Semantic Evaluation (SemEval-2022), pages 107–
121, Seattle, United States. Association for Computational Linguistics.
Simone Tedeschi, Federico Martelli, and Roberto Navigli. 2022. ID10M: Idiom identification in 10 languages. In *Findings of the Association for Computational Linguistics: NAACL 2022*, pages 2715–2726, Seattle, United States. Association for Computational Linguistics.
Serra Sinem Tekiroglu, Gözde Özbal, and Carlo Strappa- ˘
rava. 2015. Exploring sensorial features for metaphor identification. In *Proceedings of the Third Workshop* on Metaphor in NLP, pages 31–39, Denver, Colorado.
Association for Computational Linguistics.
Yufei Tian, Arvind krishna Sridhar, and Nanyun Peng.
2021. HypoGen: Hyperbole generation with commonsense and counterfactual knowledge. In *Findings of the Association for Computational Linguistics: EMNLP 2021*, pages 1583–1593, Punta Cana, Dominican Republic. Association for Computational Linguistics.
Enrica Troiano, Carlo Strapparava, Gözde Özbal, and Serra Sinem Tekiroglu. 2018. ˘ A computational exploration of exaggeration. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3296–3304, Brussels, Belgium. Association for Computational Linguistics.
Yulia Tsvetkov, Leonid Boytsov, Anatole Gershman, Eric Nyberg, and Chris Dyer. 2014. Metaphor detection with cross-lingual model transfer. In *Proceedings of the 52nd Annual Meeting of the Association*
for Computational Linguistics (Volume 1: Long Papers), pages 248–258, Baltimore, Maryland. Association for Computational Linguistics.
Zeerak Waseem and Dirk Hovy. 2016. Hateful symbols or hateful people? predictive features for hate speech detection on Twitter. In *Proceedings of the NAACL*
Student Research Workshop, pages 88–93, San Diego, California. Association for Computational Linguistics.
Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Transformers: State-of-the-art natural language processing.
In *Proceedings of the 2020 Conference on Empirical* Methods in Natural Language Processing: System Demonstrations, pages 38–45, Online. Association for Computational Linguistics.
Chuhan Wu, Fangzhao Wu, Yubo Chen, Sixing Wu, Zhigang Yuan, and Yongfeng Huang. 2018. Neural metaphor detecting with CNN-LSTM model. In *Proceedings of the Workshop on Figurative Language* Processing, pages 110–114, New Orleans, Louisiana.
Association for Computational Linguistics.
Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, and Colin Raffel. 2021. mT5: A massively multilingual pre-trained text-to-text transformer. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics:
Human Language Technologies, pages 483–498, Online. Association for Computational Linguistics.
Rowan Zellers, Ari Holtzman, Hannah Rashkin, Yonatan Bisk, Ali Farhadi, Franziska Roesner, and Yejin Choi. 2019. Defending against Neural Fake News. Curran Associates Inc., Red Hook, NY, USA.
Ziheng Zeng and Suma Bhat. 2021. Idiomatic expression identification using semantic compatibility.
Transactions of the Association for Computational Linguistics, 9:1546–1562.
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
✓ A1. Did you describe the limitations of your work?
7
✓ A2. Did you discuss any potential risks of your work?
7
✓ A3. Do the abstract and introduction summarize the paper's main claims?
abstract and Sec. 1
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✓ **Did You Use Or Create Scientific Artifacts?** 3.2
✓ B1. Did you cite the creators of artifacts you used?
3.2
✗ B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
The datasets we used are freely available and were created for the same tasks that we use them for.
✗ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
The datasets we used are freely available and were created for the same tasks that we use them for.
✗ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
The datasets are commonly used for this task.
✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
3.2
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
3.2, Table 2
## C ✓ **Did You Run Computational Experiments?** 4 And 5
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
partly in Sec. 5.1 The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance.
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? partly in Sec. 5.1
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
5.3
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
5.1 D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
Not applicable. Left blank.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? Not applicable. Left blank.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
Not applicable. Left blank.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
Not applicable. Left blank.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
Not applicable. Left blank. |
du-etal-2023-zero | Zero-shot Visual Question Answering with Language Model Feedback | https://aclanthology.org/2023.findings-acl.590 | In this paper, we propose a novel language model guided captioning approach, LAMOC, for knowledge-based visual question answering (VQA). Our approach employs the generated captions by a captioning model as the context of an answer prediction model, which is a Pre-Trained Language model (PLM). As the major contribution, we leverage the guidance and feedback of the prediction model to improve the capability of the captioning model. In this way, the captioning model can become aware of the task goal and information need from the PLM. To develop our approach, we design two specific training stages, where the first stage adapts the captioning model to the prediction model (selecting more suitable caption propositions for training) and the second stage tunes the captioning model according to the task goal (learning from feedback of the PLM). Extensive experiments demonstrate the effectiveness of the proposed approach on the knowledge-based VQA task. Specifically, on the challenging A-OKVQA dataset, LAMOC outperforms several competitive zero-shot methods and even achieves comparable results to a fine-tuned VLP model. Our code is publicly available at \url{https://github.com/RUCAIBox/LAMOC}. | # Zero-Shot Visual Question Answering With Language Model Feedback
Yifan Du1,4**, Junyi Li**1,3, Tianyi Tang1**, Wayne Xin Zhao**1,4 B and **Ji-Rong Wen**1, 2,4 1Gaoling School of Artificial Intelligence, Renmin University of China 2School of Information, Renmin University of China 3DIRO, Université de Montréal 4Beijing Key Laboratory of Big Data Management and Analysis Methods
{yifandu1999, batmanfly}@gmail.com [email protected], [email protected]
## Abstract
In this paper, we propose a novel language model guided captioning approach, L**AMOC**,
for knowledge-based visual question answering (VQA). Our approach employs the generated captions by a captioning model as the context of an answer prediction model, which is a Pre-trained Language model (PLM). As the major contribution, we leverage the guidance and feedback of the prediction model to improve the capability of the captioning model. In this way, the captioning model can become aware of the task goal and information need from the PLM. To develop our approach, we design two specific training stages, where the first stage adapts the captioning model to the prediction model (selecting more suitable caption propositions for training) and the second stage tunes the captioning model according to the task goal
(learning from feedback of the PLM). Extensive experiments demonstrate the effectiveness of the proposed approach on the knowledgebased VQA task. Specifically, on the challenging A-OKVQA dataset, LAMOC outperforms several competitive zero-shot methods and even achieves comparable results to a fine-tuned VLP model. Our code is publicly available at https://github.com/RUCAIBox/LAMOC.
## 1 Introduction
Recently, pre-trained language models (PLMs) (Devlin et al., 2019; Brown et al., 2020), especially large language models (Zhao et al., 2023) have demonstrated excellent capabilities in solving tasks that require background knowledge or complex reasoning, such as commonsense reasoning (Sap et al., 2019; Rajani et al., 2019) and logical reasoning (Wei et al., 2022; Kojima et al., 2022). Inspired by these successes, recent studies have proposed utilizing PLMs1to solve complex vision-language B Corresponding author.
1In this paper, PLMs refer to the models trained on textonly corpus, instead of the text encoder/decoder in visionlanguage pre-trained (VLP) models, which typically have a weaker reasoning capacity in linguistic content.
![0_image_0.png](0_image_0.png)
![0_image_1.png](0_image_1.png)
cupcakes?
Figure 1: An example that a captioning model (BLIP)
fails to provide suitable descriptions for a prediction model (FLAN-T5) of a question in A-OKVQA dataset.
tasks, exemplified by the task of knowledge-based visual question answering (VQA) that aims to answer open-ended questions given an image based on outside knowledge (Schwenk et al., 2022). It has been shown that PLM-enhanced approaches (Gui et al., 2022; Lin et al., 2022) typically lead to an improved performance on the knowledge-based VQA
task than pure vision-language pre-trained (VLP)
models (Schwenk et al., 2022).
In the literature, existing PLM-enhanced VQA
approaches can be roughly categorized into two lines. The first line of research focuses on adapting PLMs to the vision modality by introducing specific modular networks or training objectives (Tsimpoukelli et al., 2021; Liang et al., 2022; Alayrac et al., 2022). However, they usually incur a high computational cost during pre-training in order to effectively integrate a vision encoder into the PLM. As another line of research, several studies aim to reduce the cost of tuning PLMs in visionlanguage tasks by utilizing PLMs in a zero-shot or few-shot manner. They typically generate a caption for an image using a captioning model (*e.g.,* a fine-tuned VLP model), and employ the generated caption as the context (*e.g.,* prompt) to assist PLMs in question answering (Yang et al., 2022; Tiong et al., 2022; Guo et al., 2022). Such an approach is training-free and can be generally applied with
## Various Plms.
However, in these existing zero-shot or few-shot methods, the captioning model is unaware of both task goal and *information need* for the integrated PLM. They directly reuse the captioning model fine-tuned on caption datasets. As a result, the generated captions tend to be less informative for the VQA task, even irrelevant to the question. Figure 1 presents an example that an inappropriate caption leads to an incorrect answer generated by the PLM.
As we can see, the question is highly related to keywords "*icing*" or "*frosting*", while the captioning model misses these information and generates a general description.
To address this issue, we propose L**AMOC**: a novel LAnguage MOdel guided Captioning approach for the VQA task. The key idea is to leverage the guidance and feedback of the prediction model (*i.e.,* the PLM) to improve the capability of the captioning model, so that it can be aware of the task goal and information need, and assist the prediction model in answer prediction. Our approach is specially designed with two gradual training stages. At the first stage, the captioning model is trained to align to the prediction model, in which the prediction model selects captions that are more pertinent to a given question from multiple propositions generated by the captioning model.
These selected captions are informative and can be used to fine-tune the captioning model to generate informative captions. At the second stage, since the generated caption is used by the PLM as direct evidence for VQA, we employ the feedback from the PLM as reward signals to train the captioning model via reinforcement learning. During training, only the captioning model is tuned while the PLM
is fixed, which significantly reduces the computational costs. Meanwhile, since the feedback is from PLM, both training stages do not require any labeled data.
Our contributions can be summarized as follows:
(1) We propose L**AMOC**, a novel approach for training captioning models to generate informative captions that can assist PLMs in VQA tasks; (2) Using a small number of randomly sampled unlabeled
(image, question) pairs, L**AMOC** consistently outperforms several competitive zero/few-shot baselines without PLM feedback on two knowledgebased VQA datasets: OK-VQA and A-OKVQA;
(3) We have demonstrated the effectiveness of our method on PLMs of varying scales, from 223M
to 11B. This not only confirms the robustness of our approach but also demonstrates its potential for generalization to Large Language Models (LLMs).
## 2 Related Work
PLMs for VQA. After training on large corpora, PLMs exhibit surprising abilities, such as chainof-thought reasoning (Wei et al., 2022), in-context learning (Brown et al., 2020), and instruction following (Chung et al., 2022), which cannot be obtained by vision-language pre-training. Thus, some works adopt PLM to perform VQA and obtain promising results. One line of research combines a PLM and a vision encoder and trains them endto-end. Frozen (Tsimpoukelli et al., 2021) and Liang et al. (2022) train a visual encoder or a modular network and keep the PLM frozen to retain its powerful abilities. Flamingo (Alayrac et al., 2022)
elaborates the model architecture to combine the vision and language models and scales the model size to 80B. Another line of research tries to deploy PLMs on VQA tasks in a few-shot/zero-shot manner. PICa (Yang et al., 2022) and Img2Prompt (Guo et al., 2022) translate the image to captions or tags and employ GPT-3 to answer a question by incontext learning. PNP-VQA (Tiong et al., 2022)
generates question-related captions and utilizes a QA model (Khashabi et al., 2022) for answer prediction. This type of work does not require extra training and can be adapted to new PLMs. Our work follows the second paradigm and is an extension of these works.
Learning from Feedback. A regular paradigm to train a model is defining a loss function and optimizing it. However, certain objectives, such as coherence, diversity, and toxicity in text generation, may not be easily incorporated into the loss function and learned in an end-to-end manner (Paulus et al., 2018; Pang and He, 2021). Thus, explicit feedback on model output is regarded as a learning signal to assist in training. Campos and Shern (2022) utilize a PLM's refinement and human feedback to fine-tune a summary model. Wang et al. (2022c) leverage compiler feedback to improve the compilability of programs generated by the language model. Ouyang et al. (2022) align a language model with the user's intention through reinforcement learning from human feedback. We borrow idea from these works, but our feedback comes from a PLM instead of humans, thus saving the annotation cost.
## 3 Method
In this section, we present the proposed LAMOC:
LAnguage MOdel guided Captioning method for VQA. The overall architecture of LAMOC is depicted in Figure 2.
## 3.1 Overview Of Our Approach
In this work, we study the task of visual question answering (VQA). Given an image-question pair x :
⟨xi, xq⟩, the task goal is to predict a correct answer y to the question xq given the image xi. Following prior studies (Yang et al., 2022; Tiong et al., 2022),
we adopt a captioning-based approach for VQA,
in which a captioning model generates auxiliary captions for helping answer prediction. Formally, we represent the above idea in a probabilistic way:
$$\begin{array}{l l}{{}}&{{p(y|x_{i},x_{q})}}\\ {{}}&{{\sum_{z\in\mathcal{Z}}\underbrace{p(z|x_{i},x_{q};\Theta_{C})}_{\mathrm{caption~generation}}\cdot\underbrace{p(y|x_{q},z;\Theta_{P})}_{\mathrm{answer~prediction}},}}\end{array}$$
$=\;\frac{1}{2}$ .
=X
where a captioning model ΘC firstly generates an auxiliary captions z, and then a prediction model ΘP predicts an answer candidate y based on the caption z and the question xq. We evaluate this probability by iterating over a set of generated captions. Here, we consider an unsupervised setting:
no labeled answer data is available. Although there is no labeled answers, we assume that a small number of image-question pairs can be obtained for training (no overlapping with the task dataset).
To instantiate this probabilistic approach, we adopt a vision-language pre-trained (VLP) model, i.e., BLIP (Li et al., 2022b), as the captioning model, and a pre-trained language model (PLM),
i.e., FLAN-T5-XXL (Chung et al., 2022), as the prediction model. The prediction model ΘP is expected to fulfill the task by accurately predicting the answer, while the captioning model ΘC plays an assisted role by providing informative evidence for ΘP . In our approach, the captioning model ΘC can be tuned while the prediction model ΘP is fixed during optimization. By leveraging the unlabeled image-question pairs (without the labeled answers), we let the two models cooperate with each other: the captioning model generates informative evidence for helping answer prediction, and the prediction model provides task-specific guidance and feedback to improve the captioning model.
To optimize our approach, we design a gradual training process including two stages: (1) captioning adaptation aims to adjust ΘC to produce informative captions that are suitable for ΘP (§3.2.1),
and (2) *feedback-based learning* aims to optimize ΘC according to task-specific feedback from ΘP (§3.2.2). Once the captioning model is well trained, we employ the prediction model for predicting the final answer as in Eq. (1), based on the captions provided by the captioning model (§3.3). Next, we introduce these parts in details.
## 3.2 Language Model Guided Captioning
The key of our approach (Eq. (1)) is to train an effective captioning model ΘC for improving the capability of the prediction model ΘP on VQA.
Considering that there are no labeled answers, we employ the prediction model to provide guidance and feedback to optimize the captioning model.
## 3.2.1 Captioning Adaptation
Since the captioning model is originally intended to describe the given image, it may not be in suited form to assist the prediction model. Thus, we propose a captioning adaptation strategy that tunes the captioning model to fit the prediction model.
Caption Propositions. We first sample n imagequestion pairs from VQAv2 (Goyal et al., 2017),
which is a large VQA dataset containing more than 1M questions and does not overlap with our task dataset. Then we employ the captioning model to propose k captions for each image by nucleus sampling (Holtzman et al., 2019). Among these captions, some may be better suited for the prediction model than the rest. We would like to identify such captions and use them to refine the captioning model.
Instruction-based Captions Selection. Since the prediction model is developed based on the FLANT5-XXL, it has encoded a large amount of knowledge in a massive number of parameters. We design the following instruction to prompt FLAN-T5-XXL to identify more informative captions:
"Question: [QUESTION] *Caption:* [CAPTION]\n To what degree does the caption relate to the question:\n A: 0%\n B: 25%\n C: 50%\n D:75%".
Given the above prompt, FLAN-T5-XXL will generate a corresponding option among the set
{*A, B, C, D*}. Such an option reflects the correlation between the caption and question, and the captions with the predicted option "*D:75%*" are more relevant to the question. Since the options are made by the prediction model itself, they tend to be
![3_image_0.png](3_image_0.png)
more useful for answer prediction. Thus, we keep the captions with the predicted option "*D:75%*"
and discard the rest.
Captioning Model Fine-tuning. Via the above caption selection, we can obtain a set of more informative captions, which are judged by the prediction model. Further, we use them to fine-tune the captioning model by optimizing the following cross-entropy loss:
$${\mathcal{L}}_{F T}=-{\frac{1}{T}}\sum_{t=1}^{T}\log p(z_{t}|x_{i},z_{<t}),\qquad(2)$$
where T is the length of caption, zt denotes the t-th token of the informative caption selected by FLANT5-XXL, z<t represents the generated token up to the (t−1)-th step. After fine-tuning, the captioning model can be better suited for the prediction model.
## 3.2.2 Feedback-Based Learning
Though adapting to the prediction model, the captioning model is still unaware of the answer prediction task for VQA. Thus, we further propose construct pseudo supervision signals based on the PLM feedback from the prediction model. Since the captioning model is only involved as an intermediate component for answer prediction, we design a reinforcement learning method for optimizing it.
Reward From PLM Feedback. A key design consideration of reinforcement learning is the definition of the reward function. In our approach, instead of only generating relevant captions for the images, the effectiveness of the captioning model should be measured by how well it helps find the correct answer. To achieve this goal, we design the following two kinds of reward signals.
- *Prompt-based Reward:* A heuristic method is utilizing the prompt in §3.2.1 to instruct FLAN-T5-
XXL to obtain a relevance score, and regard this relevance score as the reward signal:
r(xq, z) = arg max s∈{0,0.25,0.5,0.75} p(s|xq, z; ΘP ), (3)
A higher score indicates a more informative caption, which is encouraged.
- *Confidence-based Reward:* Since there is no ground-truth answer during training, following Eq.(1), we employ the probability score of the predicted answer (the most confident candidate) given by the prediction model as the reward:
$$r(x_{q},z)=p(\hat{y}|x_{q},z;\Theta_{P}),\qquad\qquad(4)$$
where z is the generated caption by the captioning model and yˆ is the predicted answer from the prediction model. In this way, the PLM (*i.e.,* the prediction model) can inform the captioning model about the informativeness of the generated caption:
the larger probability score, the more informative a caption is, and vice versa. We will verify the reliability of these reward designs in §5.1.
Policy Gradient. In the framework of reinforcement learning, caption generation can be viewed as a sequential decision-making process over the whole vocabulary space. Each generated caption with T tokens is treated as an individual episode of length T in this process. At the t-th time step, the state (xi, z<t) is the combination of the image and caption generated up to the (t − 1)-th token, and the action ztis the t-th token to be generated.
We employ the policy gradient algorithm (Sutton and Barto, 2018) and perform gradient descent to optimize the following objective function:
$$\mathcal{L}_{RL}=-\sum_{t=1}^{T}r(x_{q},z)\log p(z_{t}|x_{i},z_{<t};\Theta_{cap}),\tag{5}$$ where $z=\langle z_{1},...,z_{t},...,z_{T}\rangle$ is the caption, and
r(xq, z) is the reward given by the PLM. Finally, we jointly optimize the two loss functions:
$${\mathcal{L}}=(1-\alpha)\cdot{\mathcal{L}}_{F T}+\alpha\cdot{\mathcal{L}}_{R L},$$
where α is a weight factor to balance the two parts.
To fully exploit the online feedback provided by FLAN-T5-XXL, we only optimize the captioning adaptation loss function LF T in the initial epoch, while the reinforcement learning loss function LRL
is optimized throughout the training process.
## 3.3 Answer Prediction
At inference time, we utilize the updated captioning model to assist the prediction model in answering questions, by calculating the probability p(y|xq, z; ΘP ). To increase the diversity of captions and the coverage of answers, we first randomly sample 20% patches from the whole image at each time and apply top-k sampling (Fan et al.,
2018) to generate a caption for these patches with the updated captioning model. We repeat this process m times to generate m diverse captions. Then we concatenate each of them with the corresponding question to construct the following prompt:
"Please answer the following question.\n[CAPTION]. [QUESTION]".
Based on this prompt, the FLAN-T5-XXL is instructed to propose an answer with greedy decoding. We can take the max-voting strategy over all the generated answers.
Different from previous work on learning from feedback (Campos and Shern, 2022; Wang et al.,
2022c; Ouyang et al., 2022), our proposed approach explores the guidance and feedback from the prediction model instead of human annotations.
As we will see in §5.1, our empirical study shows that there exists a negative correlation between the negative log likelihood assigned by a PLM and the VQA score of a generated answer. This finding suggests that the reward r(xq, z) given by PLM can potentially serve as a substitute for labeled data to improve the captioning model for the VQA task.
## 4 Experiment
This section shows the experimental setup and then highlights the main conclusions of our results.
## 4.1 Experimental Setup
$$(6)$$
Task Datasets. Since our goal is to improve the performance of PLMs on visual commonsense tasks, we choose two knowledge-based VQA datasets to evaluate our method: (1) **OK-VQA** (Marino et al.,
2019) contains 5,046 questions in the test set that require external knowledge resources to answer.
(2) **A-OKVQA** (Schwenk et al., 2022) is an augmented dataset based on OK-VQA, which requires additional types of world knowledge compared to OK-VQA. Since the test set of A-OKVQA is not public, we evaluate our method on the validation set. We do not test on VQAv2 (Goyal et al., 2017)
because the majority of questions in this dataset are largely focused on recognition and simple visual detection tasks, which can be done without much logical reasoning or external knowledge, and a fine-tuned VLP model could obtain surprising results (Wang et al., 2022b,a). We do not use training data to make a fair comparison with other methods.
Baselines. We divide previous methods into two categories: (1) **Methods without extra largescale Vision-Language (V-L) pre-training**, which means the models have not been pre-trained on large-scale V-L datasets, including PICa (Yang et al., 2022), PNP-VQA (Tiong et al., 2022),
Img2Prompt (Guo et al., 2022). LAMOC also belongs to this category. (2) **Methods with extra**
large-scale V-L pre-training, which means that the PLM and the vision encoder are jointly trained on V-L datasets (although the PLM may be fixed, it obtains the ability to understand images), including VL-T5 (Cho et al., 2021), FewVLM (Jin et al.,
| Evaluation Setting | Method | Parameters | Use | With extra V-L | OK-VQA | A-OKVQA |
|-----------------------------------------------------|---------------|--------------|-------|------------------|----------|-----------|
| Extra PLM? | Pre-training? | test | val | | | |
| Models fine-tuned on training set. Supervised BLIP† | 226M | ✗ | ✗ | 37.6 | 38.5 | |
| learning | PromptCap | 175B | ✔ | ✗ | 58.8 | 58.0 |
| Models without fine-tuning. FewVLMbase | 288M | ✗ | ✔ | 15.0 | - | |
| Few-shot | FewVLMlarge | 804M | ✗ | ✔ | 23.1 | - |
| PICa | 175B | ✔ | ✗ | 48.0 | - | |
| VL-T5no-VQA | 288M | ✗ | ✔ | 5.8 | - | |
| VLKDViT-B/16 | 494M | ✔ | ✔ | 10.5 | - | |
| VLKDViT-B-L/14 | 713M | ✔ | ✔ | 13.3 | - | |
| Flamingo3B | 3B | ✔ | ✔ | 41.2 | - | |
| Flamingo9B | 9B | ✔ | ✔ | 44.7 | - | |
| Flamingo80B | 80B | ✔ | ✔ | 50.6 | - | |
| Frozen | 7B | ✔ | ✔ | 5.9 | - | |
| PNP-VQA3B | 3.9B | ✔ | ✗ | 34.1 | 33.4 | |
| PNP-VQA11B | 11.9B | ✔ | ✗ | 35.9 | 36.0 | |
| Img2Prompt6.7B | 8.3B | ✔ | ✗ | 38.2 | 33.3 | |
| Img2Prompt13B | 14.6B | ✔ | ✗ | 39.9 | 33.3 | |
| LAMOC11B (Ours) | 11.4B | ✔ | ✗ | 40.3 | 37.9 | |
| Zero-shot | | | | | | |
2022), VLKD (Dai et al., 2022), Frozen (Tsimpoukelli et al., 2021), and Flamingo (Alayrac et al.,
2022). The above methods do not use or use few labeled data (zero-shot/few-shot). Besides, we include two methods, *i.e.,* BLIP (Li et al., 2022b) and PromptCap (Hu et al., 2022), which are fine-tuned on large amounts of labeled data.
Implementation details. For image captioning, we adopt BLIP (Li et al., 2022b) with 446M parameters and load the released checkpoint that has been fine-tuned on the COCO 2014 training set (Lin et al., 2014), which has no overlap with both the OK-VQA and A-OKVQA evaluation datasets. For the PLM, we utilize FLAN-T5-XXL (Wei et al., 2022), which has been fine-tuned on more than 1,800 tasks through instructions and stores considerable world knowledge. We also carry out experiments on PLMs with other sizes, from 223M
to 11B parameters, to demonstrate the robustness and generalizability of our approach across PLMs with different sizes. It is noteworthy that the informative caption dataset used in the captioning adaptation stage is selected by FLAN-T5-XXL, because the relevance score given by smaller models is not reliable, as will be illustrated in §5.1. When training the captioning model, we select 1,000 (image, question) pairs without labels from VQAv2
(about 10% of the amount of training data for our target datasets), which has no overlap with the OKVQA and A-OKVQA. It is worth noting that these 1,000 image-question pairs can be sampled from any datasets or even be generated, we sample from VQAv2 for the sake of reproducibility. The answers are generated by the PLM auto-regressively, without access to the pre-defined answer list. We conduct experiments with 5 random seeds and report the average VQA score according to official evaluation protocols.
## 4.2 Main Results
Table 1 displays the results of our methods and baselines on OK-VQA and A-OKVQA.
First, LAMOC outperforms all the zero-shot baselines without V-L pre-training on both datasets.
Compared to previous state-of-the-art, LAMOC
achieves prominent gains on the challenging AOKVQA dataset (37.9 vs 36.0) and OK-VQA dataset (40.3 vs 39.9). Compared to these baselines, our approach does not require additional image-question matching or question generation modules, thus speeding up the inference speed.
Since Flamingo has been trained on a massive V-L
dataset, it achieves the best performance among zero-shot methods. It has been reported that largescale V-L pre-training can develop a mapping between images and knowledge concepts that can aid in knowledge-based VQA (Tiong et al., 2022).
Second, LAMOC narrows the gap between methods with and without fine-tuning, and even achieves comparable results with the fine-tuned VLP model, i.e., BLIP. For example, the performance gap between PNP-VQA11B and BLIP is 2.5, and has been decreased to 0.6 by LAMOC, which implies the importance of language model feedback.
Finally, we report the results of our methods with different model sizes in Table 2. When increasing the model scale from 223M to 11B, we observe a 1-2 point improvement in VQA scores on the challenging A-OKVQA dataset. This indicates that a larger PLM can not only store more world knowledge to assist with question answering, but also provide more accurate feedback to refine the captioning model. This is further supported by the ablation study in §5.1.
## 5 Analysis 5.1 The Reliability Of Feedback From Plm
![6_image_0.png](6_image_0.png)
The main idea of our work is leveraging the feedback of a PLM to guide caption generation, so a critical aspect is the reliability of the feedback. LAMOC involves two types of feedback: (1)
prompt-based reward and (2) confidence-based reward, which will be evaluated independently.
To evaluate the reliability of the first type of feedback, we analyze the relation between the VQA
score and the relevance score provided by the PLM
on A-OKVQA validation set (Figure 3(a)). We can observe that as the relevance score provided by FLAN-T5-XXL increases, the VQA score also increases, indicating that FLAN-T5-XXL is a suitable prediction model for providing accurate feedback and the relevance scores can be regarded as reward signals. However, this trend is not observed for the other three models, implying that their feedback is unreliable. As a result, we only use FLANT5-XXL to select informative captions during captioning adaptation.
To evaluate the reliability of the second type of feedback, we prompt FLAN-T5 to answer the question conditioned on the captions and plot the relationship between the negative log-likelihood
(NLL) of the generated answer and its corresponding VQA score. As Figure 3(b) shows, there is a negative correlation between the NLL of the generated answers and their VQA scores, suggesting that captions with lower NLL are more informative and relevant to the questions. Therefore, the probability of the generated answer is a reliable feedback and can be used as the reward signal during reinforcement learning.
## 5.2 The Effectiveness Of Two-Stage Training
When training the captioning model, we adopt two gradual training stages: captioning adaptation and feedback-based learning. In this part, we study the effectiveness of this training strategy and explore whether one training stage is more effective than the other. As illustrated in Table 2, different models benefit from different training objectives.
For example, the captioning adaptation stage is more beneficial for FLAN-T5-large, leading to an improvement of about 4 points on OK-VQA. On the other hand, FLAN-T5-XXL benefits the most from reinforcement learning with prompt-based rewards and obtains more than 4 points improvement on A-OKVQA. Moreover, the results show that jointly training the two objectives further boosts performance, highlighting the effectiveness of the proposed two-stage training approach.
## 5.3 Case Study
Figure 4 displays three instances of the captions generated by BLIP and LAMOC, along with the corresponding answers generated by FLAN-T5-XXL.
Since LAMOC is trained on the basis of BLIP, the difference can reflect the effect of our method. As can be observed, the captions generated by LAMOC are longer and more comprehensive, containing key information relevant to the question. For example, in Figure 4(a), LAMOC generates captions that include specific details such as "*frosting*" and "*chocolate*", while BLIP only generates general captions about "*donuts*" and "box", without sufficient infor-
| Method | FLAN-T5-base (223M) | FLAN-T5-large (738M) | FLAN-T5-XL (3B) | FLAN-T5-XXL (11B) | | | | |
|-------------------|-----------------------|------------------------|-------------------|---------------------|---------|--------|---------|-------|
| OK-VQA | A-OKVQA | OK-VQA | A-OKVQA | OK-VQA | A-OKVQA | OK-VQA | A-OKVQA | |
| BLIP caption | 20.42 | 19.46 | 23.86 | 28.86 | 32.36 | 31.21 | 38.48 | 35.06 |
| + adaptation | 19.72 | 18.71 | 27.43 | 29.19 | 32.22 | 31.07 | 38.35 | 35.30 |
| + RL (prompt) | 21.24 | 19.25 | 27.29 | 29.73 | 32.28 | 30.63 | 38.74 | 37.62 |
| + RL (confidence) | 21.14 | 19.74 | 25.09 | 28.98 | 32.02 | 32.10 | 40.31 | 37.85 |
| + adaptation + RL | 19.72 | 20.63 | 24.82 | 29.84 | 32.77 | 32.00 | 39.72 | 37.09 |
![7_image_0.png](7_image_0.png)
![7_image_1.png](7_image_1.png)
mation to help answer the question. These results highlight the importance of training the captioning model under the guidance of PLMs.
One concern is that the PLM may generate correct answers due to the language bias, not attributing to the relevant information contained in the captions. For example, in Figure 4(a), the PLM
may generate the answer "*chocolate*", even if the captions do not mention chocolate (Li et al., 2023).
However, since chocolate often co-occurs with donuts in the training corpora, the PLM may associate chocolate with donuts and generate it as the answer. In order to check how often such a situation happens, we randomly sample 100 questions where the prediction model gives correct answers.
For each question, we manually assess whether their answer is derived from the caption. Our analysis reveals that only 6 out of 100 captions are irrelevant to the questions, indicating the reliability of the captions.
Another interesting phenomenon is that the sentences generated by LAMOC can be grammatically incoherent and sometimes incomplete. This indicates that PLM prompting may not always conform to human language patterns, which is consistent with previous studies (Webson and Pavlick, 2022; Deng et al., 2022).
The ablation study of the level of relevance, the number of captions, and the influence of different prompt designs can be found in appendix B.
## 6 Conclusion
In this paper, we propose LAMOC, a language model guided captioning method that improves a captioning model to generate comprehensive captions for an image to help answer the question. In order to train such a model, we first perform captioning adaptation on a self-generated dataset filtered by FLAN-T5-XXL, and then finetune the updated captioning model through reinforcement learning from PLM feedback. Our method, LAMOC, generates captions that are both informative and able to assist PLMs in VQA
tasks, as demonstrated through experiments on two knowledge-based VQA datasets. On the challenging A-OKVQA dataset, LAMOC substantially outperforms previous zero-shot methods and even achieves comparable results to a fine-tuned VLP
model. Additionally, we show that LAMOC is generalizable to PLMs of varying sizes, from 223M to 11B parameters, demonstrating its potential to be applied to LLMs, which we leave as future work.
## 7 Limitations
In our study, we have demonstrated the effectiveness of our proposed method on FLAN-T5 with different sizes. However, we have not yet evaluated its performance on LLMs, which possess an even greater number of parameters and have been pre-trained on larger corpora, thus potentially providing more accurate feedback for both caption adaptation and reinforcement learning. Meanwhile, it is worth noting that PLMs may contain certain biases, and training based on their feedback may amplify these biases. As future work, we aim to investigate the scalability of our method to LLMs, as well as strategies to mitigate the potential negative effects of biases present in PLMs.
## Ackownledgement
This work was partially supported by National Natural Science Foundation of China under Grant No. 62222215, Beijing Natural Science Foundation under Grant No. 4222027, and Beijing Outstanding Young Scientist Program under Grant No.
BJJWZYJH012019100020098. Xin Zhao is the corresponding author.
## References
Jean-Baptiste Alayrac, Jeff Donahue, Pauline Luc, Antoine Miech, Iain Barr, Yana Hasson, Karel Lenc, Arthur Mensch, Katie Millican, Malcolm Reynolds, et al. 2022. Flamingo: a visual language model for few-shot learning. *arXiv preprint arXiv:2204.14198*.
Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu,
Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei.
2020. Language models are few-shot learners. In Advances in Neural Information Processing Systems 33:
Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual.
Jon Ander Campos and Jun Shern. 2022. Training language models with language feedback. In *ACL Workshop on Learning with Natural Language Supervision. 2022.*
Jaemin Cho, Jie Lei, Hao Tan, and Mohit Bansal. 2021.
Unifying vision-and-language tasks via text generation. In *International Conference on Machine Learning*, pages 1931–1942. PMLR.
Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Eric Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, et al.
2022. Scaling instruction-finetuned language models.
arXiv preprint arXiv:2210.11416.
Wenliang Dai, Lu Hou, Lifeng Shang, Xin Jiang, Qun Liu, and Pascale Fung. 2022. Enabling multimodal generation on CLIP via vision-language knowledge distillation. In *Findings of the Association for Computational Linguistics: ACL 2022, Dublin, Ireland,*
May 22-27, 2022, pages 2383–2395. Association for Computational Linguistics.
Mingkai Deng, Jianyu Wang, Cheng-Ping Hsieh, Yihan Wang, Han Guo, Tianmin Shu, Meng Song, Eric P
Xing, and Zhiting Hu. 2022. Rlprompt: Optimizing discrete text prompts with reinforcement learning.
arXiv preprint arXiv:2205.12548.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: pre-training of deep bidirectional transformers for language understanding. In *Proceedings of the 2019 Conference of* the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019, Minneapolis, MN, USA,
June 2-7, 2019, Volume 1 (Long and Short Papers),
pages 4171–4186. Association for Computational Linguistics.
Angela Fan, Mike Lewis, and Yann N. Dauphin. 2018.
Hierarchical neural story generation. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, ACL 2018, Melbourne, Australia, July 15-20, 2018, Volume 1: Long Papers, pages 889–898. Association for Computational Linguistics.
Yash Goyal, Tejas Khot, Douglas Summers-Stay, Dhruv Batra, and Devi Parikh. 2017. Making the v in vqa matter: Elevating the role of image understanding in visual question answering. In *Proceedings of the* IEEE conference on computer vision and pattern recognition, pages 6904–6913.
Liangke Gui, Borui Wang, Qiuyuan Huang, Alexander Hauptmann, Yonatan Bisk, and Jianfeng Gao.
2022. KAT: A knowledge augmented transformer for vision-and-language. In *Proceedings of the 2022* Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL 2022, Seattle, WA,
United States, July 10-15, 2022, pages 956–968. Association for Computational Linguistics.
Jiaxian Guo, Junnan Li, Dongxu Li, Anthony Meng Huat Tiong, Boyang Li, Dacheng Tao, and Steven CH Hoi. 2022. From images to textual prompts: Zero-shot vqa with frozen large language models. *arXiv preprint arXiv:2212.10846*.
Ari Holtzman, Jan Buys, Li Du, Maxwell Forbes, and Yejin Choi. 2019. The curious case of neural text degeneration. *arXiv preprint arXiv:1904.09751*.
Yushi Hu, Hang Hua, Zhengyuan Yang, Weijia Shi, Noah A Smith, and Jiebo Luo. 2022. Promptcap:
Prompt-guided task-aware image captioning. arXiv preprint arXiv:2211.09699.
Woojeong Jin, Yu Cheng, Yelong Shen, Weizhu Chen, and Xiang Ren. 2022. A good prompt is worth millions of parameters: Low-resource prompt-based learning for vision-language models. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers),
ACL 2022, Dublin, Ireland, May 22-27, 2022, pages 2763–2775. Association for Computational Linguistics.
Daniel Khashabi, Yeganeh Kordi, and Hannaneh Hajishirzi. 2022. Unifiedqa-v2: Stronger generalization via broader cross-format training. *CoRR*,
abs/2202.12359.
Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yutaka Matsuo, and Yusuke Iwasawa. 2022. Large language models are zero-shot reasoners. *arXiv preprint* arXiv:2205.11916.
Dongxu Li, Junnan Li, Hung Le, Guangsen Wang, Silvio Savarese, and Steven C. H. Hoi. 2022a. LAVIS:
A library for language-vision intelligence. *CoRR*,
abs/2209.09019.
Junnan Li, Dongxu Li, Caiming Xiong, and Steven C. H.
Hoi. 2022b. BLIP: bootstrapping language-image pre-training for unified vision-language understanding and generation. In *International Conference on* Machine Learning, ICML 2022, 17-23 July 2022, Baltimore, Maryland, USA, volume 162 of *Proceedings* of Machine Learning Research, pages 12888–12900. PMLR.
Yifan Li, Yifan Du, Kun Zhou, Jinpeng Wang, Wayne Xin Zhao, and Ji-Rong Wen. 2023. Evaluating object hallucination in large vision-language models. *CoRR*, abs/2305.10355.
Sheng Liang, Mengjie Zhao, and Hinrich Schütze. 2022.
Modular and parameter-efficient multimodal fusion with prompting. In *Findings of the Association for* Computational Linguistics: ACL 2022, Dublin, Ireland, May 22-27, 2022, pages 2976–2985. Association for Computational Linguistics.
Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C Lawrence Zitnick. 2014. Microsoft coco:
Common objects in context. In *European conference on computer vision*, pages 740–755. Springer.
Yuanze Lin, Yujia Xie, Dongdong Chen, Yichong Xu, Chenguang Zhu, and Lu Yuan. 2022. REVIVE: regional visual representation matters in knowledge-based visual question answering. *CoRR*,
abs/2206.01201.
Kenneth Marino, Mohammad Rastegari, Ali Farhadi, and Roozbeh Mottaghi. 2019. Ok-vqa: A visual question answering benchmark requiring external knowledge. In *Proceedings of the IEEE/cvf conference* on computer vision and pattern recognition, pages 3195–3204.
Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al.
2022. Training language models to follow instructions with human feedback. arXiv preprint arXiv:2203.02155.
Richard Yuanzhe Pang and He He. 2021. Text generation by learning from demonstrations. In *9th International Conference on Learning Representations,*
ICLR 2021, Virtual Event, Austria, May 3-7, 2021.
OpenReview.net.
Romain Paulus, Caiming Xiong, and Richard Socher.
2018. A deep reinforced model for abstractive summarization. In 6th International Conference on Learning Representations, ICLR 2018, Vancouver, BC, Canada, April 30 - May 3, 2018, Conference Track Proceedings. OpenReview.net.
Nazneen Fatema Rajani, Bryan McCann, Caiming Xiong, and Richard Socher. 2019. Explain yourself! leveraging language models for commonsense reasoning. In Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL
2019, Florence, Italy, July 28- August 2, 2019, Volume 1: Long Papers, pages 4932–4942. Association for Computational Linguistics.
Maarten Sap, Ronan Le Bras, Emily Allaway, Chandra Bhagavatula, Nicholas Lourie, Hannah Rashkin, Brendan Roof, Noah A Smith, and Yejin Choi. 2019.
Atomic: An atlas of machine commonsense for ifthen reasoning. In *Proceedings of the AAAI conference on artificial intelligence*, volume 33, pages 3027–3035.
Dustin Schwenk, Apoorv Khandelwal, Christopher Clark, Kenneth Marino, and Roozbeh Mottaghi. 2022.
A-OKVQA: A benchmark for visual question answering using world knowledge. In *Computer Vision -*
ECCV 2022 - 17th European Conference, Tel Aviv, Israel, October 23-27, 2022, Proceedings, Part VIII,
volume 13668 of *Lecture Notes in Computer Science*,
pages 146–162. Springer.
Richard S Sutton and Andrew G Barto. 2018. *Reinforcement learning: An introduction*. MIT press.
Anthony Meng Huat Tiong, Junnan Li, Boyang Li, Silvio Savarese, and Steven CH Hoi. 2022. Plug-andplay vqa: Zero-shot vqa by conjoining large pretrained models with zero training. *arXiv preprint* arXiv:2210.08773.
Maria Tsimpoukelli, Jacob Menick, Serkan Cabi, S. M. Ali Eslami, Oriol Vinyals, and Felix Hill. 2021.
Multimodal few-shot learning with frozen language models. In *Advances in Neural Information Processing Systems 34: Annual Conference on Neural* Information Processing Systems 2021, NeurIPS 2021, December 6-14, 2021, virtual, pages 200–212.
Peng Wang, An Yang, Rui Men, Junyang Lin, Shuai Bai, Zhikang Li, Jianxin Ma, Chang Zhou, Jingren Zhou, and Hongxia Yang. 2022a. OFA: unifying architectures, tasks, and modalities through a simple sequence-to-sequence learning framework. In *International Conference on Machine Learning, ICML*
2022, 17-23 July 2022, Baltimore, Maryland, USA,
volume 162 of *Proceedings of Machine Learning* Research, pages 23318–23340. PMLR.
Wenhui Wang, Hangbo Bao, Li Dong, Johan Bjorck, Zhiliang Peng, Qiang Liu, Kriti Aggarwal, Owais Khan Mohammed, Saksham Singhal, Subhojit Som, and Furu Wei. 2022b. Image as a foreign language: Beit pretraining for all vision and visionlanguage tasks. *CoRR*, abs/2208.10442.
Xin Wang, Yasheng Wang, Yao Wan, Fei Mi, Yitong Li, Pingyi Zhou, Jin Liu, Hao Wu, Xin Jiang, and Qun Liu. 2022c. Compilable neural code generation with compiler feedback. In *Findings of the Association* for Computational Linguistics: ACL 2022, Dublin, Ireland, May 22-27, 2022, pages 9–19. Association for Computational Linguistics.
Albert Webson and Ellie Pavlick. 2022. Do promptbased models really understand the meaning of their prompts? In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL 2022, Seattle, WA, United States, July 10-15, 2022, pages 2300–2344. Association for Computational Linguistics.
Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Ed Chi, Quoc Le, and Denny Zhou. 2022.
Chain of thought prompting elicits reasoning in large language models. *arXiv preprint arXiv:2201.11903*.
Zhengyuan Yang, Zhe Gan, Jianfeng Wang, Xiaowei Hu, Yumao Lu, Zicheng Liu, and Lijuan Wang. 2022.
An empirical study of gpt-3 for few-shot knowledgebased vqa. In *Proceedings of the AAAI Conference* on Artificial Intelligence, volume 36, pages 3081–
3089.
Wayne Xin Zhao, Kun Zhou, Junyi Li, Tianyi Tang, Xiaolei Wang, Yupeng Hou, Yingqian Min, Beichen Zhang, Junjie Zhang, Zican Dong, Yifan Du, Chen Yang, Yushuo Chen, Zhipeng Chen, Jinhao Jiang, Ruiyang Ren, Yifan Li, Xinyu Tang, Zikang Liu, Peiyu Liu, Jian-Yun Nie, and Ji-Rong Wen. 2023. A survey of large language models. *CoRR*,
abs/2303.18223.
## Appendix A Training Details And Artifacts
For LAMOC training, we adopt the officially released BLIP captioning checkpoint2for model initialization. For both captioning adaptation and reinforcement learning, we adopt the following hyper-parameters: learning rate 2e − 6, warmup 600 steps, weight decay 0.05, batch size 8. The balance factor α is set to 0.9. We train the model for 10 epochs and choose the one with the highest reward (without labels from the validation set). All the experiments are conducted based on LAVIS (Li et al., 2022a) under BSD 3-Clause License. The A-OKVQA is under the Apache License 2.0.
## B Additional Ablation Study B.1 Level Of Relevance
| Level | A-OKVQA |
|----------------------------------------|-----------|
| A: 0%; B: 100% | 27.25 |
| A: 0%; B: 50%; C: 100% | 28.29 |
| A: 0%; B: 25%; C: 50%; D: 75% | 28.98 |
| A: 0%; B: 25%; C: 50%; D: 75%; E: 100% | 27.96 |
When prompting the PLM to give a correlation score for the caption, the level of relevance is part of the prompt, thus can influence the result. We try different levels for the prompt-based reward and the results are in Table 3. Since four levels gives the highest vqa score, we use four levels in our prompt-based reinforcement learning.
## B.2 Number Of Captions
Since the PLM is "blind," all visual information is carried by the captions. Thus, the number of captions is critical for the PLM to answer the question.
In Figure 5, we explore the influence of the number of captions. Our results indicate that utilizing a larger number of captions leads to improved performance across various model sizes. Performance gains continue to accumulate even when utilizing 10 captions, leading us to posit that incorporating an even greater number of captions would result in further improvements.
![11_image_0.png](11_image_0.png)
## B.3 Prompt Design
Another critical design of our method is instructing the FLAN-T5 to provide feedback and answer questions, so we explore the effects of different formats of instruction in Table 4. We can observe that prompt design has a great impact on the results Table 4, which is in line with the conclusion of previous works (Wei et al., 2022).
VQA score
![11_image_1.png](11_image_1.png)
| Prompt | OK-VQA A-OKVQA | |
|---------------------------------------------------------------------|------------------|-------|
| Answer the following question in one word. Q: [caption]. [question] | 29.53 | 29.84 |
| Please answer the following question. [caption]. [question] | 28.22 | 29.73 |
| [caption]. [question] | 27.59 | 27.99 |
| [caption]. [question] Let's think step by step. | 18.08 | 28.72 |
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
✓ A1. Did you describe the limitations of your work?
7
A2. Did you discuss any potential risks of your work?
Not applicable. Left blank.
✓ A3. Do the abstract and introduction summarize the paper's main claims?
1
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✓ **Did You Use Or Create Scientific Artifacts?** 4
✓ B1. Did you cite the creators of artifacts you used?
4
✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
Appendix
✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
4 B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
Not applicable. Left blank.
✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
4
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
4
## C ✓ **Did You Run Computational Experiments?** 4
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
4 The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance.
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
4, Appendix
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
4
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
Appendix D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
Not applicable. Left blank.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? Not applicable. Left blank.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
Not applicable. Left blank.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
Not applicable. Left blank.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
Not applicable. Left blank. |
bhaskar-etal-2023-prompted | Prompted Opinion Summarization with {GPT}-3.5 | https://aclanthology.org/2023.findings-acl.591 | Large language models have shown impressive performance across a wide variety of tasks, including text summarization. In this paper, we show that this strong performance extends to opinion summarization. We explore several pipeline methods for applying GPT-3.5 to summarize a large collection of user reviews in aprompted fashion. To handle arbitrarily large numbers of user reviews, we explore recursive summarization as well as methods for selecting salient content to summarize through supervised clustering or extraction. On two datasets, an aspect-oriented summarization dataset of hotel reviews (SPACE) and a generic summarization dataset of Amazon and Yelp reviews (FewSum), we show that GPT-3.5 models achieve very strong performance in human evaluation. We argue that standard evaluation metrics do not reflect this, and introduce three new metrics targeting faithfulness, factuality, and genericity to contrast these different methods. | # Prompted Opinion Summarization With Gpt-3.5
Greg Durrett3 UT Austin Adithya Bhaskar1 IIT Bombay Alexander R. Fabbri2 Salesforce AI
[email protected] [email protected] [email protected]
## Abstract
Large language models have shown impressive performance across a wide variety of tasks, including text summarization. In this paper, we show that this strong performance extends to opinion summarization. We explore several pipeline methods for applying GPT-3.5 to summarize a large collection of user reviews in a prompted fashion. To handle arbitrarily large numbers of user reviews, we explore recursive summarization as well as methods for selecting salient content to summarize through supervised clustering or extraction. On two datasets, an aspect-oriented summarization dataset of hotel reviews (SPACE) and a generic summarization dataset of Amazon and Yelp reviews (FewSum), we show that GPT-3.5 models achieve very strong performance in human evaluation.
We argue that standard evaluation metrics do not reflect this, and introduce three new metrics targeting faithfulness, factuality, and genericity to contrast these different methods.
## 1 Introduction
Recent years have seen several shifts in summarization research, from primarily extractive models
(Erkan and Radev, 2004; Gu et al., 2022; Kwon et al., 2021; Jia et al., 2020; Zhong et al., 2020)
to abstractive models with copy mechanisms (See et al., 2017; Song et al., 2018; Gehrmann et al., 2018) to pre-trained models (Devlin et al., 2019; Isonuma et al., 2021; Lewis et al., 2020; Zhang et al., 2020a; He et al., 2020). GPT-3 (Brown et al., 2020; Wu et al., 2021; Saunders et al., 2022; Goyal et al., 2022) and GPT-4 represent another shift: they show excellent zero- and few-shot performance across a variety of text generation tasks. However, their capabilities have not been extensively benchmarked for opinion summarization. Unlike news, where extractive lead baselines are often highly effective, opinion summarization requires balancing contradictory opinions and a higher degree of abstraction to convey all of the viewpoints faithfully.
In this paper, we apply GPT-3.5, specifically the text-davinci-002 model,1to the task of opinion summarization, focusing on reviews of products, hotels, and businesses. Applying GPT-3.5 in this setting is not straightforward, as the combined length of the reviews or posts may exceed the model's maximum input length. Furthermore, we find that certain styles of inputs can lead to GPT3.5 simply echoing back an extract of the inputs.
To mitigate these issues, we explore a family of pipelined approaches, specifically (1) filtering a subset of sentences with an extractive summarization model, (2) chunking with repeated summarization, and (3) review-score-based stratification. In the context of aspect-oriented summarization, we also explore the inclusion of a sentence-wise topic prediction and clustering step.
We show that our approaches yield high-quality summaries according to human evaluation. The errors of the systems consist of subtle issues of balancing contradictory viewpoints and erroneous generalization of specific claims, which are not captured by metrics like ROUGE (Lin, 2004) or BERTScore (Zhang et al., 2020b). This result corroborates work calling for a re-examination of current metrics (Fabbri et al., 2021; Tang et al., 2023)
and the need for fine-grained evaluation (Gehrmann et al., 2022). We therefore introduce a set of metrics, using entailment as a proxy for support, to measure the factuality, *faithfulness*, and *genericity* of produced summaries. These metrics measure the extent of over-generalization of claims and misrepresentation of viewpoints while ensuring that summaries are not overly generic.
Our results show that basic prompted GPT-3.5 produces reasonably faithful and factual summaries when the input reviews are short (fewer than 1000 words); more sophisticated techniques do not show much improvement. However, as the input size 1The most advanced model available at the time this work was being conducted.
![1_image_0.png](1_image_0.png)
grows larger, repeated summarization leads GPT3.5 to produce generalized and unfaithful selections of viewpoints relative to the first round. We demonstrate that using QFSumm (Ahuja et al., 2022),
an extractive summarization model, to filter out sentences prior to GPT-3.5 (instead of multi-level summarization) can slightly help with factuality and faithfulness. The resulting summaries also present a more specific selection of viewpoints but are generally shorter and use a higher proportion of common words. A topicwise clustering and filtering step pre-pended to the pipeline alleviates these issues while relinquishing a portion of the gains on factuality and faithfulness.
Our main contributions are: (1) We introduce two approaches to long-form opinion summarization with GPT-3.5, namely, hierarchical GPT-3.5 summarization with chunking, and pre-extraction with an extractive summarization model. (2) We establish the strength of these approaches with a human study and demonstrate the need for objective and automatic means of evaluation. (3) We develop three entailment-based metrics for factuality, faithfulness, and genericity that are better suited to evaluate extremely fluent summaries as compared to metrics based on n-gram matching. The relevant artifacts and code for this work are publicly available and can be found at https:
//github.com/testzer0/ZS-Summ-GPT3/.
## 2 Motivation And Problem Setting
Review summarization involves the summarization of the text of multiple reviews of a given product or service into a coherent synopsis. More formally, given a set of reviews R = {Ri}
n i=1 with the review Ri consisting of li sentences {rij}
li j=1, we define a *summarization system* S to be a function that takes as input the combined reviews C and then produces k output sentences S = {si}
k i=1, written as S = S(C), where C ≡ combine(R)
is typically obtained by concatenating the review sentences. We use the notation combine to refer to the combination of both sentences and reviews.
We can also instantiate this pipeline for *aspectoriented review summarization*, which involves the summarization of multiple reviews conditioned on an aspect a (such as *'cleanliness'*). In particular, the summarization is written as S = S(C | a).
We consider aspect-agnostic review summarization as a special case of aspect-oriented review summarization with the aspect *'none'* for notational simplicity.
## 2.1 Desiderata
Opinion summaries should demonstrate three key characteristics.
First, the summaries should also be **faithful**, i.e.,
select the most subjectively important viewpoints with the largest consensus. For instance, if five reviews raised the issue of small rooms while eight complained about dusty carpets, the choice (due to a limited output size) to discuss the latter over the former would be considered faithful. Thus, faithfulness is about careful management of the word budget given constrained output length.
The summaries should also be **factual**, i.e., report information grounded in statements that actu-
![2_image_1.png](2_image_1.png)
ally do appear in the set of reviews, without containing extrinsic hallucinations. For instance, if five reviews found hotel rooms to be small, but three found them large, the statement The rooms were large is considered factual despite the viewpoint being in the minority. By contrast, A pipe burst and flooded my room is unfactual if this is never actually reported in the reviews.
Finally, the summaries should be **relevant**: the points raised in them should only discuss topics relevant to the specified aspect. For example, in a summary about the cleanliness of a hotel room, bad food should be omitted even if it was frequently brought up in the reviews.
## 2.2 Framework
Based on the desiderata, we need to ensure that the summaries represent all of the reviews; however they are too many in number and too long in combined length. We, therefore, define a *summarization pipeline* to be a series of summarization systems S1, *· · ·* , Sm where each system takes as input the condensed results of the previous system. Specifically, S0 = R, Ci = combine(Si−1), Si = Si(Ci)
We showcase an example pipeline in Figure 1, with one stage extracting the relevant sentences from the reviews and the next summarizing the extracted sentences.
## 3 Gpt-3.5 Summarization Pipelines
The components of our summarization pipelines may be broadly categorized into *extractors* and
![2_image_0.png](2_image_0.png)
summarizers, which we describe next. More details can be found in Appendix A. First, *extractors* select relevant parts of a set of reviews, optionally conditioned on an aspect. Our extractors include:
GPT-3.5 Topic Clustering (T) We prompt GPT3.5 to produce a single word topic for each sentence, which we map to the closest aspect with GloVe (Pennington et al., 2014) similarity. This defines a set of sentences to be used for aspect-based summarization. This step is only used for pipelines on SPACE, as FewSum is aspect-agnostic.
QFSumm-long (Q) We use the aspect-specific extractive summarization model introduced in
(Ahuja et al., 2022) to extract up to 35 most relevant sentences from the input text. QFSumm was designed to allow extremely long inputs, and thus no truncation is required at this stage.
Review Stratification (R) This involves clustering reviews by reviewer scores (given in the dataset)
and summarizing each cluster with GPT-3.5.
In addition to extractors, we also utilize **GPT3.5-chunking (C)** in some of our pipelines. We segment the sentences from the prior step into nonoverlapping chunks, then summarize each individually with GPT-3.5. The results are then concatenated for the next step.
Our *summarizers* summarize the text one final time to produce the output summary. All of our pipelines use GPT-3.5 as the summarizer. However, we also compare to QFSumm (Ahuja et al., 2022),
AceSum (Amplayo et al., 2021a) and the model
| Pipeline | ROUGE-1 | ROUGE-L | BERTScore |
|-----------------|-----------|-----------|-------------|
| SPACE | | | |
| Q | 19.2 | 16.7 | 85.4 |
| A | 32.4 | 30.2 | 89.8 |
| TCG | 23.5 | 20.6 | 88.7 |
| QG | 25.1 | 22.1 | 89.1 |
| TQG | 25.2 | 22.3 | 89.0 |
| RG | 23.0 | 20.5 | 88.5 |
| FewSum - Amazon | | | |
| Q | 27.0 | 24.3 | 86.2 |
| FS | 32.5 | 29.6 | 88.8 |
| G | 27.0 | 23.9 | 88.7 |
| QG | 26.2 | 23.7 | 88.4 |
| CG | 25.7 | 22.9 | 88.6 |
| FewSum - Yelp | | | |
| Q | 23.8 | 20.6 | 84.3 |
| FS | 34.1 | 31.4 | 89.0 |
| G | 26.1 | 21.4 | 88.4 |
| QG | 27.1 | 22.1 | 88.5 |
| CG | 26.5 | 21.5 | 88.2 |
released with FewSum (Bražinskas et al., 2020a),
also referred to as FewSum.
These building blocks are composed to build various summarization pipelines, which we list in Table 1. An illustration of one pipeline (TCG)
is shown in Figure 1. Since topic-wise clustering is unnecessary for FewSum (due to lack of aspects), we only compare G (vanilla GPT-3.5 used to summarize the set of product reviews, truncated to fit if necessary), CG (Chunking + GPT-3.5), QG
(QFSumm-long + GPT-3.5), Q (QFSumm), and FS
(FewSum) for this dataset. The table also lists some approaches that are the first stages of pipelines that begin with GPT-3.5-chunking, which we also compare against in Section 5.
## 4 Evaluation
| SPACE | FewSum | |
|-------------------------------|----------|-------|
| Average #reviews per entity | 100.00 | 22.41 |
| Average #sentences per review | 9.16 | 3.37 |
| Average #words per sentence | 17.56 | 12.12 |
Table 3: SPACE and FewSum dataset statistics.
## 4.1 Datasets
SPACE (Amplayo et al., 2021a) involves the summarization of reviews of hotels along the aspects {*general, rooms, building, cleanliness, location, service, food*} and provides three human-
![3_image_0.png](3_image_0.png) It's not the best location as you do need to drive everywhere but that was all part of the fun for us!!
Summarize what reviewers thought of the location:
![3_image_1.png](3_image_1.png)
![3_image_2.png](3_image_2.png)
![3_image_3.png](3_image_3.png)
The vast majority of reviewers thought that the location was great. They especially appreciated how close it was to the park and to restaurants. A few reviewers **thought that the**
location was not the best because it was **not centrally located** and you had to drive to get to most places.
Figure 3: Example of errors made by GPT-3.5. The viewpoint of a single reviewer is wrongly expressed as that of a "few reviewers" and generalized to the hotel not being centrally located, contradicting other reviews
(blue).
written summaries for each *(hotel, aspect)* pair. Table 3 shows that the reviews of SPACE are too long to summarize with a non-pipelined system given text-davinci-002's context window size. We exclude the *general* aspect from our experiments.
FewSum (Bražinskas et al., 2020a) contains product reviews from Amazon and Yelp. As opposed to SPACE, FewSum is not aspect-oriented, and the reviews are typically much shorter. For many of the products, the combined length of the reviews falls below 900 words, enabling direct summarization with GPT-3.5. FewSum provides three gold summaries for only a small portion of the products. Across these two splits, FewSum provides golden summaries for 32 and 70 products in the Amazon and Yelp categories respectively.
We list SPACE and FewSum statistics in Table 3.
## 4.2 **Automatic Eval: Rouge And Bertscore**
We compute ROUGE (Lin, 2004) and BERTScore
(Zhang et al., 2020b) and show results in Table 2.
The BERTScores for AceSum, as well as all GPT-3-related models, are in the range of 88 − 90, and differences in performance are unclear. AceSum achieves the highest ROUGE-1 as well as ROUGE-L scores by far, and is followed by TQG
and QG. QFSumm does particularly poorly on the
| Pipeline | Factuality Representativeness Faithfulness Relevance | | | | | |
|--------------|------|------|------|------|----------|----|
| TCG | 2.85 | 2.99 | 4.86 | 4.60 | | |
| TQG | 2.86 | 2.95 | 4.83 | 4.32 | | |
| QG | 2.88 | 2.97 | 4.79 | 3.93 | | |
| A | 3.00 | 2.96 | 4.91 | 3.62 | | |
| Q | 3.00 | 3.00 | 4.88 | 2.30 | | |
| Maximum | 3 | 3 | 5 | 5 | | |
| Fleiss-Kappa | 0.64 | 0.49 | 0.49 | 0.64 | Pipeline | Factuality Representativeness Faithfulness Relevance |
| G | 2.63 | 2.89 | 4.68 | 4.98 | | |
| CG | 2.72 | 2.95 | 4.73 | 4.98 | | |
| QG | 2.68 | 2.90 | 4.63 | 4.98 | | |
| Q | 2.96 | 2.98 | 4.52 | 4.92 | | |
| FS | 2.74 | 2.32 | 4.30 | 4.90 | | |
| Maximum | 3 | 3 | 5 | 5 | | |
| Fleiss-Kappa | 0.26 | 0.53 | 0.19 | 0.15 | | |
ROUGE scores. The scores are all in the same ballpark on FewSum apart from FS, with it being difficult to draw any conclusions. The latter achieves the highest ROUGE-L as well as BERTScore. The GPT-3.5 systems perform slightly better than QFSumm on the Yelp split which we attribute to the smaller combined review lengths of Yelp.
We argue that these scores are not informative and that they are at times unreliable when comparing the quality of two summaries. ROUGE and BERTScore have been critiqued in prior work as inaccurate indicators of summary quality (Fabbri et al., 2021; Liu and Liu, 2008; Cohan and Goharian, 2016), particularly as the fluency and coherence of the outputs increase to near-human levels
(Goyal et al., 2022). Figure 2 demonstrates this by with an example. n-gram methods penalize GPT3.5 for generating summaries in a slightly different style: "*The reviewers found the rooms to be clean*"
instead of "*The rooms were clean*." Similarly, the extractive nature of QFSumm drives it to produce sentences like "*We were served warm cookies on arrival*." While its selections are factual, they are not completely representative of the review opinions themselves. The actual mistakes in our systems include over-generalization and misrepresentation of viewpoints of popularities thereof, which are not well-represented by matching n-grams. Figure 3 shows an example of such errors. We conclude that metrics benchmarking the summaries on different dimensions are necessary.
## 4.3 Human Evaluation
For a more reliable view of performance, we manually evaluated the summaries of the pipelines TCG,
TQG, AceSum (A) and QFSumm (Q) for 50 randomly chosen *(hotel, aspect)* pairs from the SPACE
dataset, and G, CG, QG, Q and FS for 50 randomly chosen products (25 each from the *Amazon* and Yelp splits) from the FewSum dataset. The axes of evaluation were the attributes established in Subsection 2.1, namely Factuality, *Faithfulness* and Relevance. In addition, as we often observed our systems produce summaries of the form "While most reviewers thought ..., some said ..." to highlight contrasting opinions, we also evaluate on *Representativeness*. Representativeness is a more restricted form of Faithfulness that measures if the more popular opinion was exhibited between two opposing ones. For instance, if four people found the rooms of a hotel clean but two did not, the summary is expected to convey that the former was the more popular opinion.
The three authors of this paper independently rated the summaries along the above axes on Likert scales of 1-3 for both variations of factuality, and 1-5 for faithfulness and relevance. The average scores, along with the Krippendorff's Alpha and Fleiss Kappa scores (measuring consensus among the raters) are presented in Table 4. Among the compared pipelines, TCG improves upon TQG and QG substantially in terms of relevance. All three have a very high score under Factuality, showing that GPT-3.5 models seldom make blatantly wrong statements. Viewpoints selected by QFSumm are generally faithful, and factual due to their extractive nature, but may include irrelevant statements.
We list the corresponding metrics for FewSum in Table 5. CG tends to perform well, but the consensus is low for Faithfulness and Relevance. FS performs poorly across the board due to hallucinated statements harming its Factuality and bad viewpoint selection resulting in low Faithfulness. The lack of aspects may contribute to the low agreement on FewSum; dimensions such as Relevance may be considered underconstrained, and thus more difficult to agree upon in this setting (Kryscinski et al.,
2019).
We remark that all of our systems are achieving close to the maximum scores; the small differences belie that the pipelines all demonstrate very strong performance across the board.
## 5 New Tools For Evaluation And Analysis
Enabling fast automatic evaluation of systems will be crucial for the development of future opinion summarizers. Furthermore, when a large number of reviews are presented to a system, it may be nearly impossible even for a dedicated evaluator to sift through all of them to evaluate a summary. We investigate the question of how we can automate this evaluation using existing tools.
One of the areas where automatic evaluation may help is **faithfulness**. Since faithfulness represents the degree to which a system is accurate in representing general consensus, it requires measuring the proportion of reviews supporting each claim of a summary. A viewpoint with larger support is more popular and, consequently, more faithful. Our key idea is to use entailment as a proxy for support.
Past work (Goyal and Durrett, 2021; Laban et al.,
2022) has used Natural Language Inference (NLI)
models to assess summary factuality by computing entailment scores between pairs of sentences.
However, the summaries produced by GPT-3.5 and related pipelines often consist of compound sentences that contrast two viewpoints. In addition, GPT-3.5 prefers to say "*The reviewers said...*"
instead of directly stating a particular viewpoint.
We found these artifacts to impact the entailment model. We use a split-and-rephrase step to split these sentences into atomic value judgments by prompting GPT-3.5 as shown in Figure 4. We then use the zero-shot entailment model from SummaC (Laban et al., 2022) to compute the entailment scores for these atomic value judgments. Similar to the approach in the SummaC paper, we observe that a summary statement is factual when strongly entailed by at least one sentence and thus select the top entailment score of each summary sentence as its **factuality score**, and aggregate this score to produce per-system numbers. The choice of the model as well as that of using GPT-3.5 for the split-and-rephrase step are explained further in Appendix B, and the relevant metric of abstractiveness is discussed in Appendix D.
A system could potentially game this metric by producing relatively "safe" statements (like most reviewers found the rooms clean). We therefore
![5_image_0.png](5_image_0.png)
also want to evaluate **genericity**.
## 5.1 Terminology
The set of sentences in the summary of the reviews of a hotel h ∈ H w.r.t aspect a ∈ A is called Sh,a.
Passing these to the split-and-rephrase step gives us a set of split sentences Zh,a. For any two sentences s1, s2 we denote the entailment score of s2 with respect to s1 according to the SummaC-ZS (Laban et al., 2022) model by e(s1, s2) ∈ [−1.0, 1.0]. A
score of 1.0 indicates perfect entailment while that of −1.0 denotes complete contradiction. Finally, we denote by Nn(s) the (multi-)set of n-grams
(with multiplicity) of the sentence s. In particular, N1(s) is the set of words in the sentence s.
## 5.2 Evaluation Of Entailment
We first evaluate whether entailment is effective at identifying the support of the mentioned viewpoints by human evaluation. The three authors of this paper marked 100 random pairs (50 each from SPACE and FewSum) of sentences and assertions entailed with a score above 0.5 on the scale of 0−2.
Here, 2 indicates that the assertion is completely supported, and 1 that the assertion's general hypothesis is supported, but some specifics are left out.
The average score of the selection across the raters was **1.88** with a Fleiss Kappa consensus score of 0.56 (moderate agreement). Many of the lowerrated entailed sentences also had lower entailment scores (closer to 0.5). The score illustrates that the precision of the entailment approach is high.
## 5.3 Faithfulness: Support Set Sizes
We propose an entailment metric for determining how the viewpoints in the summary reflect the consensus of the input. We first compute per-sentence entailment scores as shown in Figure 4. For each sentence of the split-and-rephrased summary, we
| Pipeline | Percentage of split-and-rephrased sentences with n supports SPACE | | | |
|------------|---------------------------------------------------------------------|-----------|-----------|--------|
| n = 0 | n = 1 | n = 2 − 4 | n = 5+ | |
| Q | 8.1 | 29.0 | 21.2 | 41.8 |
| A | 7.7 | 8.6 | 12.7 | 71.0 |
| First-TCG | 18.7 | 16.8 | 18.1 | 46.2 |
| TCG | 22.8 | 16.9 | 19.4 | 41.0 |
| QG | 14.9 | 16.6 | 16.3 | 52.2 |
| TQG | 18.6 | 19.2 | 17.8 | 44.4 |
| First-RG | 23.7 | 22.0 | 19.9 | 34.4 |
| RG | 27.4 | 22.1 | 20.8 | 29.6 |
| FewSum | | | | |
| (Amazon) | n = 0 | n = 1 | n = 2 − 4 | n = 5+ |
| Q | 9.5 | 51.6 | 26.1 | 12.7 |
| FS | 76.9 | 11.2 | 8.39 | 3.45 |
| G | 28.0 | 32.4 | 27.7 | 12.0 |
| QG | 27.6 | 34.7 | 23.6 | 14.2 |
| First-CG | 27.8 | 26.6 | 25.0 | 20.5 |
| CG | 31.9 | 32.2 | 22.6 | 13.3 |
| (Yelp) | n = 0 | n = 1 | n = 2 − 4 | n = 5+ |
| Q | 8.2 | 46.2 | 31.3 | 14.2 |
| FS | 52.3 | 17.1 | 20.0 | 10.6 |
| G | 27.2 | 24.3 | 29.3 | 19.3 |
| QG | 30.6 | 30.3 | 27.4 | 11.6 |
| First-CG | 24.4 | 25.6 | 26.8 | 23.3 |
| CG | 26.3 | 28.3 | 26.2 | 19.2 |
measure the number of review sentences that entail it with a score greater than a threshold τ = 0.75
(the "support" of the sentence). This threshold was determined based on manual inspection. We bin these counts into 0, 1, 2 − 4 and 5+. The frequencies of the bins are converted to percentages and listed in Table 6. FS performs poorly due to presenting hallucinated viewpoints, and repeated summarization slightly hurts CG on the Amazon split. G and CG outperform other methods on the Yelp split, likely because it has fewer reviews per product than Amazon, making it much likelier for the combined reviews of a product to fit in a manageable number of words. The "pure" GPT-3.5 systems generally perform well on the short review sets of FewSum. As we move to the long combined lengths of the reviews on SPACE, however, the pure GPT-3.5 pipelines fall behind in terms of faithfulness. Repeated summarization causes a major dip from First-TCG to TCG, indicating that this is not effective for long-form inputs. QG outperforms other GPT-3-related pipelines by a large margin. As we saw in human evaluation, however,
| Pipeline | Average Top Score | Pipeline | Average Top Score | |
|------------|---------------------|------------|---------------------|-------|
| SPACE | FewSum | | | |
| Q | 91.59 | (Amazon) | (Yelp) | |
| A | 92.49 | Q | 85.29 | 86.62 |
| First-TCG | 84.96 | FS | 24.36 | 47.23 |
| TCG | 82.06 | G | 65.81 | 68.59 |
| QG | 87.50 | QG | 67.63 | 65.04 |
| TQG | 84.68 | First-CG | 68.34 | 69.86 |
| First-RG | 81.54 | CG | 66.43 | 68.58 |
| RG | 79.85 | | | |
QG may include some irrelevant viewpoints in this process. Abating this behavior by performing a topic-clustering step first brings its numbers down to a level comparable with First-TCG, which is still more faithful than the TCG pipeline. AceSum has the largest number of statements with 5+ supports on the SPACE; however, as we will see later, many of its summaries are very generic, and support for them can be easily found among the large number of reviews. Q has the smallest percentage of statements with no support because it is extractive.
## 5.4 Factuality: Top Score
As depicted in Figure 4, averaging the per-sentence entailment scores (first per-summary, then persystem) gives us the *Top Score* metric. The average top score is a proxy for factuality since true statements will typically be strongly entailed by at least one sentence of the reviews. We list the computed average top scores in Table 7. FS performs poorly on FewSum in terms of Factuality. The numbers for other systems are similar, with QG
and CG performing best on the Amazon and Yelp splits. However, on the longer inputs of SPACE,
the differences in factuality become more apparent. In particular, to reconcile similar but distinct viewpoints, repeated summarization leads to a type of generalizing that hurts the factuality of TCG
and TG. Among the GPT-3.5 pipelines, QG performs the best, followed by TQG. TQG yet again delivers performance comparable to First-TCG and therefore presents a reasonable trade-off with some gains on factuality and increased relevance.
## 5.5 Genericity
As mentioned before, we want to measure whether reviews contain largely generic statements like the service was helpful, which are likely to be faithful
| Pipeline | Genericity | Percentage of scores greater than τ | | |
|------------|--------------|---------------------------------------|--------|------|
| SPACE | | | | |
| Q | 0.640 | 64.6 | | |
| A | 0.828 | 82.8 | | |
| TCG | 0.781 | 80.1 | | |
| QG | 0.759 | 76.5 | | |
| TQG | 0.738 | 73.7 | | |
| RG | 0.788 | 80.0 | | |
| FewSum | | | | |
| (Amazon) | (Yelp) | (Amazon) | (Yelp) | |
| Q | 0.339 | 0.406 | 32.6 | 37.8 |
| FS | 0.529 | 0.636 | 54.2 | 62.6 |
| G | 0.582 | 0.654 | 56.9 | 65.2 |
| QG | 0.565 | 0.653 | 53.9 | 64.7 |
| First-CG | 0.604 | 0.732 | 63.4 | 69.1 |
| CG | 0.554 | 0.682 | 56.7 | 68.1 |
| Pipeline | Average IDF | Pipeline | Average IDF | |
|------------|---------------|------------|---------------|------|
| SPACE | FewSum | | | |
| Q | 12.00 | (Amazon) | (Yelp) | |
| A | 5.77 | Q | 4.38 | 4.33 |
| TCG | 8.40 | FS | 3.16 | 3.26 |
| QG | 6.93 | G | 3.02 | 2.93 |
| TQG | 7.82 | QG | 3.10 | 2.93 |
| RG | 8.87 | CG | 3.00 | 2.86 |
and factual but not very useful to a user of a system.
We first focus on *semantic* genericity, i.e. the use of statements generally applicable to other products/services in the same class. On the other hand, lexical genericity involves the overuse of generic words and is tackled next. Our approach to measuring semantic genericity employs the observation that generic sentences from a summary are often widely applicable and thus likely to be strongly entailed by statements from other summaries. We calculate the similarity sim(*S, S*′) of two sets of sentences using the averaged top score, as Figure 4 shows. Similarly, we also measure the fraction frac(S, S′, τ ) of sentences whose top score exceeds a threshold τ . Equation 1 computes the average similarity score between sentences that belong to two reviews by the same system but different Table 10: Spearman Correlation Coefficients of our metrics and ROUGE with human judgments.
(*hotel, aspect*) pairs (normalizing by the number of pairs N). Equation 2 computes the corresponding metric based on frac.
| Evaluation Axis | Entailment-Based Metric | ROUGE |
|-------------------|---------------------------|---------|
| Factuality | 0.36 | 0.05 |
| Faithfulness | 0.29 | -0.03 |
$$G=\frac{1}{N}\sum_{(h,a)\neq(h^{\prime},a^{\prime})}\mathsf{sim}(Z_{h,a},Z_{h^{\prime},a^{\prime}})\tag{1}$$ $$F_{\tau}=\frac{1}{N}\sum_{(h,a)\neq(h^{\prime},a^{\prime})}\mathsf{frac}(Z_{h,a},Z_{h^{\prime},a^{\prime}},\tau)\tag{2}$$ We report these two metrics in Table 8. On the short
inputs of FewSum, all GPT-3.5 pipelines give similar results, with FewSum being slightly less generic.
Moving to SPACE, however, the range of scores becomes much wider. Forced to reconcile disparate opinions during repeated summarization, TCG and RG produce generic summaries, although AceSum is the most generic. We note that pre-extraction with QFSumm and Topic-wise clustering help QG
and TQG remain less generic.
To measure *lexical genericity*, we use the sentences from all summaries on the corresponding dataset as the set of documents to calculate an averaged Inverse Document Frequency (IDF) of the summaries, with stopwords removed and stemming applied. Since generic words are likely to occur more frequently and therefore have a low IDF, a smaller score indicates higher genericity.
The scores calculated this way are listed in Table 9.
As expected, QFSumm is highly specific due to being extractive. We observe that AceSum generates summaries that over-use generic words, in line with our prior observations. We also note that pre-extraction with QFSumm helps with lexical genericity as it did with semantic genericity. Finally, on FewSum, we observe that FS does better than every other pipeline apart from Q. This bolsters our previous claim that its low Factuality and Faithfulness scores were due to hallucinated, but specific, viewpoints.
## 5.6 Correlation With Human Judgments
Our entailment-based approaches set out to measure Factuality and Faithfulness; how well do these correlate with our human evaluation? We compute Spearman's rank correlation coefficient on the human-annotated SPACE examples with the averaged annotator scores, as the consensus among rater scores was high on that dataset. In particular, we use the average of the Factuality scores among the raters as the net human score on Factuality on an example and the mean score on Faithfulness as that for Faithfulness. Correspondingly, we consider the Top Score metric as the automatic measurement of Factuality and the percentage of statements with 3 or more supports as Faithfulness. We list the obtained Spearman correlation coefficients in Table 10. While there is room for stronger metrics, the fact that the introduced metrics correlate with human judgments better than ROUGE provides an encouraging signal that these target the factors of interest.
## 6 Related Work
Text Summarization Historically, most work tackling text summarization has been *extractive* in nature (Ku et al., 2006; Paul et al., 2010; Carenini et al., 2006; Angelidis and Lapata, 2018), with more recent work applying pre-trained extractive systems to this task (Zhong et al., 2020; Jia et al.,
2020; Kwon et al., 2021; Gu et al., 2022; Ahuja et al., 2022). *Abstractive* approaches (Carenini et al., 2006; Ganesan et al., 2010; Di Fabbrizio et al., 2014) to summarizing reviews have become more successful in recent years (Liu and Lapata, 2019a; Bražinskas et al., 2020b; Amplayo et al.,
2021b; Isonuma et al., 2021). We follow in this vein, capitalizing on the strength of GPT-3.5.
Multi-Stage Summarization Most systems of both types are now end-to-end (Liu and Lapata, 2019b; Du et al., 2022; Ahuja et al., 2022). However, multi-stage approaches (Chen and Bansal, 2018; Li et al., 2021; Zhang et al., 2022) like ours have recently shown great promise. For instance, Li et al. (2021) extracts relevant evidence spans and then summarizes them to tackle long documents.
Recursive summarization has been explored in (Wu et al., 2021) for book summarization, but involved fine-tuning GPT-3.5 to the task. Other approaches such as the mixture-of-experts re-ranking model Ravaut et al. (2022) can be considered as a two-step approach where the combine function ranks and filters the outputs of the first stage.
Evaluation Metrics The domain of news summarization has recently seen interest in using factuality/faithfulness for evaluation (Scialom et al.,
2021; Kryscinski et al., 2020; Tang et al., 2023). In news, faithfulness and factuality are quite similar, as news articles usually do not present incorrect information or conflicting opinions. Opinion summarization is therefore quite distinct in this regard, and a separate treatment of factuality and faithfulness is sensible. For the same reason, although unified approaches to evaluating text generation
(Deng et al., 2021; Zhong et al., 2022) are useful, more targeted metrics are likely to be more informative for opinion summarization specifically.
Aspect-Oriented Summarization In addition to opinion summarization (Amplayo et al., 2021a),
aspect-oriented summarization has also been explored in other domains of NLP (Bahrainian et al.,
2022; Yang et al., 2022). However, as highlighted above, opinion summarization differs from news summarization with respect to desired characteristics, and this work focuses specifically on those issues.
## 7 Conclusion
In this work, we show that GPT-3.5-based opinion summarization produces highly fluent and coherent reviews, but is not perfectly faithful to input reviews and over-generalizes certain viewpoints.
ROUGE is unable to capture these factors accurately. We propose using entailment as a proxy for support and develop metrics that measure the faithfulness, factuality, and genericity of the produced summaries. Using these metrics, we explore the impact of two approaches on controlling the size of the input via pre-summarization on two opinion summarization datasets. With the reasonably sized inputs of FewSum, GPT-3.5 and CG produce faithful and non-generic outputs. However, as we move to long-form review summarization, the factuality and faithfulness of these approaches drop. A preextraction step using QFSumm helps in this setting but leads to generally shorter and more generic summaries; a topic clustering step can then make summaries less generic and more relevant at a small cost to faithfulness and factuality. We hope that our efforts inspire future improvements to systems and metrics for opinion summary evaluation.
## Limitations
Our study here focused on the most capable GPT3.5 model, text-davinci-002, at the time the experiments were conducted. We believe that models like ChatGPT and GPT-4, as well as those in the future, are likely to perform at least as well as these, and if they improve further, the metrics we have developed here will be useful in benchmarking that progress. However, significant further paradigm shifts could change the distribution of errors in such a way that certain of our factors (e.g., genericity)
become less critical. In addition, the latest iterations of GPT have a much greater input window size, which help them digest much larger swaths of text in one go and potentially make our pipelined approaches less needed in certain settings.
Furthermore, the text-davinci-002 model is fine-tuned with data produced by human demonstrations. The precise data used is not publicly available, so it is difficult to use our results to make claims about what data or fine-tuning regimen leads to what failure modes in these models.
Recent work has noted that language models may be susceptible to learning biases from training data (Sheng et al., 2019; Wallace et al., 2019; Shwartz et al., 2020), and this phenomenon has also been observed for GPT-3.5 (Lucy and Bamman, 2021). We did not stress test the models studied for biases and furthermore only experimented on English-language data.
When properly used, the summarization models described in this paper can be time-saving. However, as noted above, summary outputs may be factually inconsistent with the input documents or not fully representative of the input, and in such a case could contribute to misinformation. This issue is present among all current abstractive models and is an area of active research.
## Acknowledgments
This work was partially supported by NSF CAREER Award IIS-2145280, a grant from Open Philanthropy, a gift from Salesforce, Inc., and a gift from Adobe. Thanks as well to the anonymous reviewers for their helpful comments.
## References
Ojas Ahuja, Jiacheng Xu, Akshay Gupta, Kevin Horecka, and Greg Durrett. 2022. ASPECTNEWS:
Aspect-oriented summarization of news documents.
In *Proceedings of the 60th Annual Meeting of the* Association for Computational Linguistics (Volume 1: Long Papers), pages 6494–6506, Dublin, Ireland.
Association for Computational Linguistics.
Reinald Kim Amplayo, Stefanos Angelidis, and Mirella Lapata. 2021a. Aspect-controllable opinion summarization. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 6578–6593, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Reinald Kim Amplayo, Stefanos Angelidis, and Mirella Lapata. 2021b. Unsupervised opinion summarization with content planning. *Proceedings of the AAAI*
Conference on Artificial Intelligence, 35(14):12489–
12497.
Stefanos Angelidis and Mirella Lapata. 2018. Summarizing opinions: Aspect extraction meets sentiment prediction and they are both weakly supervised.
In *Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing*, pages 3675–3686, Brussels, Belgium. Association for Computational Linguistics.
Seyed Ali Bahrainian, Sheridan Feucht, and Carsten Eickhoff. 2022. NEWTS: A corpus for news topicfocused summarization. In *Findings of the Association for Computational Linguistics: ACL 2022*, pages 493–503, Dublin, Ireland. Association for Computational Linguistics.
Arthur Bražinskas, Mirella Lapata, and Ivan Titov.
2020a. Few-shot learning for opinion summarization. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP),
pages 4119–4135, Online. Association for Computational Linguistics.
Arthur Bražinskas, Mirella Lapata, and Ivan Titov.
2020b. Unsupervised opinion summarization as copycat-review generation. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5151–5169, Online. Association for Computational Linguistics.
Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel Ziegler, Jeffrey Wu, Clemens Winter, Chris Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020.
Language models are few-shot learners. In *Advances in Neural Information Processing Systems*,
volume 33, pages 1877–1901. Curran Associates, Inc.
Giuseppe Carenini, Raymond Ng, and Adam Pauls.
2006. Multi-document summarization of evaluative text. In *11th Conference of the European Chapter of* the Association for Computational Linguistics, pages 305–312, Trento, Italy. Association for Computational Linguistics.
Yen-Chun Chen and Mohit Bansal. 2018. Fast abstractive summarization with reinforce-selected sentence rewriting. In *Proceedings of the 56th Annual Meeting* of the Association for Computational Linguistics (Volume 1: Long Papers), pages 675–686, Melbourne, Australia. Association for Computational Linguistics.
Arman Cohan and Nazli Goharian. 2016. Revisiting summarization evaluation for scientific articles. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16),
pages 806–813, Portorož, Slovenia. European Language Resources Association (ELRA).
Mingkai Deng, Bowen Tan, Zhengzhong Liu, Eric Xing, and Zhiting Hu. 2021. Compression, transduction, and creation: A unified framework for evaluating natural language generation. In *Proceedings of the* 2021 Conference on Empirical Methods in Natural Language Processing, pages 7580–7605, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics.
Giuseppe Di Fabbrizio, Amanda Stent, and Robert Gaizauskas. 2014. A hybrid approach to multidocument summarization of opinions in reviews. In Proceedings of the 8th International Natural Language Generation Conference (INLG), pages 54–63, Philadelphia, Pennsylvania, U.S.A. Association for Computational Linguistics.
Zhengxiao Du, Yujie Qian, Xiao Liu, Ming Ding, Jiezhong Qiu, Zhilin Yang, and Jie Tang. 2022. GLM:
General language model pretraining with autoregressive blank infilling. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 320–335, Dublin, Ireland. Association for Computational Linguistics.
Günes Erkan and Dragomir R. Radev. 2004. Lexrank:
Graph-based lexical centrality as salience in text summarization. *J. Artif. Int. Res.*, 22(1):457–479.
Alexander R. Fabbri, Wojciech Krysci ´ nski, Bryan Mc- ´
Cann, Caiming Xiong, Richard Socher, and Dragomir Radev. 2021. SummEval: Re-evaluating Summarization Evaluation. Transactions of the Association for Computational Linguistics, 9:391–409.
Kavita Ganesan, ChengXiang Zhai, and Jiawei Han.
2010. Opinosis: A graph based approach to abstractive summarization of highly redundant opinions. In Proceedings of the 23rd International Conference on Computational Linguistics (Coling 2010), pages 340–348, Beijing, China. Coling 2010 Organizing Committee.
Yanjun Gao, Ting-Hao Huang, and Rebecca J. Passonneau. 2021. ABCD: A graph framework to convert complex sentences to a covering set of simple sentences. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers),
pages 3919–3931, Online. Association for Computational Linguistics.
Sebastian Gehrmann, Elizabeth Clark, and Thibault Sellam. 2022. Repairing the cracked foundation: A survey of obstacles in evaluation practices for generated text. *arXiv preprint arXiv:2202.06935*.
Sebastian Gehrmann, Yuntian Deng, and Alexander Rush. 2018. Bottom-up abstractive summarization.
In *Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing*, pages 4098–4109, Brussels, Belgium. Association for Computational Linguistics.
Tanya Goyal and Greg Durrett. 2021. Annotating and modeling fine-grained factuality in summarization. In *Proceedings of the 2021 Conference of the North* American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1449–1462, Online. Association for Computational Linguistics.
Tanya Goyal, Junyi Jessy Li, and Greg Durrett. 2022.
News Summarization and Evaluation in the Era of GPT-3. *arXiv*.
Nianlong Gu, Elliott Ash, and Richard Hahnloser. 2022.
MemSum: Extractive summarization of long documents using multi-step episodic Markov decision processes. In *Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics*
(Volume 1: Long Papers), pages 6507–6522, Dublin, Ireland. Association for Computational Linguistics.
Junxian He, Wojciech Krysci ´ nski, Bryan McCann, ´
Nazneen Rajani, and Caiming Xiong. 2020. CTRLsum: Towards Generic Controllable Text Summarization. *arXiv*.
Masaru Isonuma, Junichiro Mori, Danushka Bollegala, and Ichiro Sakata. 2021. Unsupervised abstractive opinion summarization by generating sentences with tree-structured topic guidance. *Transactions of the* Association for Computational Linguistics, 9:945–
961.
Ruipeng Jia, Yanan Cao, Hengzhu Tang, Fang Fang, Cong Cao, and Shi Wang. 2020. Neural extractive summarization with hierarchical attentive heterogeneous graph network. In *Proceedings of the 2020*
Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 3622–3631, Online. Association for Computational Linguistics.
Joongwon Kim, Mounica Maddela, Reno Kriz, Wei Xu, and Chris Callison-Burch. 2021. BiSECT: Learning to split and rephrase sentences with bitexts. In *Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing*, pages 6193–
6209, Online and Punta Cana, Dominican Republic.
Association for Computational Linguistics.
Wojciech Kryscinski, Nitish Shirish Keskar, Bryan McCann, Caiming Xiong, and Richard Socher. 2019.
Neural text summarization: A critical evaluation. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 540–551, Hong Kong, China. Association for Computational Linguistics.
Wojciech Kryscinski, Bryan McCann, Caiming Xiong, and Richard Socher. 2020. Evaluating the factual consistency of abstractive text summarization. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP),
pages 9332–9346, Online. Association for Computational Linguistics.
Lun-Wei Ku, Yu-Ting Liang, and Hsin-Hsi Chen. 2006.
Opinion extraction, summarization and tracking in news and blog corpora. In *AAAI Spring Symposium:*
Computational Approaches to Analyzing Weblogs.
Jingun Kwon, Naoki Kobayashi, Hidetaka Kamigaito, and Manabu Okumura. 2021. Considering nested tree structure in sentence extractive summarization with pre-trained transformer. In *Proceedings of the* 2021 Conference on Empirical Methods in Natural Language Processing, pages 4039–4044, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Philippe Laban, Tobias Schnabel, Paul N. Bennett, and Marti A. Hearst. 2022. SummaC: Re-visiting NLIbased models for inconsistency detection in summarization. *Transactions of the Association for Computational Linguistics*, 10:163–177.
Faisal Ladhak, Esin Durmus, He He, Claire Cardie, and Kathleen McKeown. 2022. Faithful or extractive?
on mitigating the faithfulness-abstractiveness tradeoff in abstractive summarization. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers),
pages 1410–1421, Dublin, Ireland. Association for Computational Linguistics.
Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020.
BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*,
pages 7871–7880, Online. Association for Computational Linguistics.
Haoran Li, Arash Einolghozati, Srinivasan Iyer, Bhargavi Paranjape, Yashar Mehdad, Sonal Gupta, and Marjan Ghazvininejad. 2021. EASE: Extractiveabstractive summarization end-to-end using the information bottleneck principle. In *Proceedings of the* Third Workshop on New Frontiers in Summarization, pages 85–95, Online and in Dominican Republic.
Association for Computational Linguistics.
Chin-Yew Lin. 2004. ROUGE: A package for automatic evaluation of summaries. In *Text Summarization Branches Out*, pages 74–81, Barcelona, Spain.
Association for Computational Linguistics.
Feifan Liu and Yang Liu. 2008. Correlation between ROUGE and human evaluation of extractive meeting summaries. In *Proceedings of ACL-08: HLT, Short* Papers, pages 201–204, Columbus, Ohio. Association for Computational Linguistics.
Yang Liu and Mirella Lapata. 2019a. Text summarization with pretrained encoders. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing
(EMNLP-IJCNLP), pages 3730–3740, Hong Kong, China. Association for Computational Linguistics.
Yang Liu and Mirella Lapata. 2019b. Text summarization with pretrained encoders. In *Proceedings of* the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing
(EMNLP-IJCNLP), pages 3730–3740, Hong Kong, China. Association for Computational Linguistics.
Edward Loper and Steven Bird. 2002. Nltk: The natural language toolkit.
Li Lucy and David Bamman. 2021. Gender and representation bias in GPT-3 generated stories. In *Proceedings of the Third Workshop on Narrative Understanding*, pages 48–55, Virtual. Association for Computational Linguistics.
George A. Miller. 1994. WordNet: A lexical database for English. In *Human Language Technology: Proceedings of a Workshop held at Plainsboro, New* Jersey, March 8-11, 1994.
Michael Paul, ChengXiang Zhai, and Roxana Girju.
2010. Summarizing contrastive viewpoints in opinionated text. In *Proceedings of the 2010 Conference* on Empirical Methods in Natural Language Processing, pages 66–76, Cambridge, MA. Association for Computational Linguistics.
Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. GloVe: Global vectors for word representation. In *Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)*, pages 1532–1543, Doha, Qatar.
Association for Computational Linguistics.
Mathieu Ravaut, Shafiq Joty, and Nancy Chen. 2022.
SummaReranker: A multi-task mixture-of-experts re-ranking framework for abstractive summarization.
In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 4504–4524, Dublin, Ireland.
Association for Computational Linguistics.
William Saunders, Catherine Yeh, Jeff Wu, Steven Bills, Long Ouyang, Jonathan Ward, and Jan Leike. 2022.
Self-critiquing models for assisting human evaluators.
arXiv.
Thomas Scialom, Paul-Alexis Dray, Sylvain Lamprier, Benjamin Piwowarski, Jacopo Staiano, Alex Wang, and Patrick Gallinari. 2021. QuestEval: Summarization asks for fact-based evaluation. In *Proceedings of* the 2021 Conference on Empirical Methods in Natural Language Processing, pages 6594–6604, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Xianjun Yang, Kaiqiang Song, Sangwoo Cho, Xiaoyang Wang, Xiaoman Pan, Linda Petzold, and Dong Yu.
2022. Oasum: Large-scale open domain aspectbased summarization.
Abigail See, Peter J. Liu, and Christopher D. Manning.
2017. Get to the point: Summarization with pointergenerator networks. In *Proceedings of the 55th Annual Meeting of the Association for Computational* Linguistics (Volume 1: Long Papers), pages 1073–
1083, Vancouver, Canada. Association for Computational Linguistics.
Emily Sheng, Kai-Wei Chang, Premkumar Natarajan, and Nanyun Peng. 2019. The woman worked as a babysitter: On biases in language generation. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3407–
3412, Hong Kong, China. Association for Computational Linguistics.
Kaiqiang Song, Lin Zhao, and Fei Liu. 2018. Structureinfused copy mechanisms for abstractive summarization. In *Proceedings of the 27th International Conference on Computational Linguistics*, pages 1717–
1729, Santa Fe, New Mexico, USA. Association for Computational Linguistics.
Liyan Tang, Tanya Goyal, Alexander R. Fabbri, Philippe Laban, Jiacheng Xu, Semih Yahvuz, Wojciech Krys-´
cinski, Justin F. Rousseau, and Greg Durrett. 2023. ´
Understanding Factual Errors in Summarization: Errors, Summarizers, Datasets, Error Detectors. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics.
Eric Wallace, Shi Feng, Nikhil Kandpal, Matt Gardner, and Sameer Singh. 2019. Universal adversarial triggers for attacking and analyzing NLP. In *Proceedings of the 2019 Conference on Empirical Methods*
in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2153–2162, Hong Kong, China. Association for Computational Linguistics.
Jeff Wu, Long Ouyang, Daniel M. Ziegler, Nisan Stiennon, Ryan Lowe, Jan Leike, and Paul Christiano.
2021. Recursively Summarizing Books with Human Feedback. *arXiv*.
Jingqing Zhang, Yao Zhao, Mohammad Saleh, and Peter Liu. 2020a. PEGASUS: Pre-training with extracted gap-sentences for abstractive summarization.
In *Proceedings of the 37th International Conference* on Machine Learning, volume 119 of *Proceedings* of Machine Learning Research, pages 11328–11339. PMLR.
Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q.
Weinberger, and Yoav Artzi. 2020b. Bertscore: Evaluating text generation with BERT. In *8th International Conference on Learning Representations,*
ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net.
Yusen Zhang, Ansong Ni, Ziming Mao, Chen Henry Wu, Chenguang Zhu, Budhaditya Deb, Ahmed Awadallah, Dragomir Radev, and Rui Zhang. 2022. Summn: A
multi-stage summarization framework for long input dialogues and documents. In *Proceedings of the 60th* Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1592–
1604, Dublin, Ireland. Association for Computational Linguistics.
Vered Shwartz, Rachel Rudinger, and Oyvind Tafjord.
2020. "you are grounded!": Latent name artifacts in pre-trained language models. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 6850–6861, Online. Association for Computational Linguistics.
Ming Zhong, Pengfei Liu, Yiran Chen, Danqing Wang, Xipeng Qiu, and Xuanjing Huang. 2020. Extractive summarization as text matching. In *Proceedings* of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6197–6208, Online. Association for Computational Linguistics.
Ming Zhong, Yang Liu, Da Yin, Yuning Mao, Yizhu Jiao, Pengfei Liu, Chenguang Zhu, Heng Ji, and Jiawei Han. 2022. Towards a unified multidimensional evaluator for text generation. In *Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing*, pages 2023–
2038, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
## A Pipeline Details A.1 **Details Of The Infrastructure, Models, And** Datasets Used
Computational Resources All experiments were run on a machine equipped with an Intel Xeon W-2123, and utilized a TITAN RTX GPU with a 24 GB memory. We estimate the total computational GPU budget to be roughly 100 GPU-hours.
Model Sizes QFSumm (Ahuja et al., 2022) is a fine-tuned version of BERT and therefore has 110M parameters. The FewSum model from
(Bražinskas et al., 2020a) has 25.1M parameters including the plug-in network. AceSum (Amplayo et al., 2021a) has a combined total of 142M parameters between the Controller Induction Model and Opinion Summarization Model. We use the VitC variant of the entailment model SummaC-ZS
(Laban et al., 2022), which relies on the ALBERTxlarge architecture with 60M parameters. For all models, we used the default parameters as reported in Ahuja et al. (2022), Bražinskas et al. (2020a),
Amplayo et al. (2021a), and Laban et al. (2022).
Consequently, no hyperparameter search was necessary. All models have been publicly released under the MIT License on GitHub by the respective authors.
Datasets and Evaluation Both the SPACE and FewSum datasets consist of reviews in English.
The former consists of reviews of hotels, and the latter product reviews from Amazon and service reviews from Yelp. We are using pre-existing datasets that are standard in opinion summarization. Through our human evaluation, we did not see any personal identifying information or offensive content in the reviews we assessed. All of our human evaluation experiments were performed once by the authors, and we report the Krippendorff's Alpha and Fleiss Kappa scores as measurements of consensus. We used ROUGE with the default settings.2 We used NLTK's (Loper and Bird, 2002)
WordNet (Miller, 1994) lemmatizer as the lemmatizer where needed. Sentence splitting was done using the sent_tokenize() function of NLTK.
## A.2 Details Of The Configurations And Prompts
Here we provide more details of the configuration and/or prompts used for various models. Below, 2The rouge.properties file at https://github.com/
kavgan/ROUGE-2.0 GPT-3.5 refers to the text-davinci-002 model.
QFSumm and QFSumm-long (Q) QFSumm allows one to specify the number n of sentences to extract from the reference text to shape into a summary. We use n = 3 (the default setting) for QFSumm (summarizer) and n = 35 for QFSummlong (extractor). On the SPACE dataset, we use the aspect-specific keywords from Ahuja et al. (2022)
to pass to the model. On the FewSum dataset, however, the set of relevant keywords may be drastically different across examples. Therefore, for each product, we pass 5 randomly chosen reviews to GPT-3.5 with the prompt consisting of the reviews and the directive "*Output up to eight commaseparated keywords that capture these reviews most* saliently:". The produced keywords are then used with QFSumm to summarize the reviews.
GPT-3.5 Topic Clustering (T) The prompt we use is "Describe the topic of each sentence in one word", followed by three examples and then the sentence whose topic is to be determined. We then map the produced words to their corresponding normalized GloVe (Pennington et al., 2014) vectors, which are then mapped to the closest aspects in terms of L2 distance. This is functionally equivalent to using cosine similarity as the vectors are normalized.
GPT-3.5 Chunking (C) We strive for the length of the chunks (in sentences) to be both as close to each other and to 30 as possible; thus, when there are l sentences total to be chunked, we take c = ⌈
l 30 ⌉ to be the number of chunks, and allocate
⌊
l c⌋ sentences to each chunk (except the last one, which may have fewer).
Review Stratification (R) If a cluster's length exceeds GPT-3.5's upper limit at this stage, it is truncated to the maximum number of sentences that fit.
GPT-3.5 (G) When used as a summarizer, we feed the penultimate set of sentences to GPT-3.5 with the prompt *"Summarize what the X said of* the Y:," where X is either "*reviewers*" or "*accounts*"
based on whether GPT-3.5-chunking was used so far. Y is the aspect being summarized (SPACE) or just "*Product*" (FewSum). The preamble is either
"Here's what some reviewers said about a hotel:" or
"Here are some accounts of what some reviewers said about the hotel" in the case of SPACE. The word "*hotel*" is replaced by "*product*" for FewSum.
![14_image_0.png](14_image_0.png)
![14_image_1.png](14_image_1.png)
Supporting Weakening
## B Entailment And Decomposition
In line with our motivation, we would like to be able to use an NLI (Natural Language Inference)
model to retrieve entailment scores of the produced summaries with respect to the input reviews. We tested several approaches including BERTScore, due to it being trained on entailment/contradiction pairs, but finally settled on using the zero-shot model from SummaC (Laban et al., 2022) to produce the entailment scores. SummaC is already becoming a standard evaluation tool for summarization factuality. We chose to forego the trained
"Conv" SummaC model as we found that it did not generalize well to the kind of data we were working with. Specifically, two common issues were that (1)
the range of scores assigned to the sentences from the reviews was very small, and (2) sometimes (especially for the most weakening statements) the scores assigned to the sentences seemed arbitrary and did not make a lot of sense. In comparison, the zero-shot model had neither of these issues. This issue is highlighted in Figure 6.
Further, a proposition X is typically not judged by models to entail statements of the form "*The reviewers said X*", or "*X and Y*", where Y is another proposition. Accordingly, the entailment scores are not very high for these two cases. We highlight this in Figure 7. Thus, we decide to split and rephrase all sentences of the produced summary to simple value propositions for all entailment-related metrics. Note that here rephrasing also includes removing any attribution such as "*The guests said...*".
![15_image_1.png](15_image_1.png)
![15_image_0.png](15_image_0.png)
We considered several models to this end, including BiSECT (Kim et al., 2021) and ABCD (Gao et al., 2021), but found two common issues with all of them:
- The split sentences maintained the words from the original sentences, so a sentence such as
"*The food was received well but it was served* late" would have one output part as "It was served late", which requires a round of entity disambiguation to follow the split-andrephrase step.
- These models do not remove attribution of viewpoints as we would like.
- A statement such as "I liked the setting of the movie but not its cast" produces one of the outputs as "*Not its cast*", which does not make any sense by itself.
Thus, we utilize GPT-3.5 to perform the split-andrephrase task, with few shot prompting used to illustrate the removal of attribution and other desired characteristics. We also experimented with having separate steps for split-and-rephrase and found no significant difference in the outputs or quality thereof. We utilize the split-and-rephrased sentences for all of the automatic metrics that involve entailment of any sort.
## C Measuring Complexity
One of the challenges of opinion summarization is that sentences may contrast opinions: "*Most reviewers liked the service, but there were a few*
| Pipeline | Complexity (%) | Pipeline | Complexity (%) | |
|------------|------------------|------------|------------------|------|
| SPACE | FewSum | | | |
| Q | 16.8 | (Amazon) | (Yelp) | |
| A | 5.1 | Q | 14.7 | 7.8 |
| First-TCG | 28.6 | FS | 16.8 | 12.3 |
| TCG | 30.7 | G | 36.1 | 31.9 |
| QG | 27.0 | QG | 34.6 | 32.8 |
| TQG | 27.3 | First-CG | 28.8 | 22.0 |
| First-RG | 24.0 | CG | 27.5 | 19.6 |
| RG | 30.7 | | | |
complaints about sluggish response times." We quantify the percentage of simple and contrasting statements in the model outputs since it is subtly related to the extent of expression of opposing viewpoints. We use the original (non-split) sentences for this purpose and classify a sentence as contrasting if it contains one or more words from the set K =
{'while', 'but', 'though', 'although',
'other', 'others', 'however'}, as Equation 3 depicts. We present these percentages in Table 11.
$$C={\frac{\sum_{h\in{\mathcal{H}},a\in{\mathcal{A}}}\sum_{s\in S_{h,a}}1(N_{1}(s)\cap{\mathcal{K}}\neq\varnothing)}{\sum_{h\in{\mathcal{H}},a\in{\mathcal{A}}}|S_{h,a}|}}\quad{\mathrm{(3)}}$$
We note that AceSum produces the smallest percentage of contrasting statements. We see that topic-wise clustering pushes up the number of contrasting statements for QG. We hypothesize that this is because when bringing together statements with the same topics in a cluster two opposing statements are likelier to fall into the same chunk. In
| Pipeline | Percentage of novel n-grams | | |
|------------|-------------------------------|-------|------|
| n = 3 | n = 4 | n = 5 | |
| Q | 4.3 | 5.3 | 6.3 |
| A | 30.1 | 61.7 | 79.1 |
| First-TCG | 71.9 | 87.4 | 92.8 |
| TCG | 78.3 | 93.1 | 97.5 |
| QG | 62.1 | 81.0 | 88.2 |
| TQG | 70.4 | 86.4 | 92.6 |
| First-RG | 71.6 | 87.2 | 92.6 |
| RG | 79.1 | 93.0 | 97.1 |
| Pipeline | Percentage of novel n-grams | | | | | |
|------------|-------------------------------|-------|-------|-------|-------|------|
| Amazon | Yelp | | | | | |
| n = 3 | n = 4 | n = 5 | n = 3 | n = 4 | n = 5 | |
| Q | 4.5 | 5.7 | 7.0 | 4.2 | 5.4 | 6.6 |
| FS | 89.2 | 96.4 | 99.0 | 90.7 | 97.5 | 99.3 |
| G | 93.1 | 97.5 | 98.8 | 94.4 | 97.9 | 99.4 |
| QG | 91.0 | 95.5 | 97.7 | 94.2 | 97.9 | 99.0 |
| First-CG | 91.8 | 96.3 | 98.1 | 92.9 | 96.6 | 97.9 |
| CG | 91.8 | 96.2 | 97.9 | 93.3 | 97.0 | 98.0 |
SPACE
FewSum
![16_image_0.png](16_image_0.png)
cases where two opposing statements fall into different chunks, say X and Y, the chunks are likely to each contain statements similar to others in the same chunk. Thus, the summaries of those chunks are likely to be highly contrasting and thus increase the above measure even more for the final stage, as is observed above for TCG.
## D Abstractiveness
We further investigate how the choice of the pipeline affects abstractiveness. To measure this, we calculate the percentage of n-grams in the summaries that do not appear in the input reviews for n ∈ {3, 4, 5}. For this, we use the original (nonsplit) sentences from the output summaries. The results are tabulated in Table 8.
Since QFSumm is a purely extractive model, it is no surprise that Q has low abstractiveness. The numbers are non-zero due to some quirks of QFSumm about splitting into sentences - this leads to some partial sentences ending up next to each other. The next stand-out is that A has very low abstractiveness. This is in line with our observation that even though AceSum is abstractive, it tends to highly generic observations such as "*The rooms* were clean", which very likely appear almost verbatim in some user reviews. We also observe that QG has a relatively low abstractiveness and that topic clustering drives up abstractiveness. We suspect that the above is a result of GPT-3.5 simply mashing together some sentences when presented with chunks containing highly disparate sentences
(since it is hard to find a common thread among them), which promotes extraction over abstraction. Another observation is that multi-GPT-3.5 pipelines (TCG and RG) are more abstractive than single-GPT-3.5 ones since there are two rounds of abstraction as opposed to one. All the GPT-3.5derived pipelines are highly abstractive in the case of FewSum, and slightly more so than FS. This is unsurprising since the combined length of the reviews in the case of FewSum is much smaller when compared to SPACE, and therefore there are relatively fewer propositions to compress into general statements. Motivated by Ladhak et al. (2022),
we display the line graph of the average Top Score vs. 3-gram Abstractiveness for the SPACE dataset in Figure 9. The trio of QG, TQG, and TCG define the best frontier on the Factuality-Abstractiveness tradeoff, followed by RG, then A and Q.
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
✓ A1. Did you describe the limitations of your work?
After section 6
✓ A2. Did you discuss any potential risks of your work?
In the Limitations section (after section 6)
✓ A3. Do the abstract and introduction summarize the paper's main claims?
Abstract and Section 1
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✓ **Did You Use Or Create Scientific Artifacts?** Section 4.1
✓ B1. Did you cite the creators of artifacts you used?
Section 4.1
✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
Section A.1
✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
Section A.1
✓ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
Section A.1
✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
Section A.1
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
Section 4.1
## C ✓ **Did You Run Computational Experiments?**
Section 3 introduces the models being run, and Section 5 details the computed metrics.
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
Section A.1 The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance.
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Section A.1
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
Sections 4.3, 5.2, and A.1
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
Section A.1 D ✓ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Sections 4.3 and 5.2 D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
Not applicable. The human evaluators were the authors themselves. The ratings were on Likert scales - the explanation of the scales has been included in section 4.3 D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)?
Not applicable. The human evaluators were the authors themselves.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
Not applicable. The human evaluators were the authors themselves.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
Not applicable. The human evaluators were the authors themselves.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
Not applicable. The human evaluators were the authors themselves. |
jia-etal-2023-sentence | Sentence Ordering with a Coherence Verifier | https://aclanthology.org/2023.findings-acl.592 | This paper presents a novel sentence ordering method by plugging a coherence verifier (CoVer) into pair-wise ranking-based and sequence generation-based methods. It does not change the model parameters of the baseline, and only verifies the coherence of candidate (partial) orders produced by the baseline and reranks them in beam search. We also propose a coherence model as CoVer with a novel graph formulation and a novel data construction strategy for contrastive pre-training independently of the sentence ordering task. Experimental results on four benchmarks demonstrate the effectiveness of our method with topological sorting-based and pointer network-based methods as the baselines. Detailed analyses illustrate how CoVer improves the baselines and confirm the importance of its graph formulation and training strategy. Our code is available at \url{https://github.com/SN-Jia/SO_with_CoVer}. |
## Sentence Ordering With A Coherence Verifier Sainan Jia1, Wei Song1*, Jiefu Gong2, Shijin Wang2**, And Ting Liu**3
1Information Engineering College, Capital Normal University, Beijing, China 2State Key Laboratory of Cognitive Intelligence, iFLYTEK Research, Hefei, China 3Harbin Institute of Technology, Harbin, China [email protected], [email protected]
{jfgong, sjwang3}@iflytek.com, [email protected]
## Abstract
This paper presents a novel sentence ordering method by plugging a coherence verifier
(COVER) into pair-wise ranking-based and sequence generation-based methods. It does not change the model parameters of the baseline, and only verifies the coherence of candidate
(partial) orders produced by the baseline and reranks them in beam search. We also propose a coherence model as COVER with a novel graph formulation and a novel data construction strategy for contrastive pre-training independently of the sentence ordering task. Experimental results on four benchmarks demonstrate the effectiveness of our method with topological sorting-based and pointer network-based methods as the baselines. Detailed analyses illustrate how COVER improves the baselines and confirm the importance of its graph formulation and training strategy. Our code is available at https://github.com/SN-Jia/
SO_with_CoVer.
## 1 Introduction
Coherence is essential for effective communication.
The correct order of sentences is a necessary attribute of text coherence. Sentence ordering aims to organize a set of possibly unordered sentences into a coherent text. It is closely associated with coherence modeling. On one hand, it has been used as an objective for learning coherence models. On the other hand, it can be viewed as a follow-up module of coherence evaluation, e.g., for improving texts with low coherence scores. So sentence ordering has highly practical value in downstream tasks for evaluating and improving the quality of human writing (Amorim et al., 2018; Mim et al.,
2019) or machine-generated content (Reiter and Dale, 1997; Fan et al., 2019; Hu et al., 2020; Guan et al., 2021).
*Corresponding author, supported by the National Natural Science Foundation of China (No. 61876113)
Recent sentence ordering studies can be classified into 2 categories: pair-wise ranking-based and sequence generation-based methods.
Pair-wise ranking-based methods first model the relative order of each sentence pair and then integrate all the predicted relative orders with some ranking methods to get the final order (Chen et al., 2016; Prabhumoye et al., 2020; Ghosal et al., 2021; Zhu et al., 2021). For example, B-TSort (Prabhumoye et al., 2020) uses BERT for pair-wise classification, builds a constraint graph to integrate pair-wise predictions, and adopts the topological sorting algorithm for sentence ranking.
Sequence generation-based methods are mainly based on the pointer networks (Vinyals et al., 2015).
An encoder encodes all unordered sentences in various ways to capture the paragraph-level contextual information (Cui et al., 2018; Yin et al.,
2019; Wang and Wan, 2019; Yin et al., 2021; Lai et al., 2021), then a decoder iteratively selects the next one from the set of unordered sentences conditioned on the states of the encoder and the already ordered sentence sequence.
However, both categories of methods have a shortcoming in that the coherence of ordered sentences is not directly optimized but is approximated by optimizing auxiliary tasks, e.g., pair-wise ordering and ranking algorithms, or optimizing a series of conditional decisions, e.g., iterative sentence selection by a pointer network. These sub-optimal objects have a misalignment with the purpose of finding an order with maximal global coherence.
In this paper, we propose a simple sentence ordering method by introducing a Coherence Verifier
(COVER). It can be plugged into the ranking-based and sequence generation-based models. Figure 1 shows an example of how COVER works together with a sequence generation baseline. COVER only intervenes in the generation process. At each inference step, we let the baseline provide top candidates as the next sentence (e.g., s4 and s3) and
![1_image_0.png](1_image_0.png)
use COVER to verify the coherence of the sentence sequence candidates (e.g., s1, s2, s4 and s1, s2, s3)
and re-rank the candidates for future generations.
As a result, our method combines local conditional evidence and global coherence.
COVER is trained to measure coherence independently of the sentence ordering task. This is reasonable and important since the input of a coherence model is an ordered sentence sequence rather than a set of unordered sentences, and the model can be pre-trained with multi-domain datasets. We propose a novel coherence model, with a new graph formulation to model sentence pair orders, sequence order, and paragraph-to-sentence relations, and a novel gradual permutation-based data construction strategy for effective contrastive pretraining from pairs of sentence orders with different coherence degrees.
We evaluate the effectiveness of COVER by letting it work with a topological sorting-based baseline B-TSort (Prabhumoye et al., 2020) and a pointer network-based sequence generation baseline BERSON (Cui et al., 2020). Experimental results on four benchmarks demonstrate that our method improves both baselines and especially, obtains a large gain for the topological sorting-based baseline. It also outperforms other recent methods.
We conduct a series of in-depth analyses showing that our method can correct a large ratio of sentence pair classification errors made by B-TSort and improve ordering accuracy at the early decoding stage for BERSON, which alleviates the gap between training and inference, and reduces error propagation. These effects come from the key designs of our coherence model. Moreover, the COVER pre-trained with larger cross-domain datasets obtains better performance than the models trained with domain-specific datasets. The results verify the importance of pre-training the independent coherence model and also indicate that sentence ordering and coherence modeling can cooperate and interact well.
## 2 Background 2.1 Coherence Modeling
The main coherence modeling methods can be classified into the following categories.
Entity grid-based Methods measure local coherence by tracking the transitions of the grammatical roles of entities between sentences (Barzilay and Lapata, 2008; Lin et al., 2011). Tien Nguyen and Joty (2017) proposed the first neural entity model based on convolutional neural networks (CNNs).
Jeon and Strube (2022) proposed to compute coherence by constraining the input to noun phrases and proper names since they explicitly lead to the notion of focus in sentences.
Graph-based Methods are another framework for modeling local coherence. Guinaudeau and Strube
(2013) described relations between sentences and entities with graphs and measured local coherence by computing the average out-degree of graphs.
Mesgar et al. (2021) adopted graph convolutional networks (GCNs) for encoding entity graphs to model local coherence.
Data-driven Methods focus on learning domainindependent neural models of discourse coherence (Li and Jurafsky, 2017; Farag and Yannakoudakis, 2019). The key is to define proper learning objects, including discriminative models to distinguish coherent from incoherent discourse, generative models to produce coherent texts (Li and Jurafsky, 2017), and multi-task learning with auxiliary tasks (Farag and Yannakoudakis, 2019).
## 2.2 Sentence Ordering
Sentence ordering task takes possibly out-of-order sentences s = s1, s2*, ..., s*n as input, and aims to find the best order o∗ = o1, o2*, ..., o*n to make the sentence sequence so1
, so2
, ..., son with maximal global coherence.
Recent sentence ordering methods are mainly based on neural networks and can be classified into the following two categories.
## 2.2.1 Pair-Wise Ranking Based Methods
The main procedure of this category of methods is:
Step 1: Learn a pair-wise classifier to determine the relative order of each sentence pair. The classifier can be trained based on BERT (Prabhumoye et al., 2020; Zhu et al.,
2021) or GCNs (Ghosal et al., 2021).
Step 2: Integrate the relative orders to build relations between sentences. A common way is to build a constraint graph based on the relative orders.
Step 3: Rank the sentences based on the graph with a ranking algorithm like topological sorting (Prabhumoye et al., 2020; Ghosal et al.,
2021), or using a neural network to score sentences (Zhu et al., 2021), or modeling it as the asymmetric traveling salesman problem (Keswani and Jhamtani, 2021).
## 2.2.2 Sequence Generation Based Methods
Sequence generation-based models mainly depend on the pointer networks (Vinyals et al., 2015). The encoder maps a set of sentences into a fixed-length vector representation in various ways (Cui et al.,
2018; Yin et al., 2019, 2021; Lai et al., 2021; Basu Roy Chowdhury et al., 2021; Cui et al., 2020). The decoder iteratively generates the sentence sequence based on the attention scores over input sentences.
Formally, the decoders focus on modeling an autoregressive factorization of the joint coherence probability of a predicted order oˆ,
$p(\hat{\mathbf{o}}|s)=\prod\limits_{i=1}^{n}\underbrace{p(\hat{o}_{i}|\hat{\mathbf{o}}_{<i},s)}_{\text{conditional probability}}$ (1) $\propto\prod\limits_{i=1}^{n}\underbrace{a_{U_{i}}(\hat{o}_{i}|\hat{\mathbf{o}}_{<i},s)}_{\text{attention score}}$
where oˆ<i is the sequence of already ordered sentences, Uiis the set of unselected sentences at step i, oˆiis the i-th sentence in ˆo and aUi
(si|ˆo<i, s) is the attention score for a candidate sentence si ∈ Ui.
Beam search can be used for enlarging the search space and ranking partially generated hypotheses during decoding. But the ranking in beam search is still based on conditional evidence (Equation 1).
## 3 The Proposed Framework 3.1 The Motivation
The existing sentence ordering methods make the best decisions based on conditional evidence or local constraints, but do not directly optimize global coherence. This is natural because the model cannot see the complete global information before generating the final ordering.
When people do the same task, we also start from incomplete information. However, once we have a partial or final ordering, we often revisit the already-ordered sentences to verify whether the current text is coherent or needs to be revised. The verification step is intuitive and important since we can see more complete information.
Motivated by the above observations, we propose a simple sentence ordering framework by incorporating an independent coherence verifier. We call it COVER. COVER reads an ordered sentence sequence and gives a coherence score. We expect COVER can verify the predicted results of a baseline model and rerank the candidates to get a more coherent one.
We will introduce the details of COVER in §4.
In this section, we focus on demonstrating that COVER can be flexibly incorporated with sequence generation-based (§3.2) and topological sortingbased (§3.3) models through beam search.
## 3.2 Cover **For Sequence Generation-Based** Models
As Figure 1 shows, COVER can be easily incorporated into a pointer network-based baseline model.
It only intervenes in the decoding process. At each decoding step, we compute the score of a candidate sentence si as
$$g(s_{i})=\alpha\underbrace{a_{U_{i}}(s_{i}|\hat{\mathbf{o}}_{<i},s)}_{\mathrm{attention~score}}+\underbrace{\mathrm{CovER}(\hat{\mathbf{o}}_{<i},s_{i})}_{\mathrm{coherence~ verifier}}\tag{2}$$
where aUi
(si|ˆo<i, s) is the attention score. We put si at the end of oˆ<i and COVER(oˆ<i, si) returns a coherence score for the resulted sentence sequence.
COVER can be incorporated through beam search and g(si) in Equation 2 becomes g(ˆo<i, si).
A beam B = {oˆ<i} stores the top k preceding orders where k is the beam size and each candidate si ∈ Uiis combined with the items in B. We score each combination (oˆ<i, si) based on g(ˆo<i, si) and store the top k combinations in B.
## 3.3 Cover **For Pair-Wise Ranking-Based** Methods
For a pair-wise model, COVER does not affect the pair-wise classifier and only affects the ranking part as long as the model can provide multiple ordering candidates. In this paper, we focus on improving topological sorting-based methods.
The topological sorting algorithm reads a constraint graph G = (V, E), where an edge from vi ∈ V to vj ∈ V indicates the sentence siis predicted to be preceding sj in the document. At each time, the node without any incoming edges would be selected. Then the algorithm removes this node and its associated edges from G and repeats the above process until all nodes are processed.
We can see that the ordering process is also a generation process. As a result, we slightly modify the generation process and describe it in Algorithm 1.
Algorithm 1: COVER for Topological Sorting through Beam Search Input: Directed graph G = (V, E), beam size k, steps t to look ahead, start returns the start node in a graph based on the topological sorting algorithm, top_k returns the top k ranked items in a list Output: beam B = [o1, o2*, ..., o*k] , each oi is an
![3_image_1.png](3_image_1.png)
We introduce a beam B to store the top k partial orderings (line 1). A key operation is letting the topological sorting algorithm look ahead t steps to have more and longer partial ordering candidates and store them in a temporary list b (line 3 to line 11). COVER scores the partial ordering candidates
![3_image_0.png](3_image_0.png)
in b and the top k ones are stored in the beam B for future generation (line 12 to line 13).
In this way, COVER plays a role in the whole generation process and corrects the errors made by the pair-wise classifier in time by measuring coherence, which is ignored by topological sorting.
## 4 Cover**: The Coherence Model**
We propose a new graph-based coherence model as COVER. Specially, we propose a new graph formulation and model it with GNNs for coherence evaluation (§4.1). We also propose a new data construction strategy for contrastive pre-training of the coherence model (§4.2).
## 4.1 Graph Formulation And Modeling
Given ordered sentences in a paragraph d, we construct a graph Gd = (V, E, R). V is a set of nodes, E is a set of directed edges connecting nodes and R
is the set of edge types. Figure 2 shows an example of the graph for a paragraph with 5 sentences. The graph is a tournament digraph, in which every pair of distinct nodes is connected by a directed edge.
We consider two types of nodes V = {vd*} ∪ V*s:
- **Sentence nodes** Vs: Each sentence si with an ordered index i has a node vi ∈ Vs.
- **Paragraph node** vd: The paragraph has a node to represent the general topic of the paragraph.
We also consider three types of directed edges and the edge types are R = {rd, rs, rk}:
- **Paragraph-to-sentence edges**: We build a directed labeled edge (vd, rd, vi) from the paragraph node (para-node) to each sentence node, where rd indicates the edge type.
- **Sequential edges**: We build a directed labeled edge (vi, rs, vi+1) with a type rs between sentence si and si+1.
- **Skip edges**: We build a directed labeled edge
(vi, rk, vj ) with a type rk between sentence si and sj , if *j > i* + 1.
Sequential edges are the most natural choice for describing local coherence (Mesgar et al., 2021).
We further use densely connected skip edges to describe long-distance ordering information so that every sentence sj can directly receive information from all preceding sentences in the same paragraph rather than only receiving summarized information from sj−1. The formulation is rarely explored in previous coherence modeling work.
Node Representations We map the nodes to dense vectors. Specifically, we use DeBERTa (He et al.,
2021) to get the representation of each sentence node. Each sentence is fed to DeBERTa independently and the hidden state of the [CLS] token is used as the node representation. For the paragraph node, we let DeBERTa read the entire paragraph to get the representation of the para-node. So the positional embeddings naturally encode the ordering information.
Graph Modeling Following previous work (Mesgar et al., 2021; Ghosal et al., 2021), we use Relational Graph Convolutional Networks
(RGCN) (Schlichtkrull et al., 2018) to further encode the relations between nodes, which is a natural choice for the modeling of edges between nodes.
The RGCN model can accumulate relational evidence from the neighborhood around a given node viin multiple inference steps, i.e.,
$$h_{i}^{(l+1)}=\sigma(\sum_{r\in\mathcal{R}}\sum_{j\in\mathcal{N}_{i}^{r}}\frac{W_{r}^{(l)}h_{j}^{(l)}}{|\mathcal{N}_{i}^{r}|}+W_{0}^{(l)}h_{i}^{(l)})\quad(3)$$
where h
(l)
irepresents the hidden state of node vi in the l-th layer of the neural network. We use the representation of node vi from DeBERTa as h
(0)
i.
r ∈ R is one of the edge types and N r i represents the set of nodes connected to vithrough edge type r. Wr is the parameter matrix for r and W0 is the parameter matrix for the self-connection edge, which is an extra type in addition to R. σ(·) is set as ReLU(·). RGCN stacks L layers and we set L = 2, the same as (Ghosal et al., 2021).
Coherence Evaluation After getting the final representations of all nodes, we get the representation of the graph G via hG =Pv∈V hv and map it to a coherence score Coh(G), i.e.,
$$C o h(G)=\operatorname{sigmoid}(\operatorname{FFN}(h_{G}))$$
$$(4)$$
![4_image_0.png](4_image_0.png)
where FFN is a single-layer feed-forward neural network.
## 4.2 Model Training
Training Objective We train our model based on a pair-wise ranking manner. Given a text d
+ with a higher coherence degree than a text d−, we use the following loss function for updating model parameters, L = max(0, τ − Coh(Gd+ ) + Coh(Gd− )),
where Gd+ and Gd− are corresponding graphs for d
+ and d−, and τ = 0.1 is the margin.
Training Instance Construction The model can be trained using documents with manually annotated coherence degrees. However, the data scale is very limited. Another common way is distinguishing a coherent document from its permutations, where a coherent document and one of its random sentence permutations form a training instance. We call this way **random permutation**.
We propose a **gradual permutation** strategy by *gradually* corrupting a coherence document through pair-wise sentence permutation. Figure 3 illustrates an example of gradual permutation. A
pair-wise permutation operation is to randomly select a pair of sentences that are not selected before in the current order and exchange them to get a new order. We assume the new order is less coherent than the previous one. By repeating this process, we can get a sequence of order samples o1, o2*, ...*
with descending coherence degrees. Finally, we sample pairs of orders in the final sequence to form pair-wise training instances according to their relative coherence degrees. For one document, gradual permutation can be done multiple times.
Compared with random permutation, gradual permutation pays more attention to evaluating relative coherence between imperfect orders with different coherence degrees, instead of only distinguishing a perfect order from imperfect ones.
Dataset→ NIPS AAN SIND ROCStory
Model Acc τ PMR Acc τ PMR Acc τ PMR Acc τ PMR
HAN - 66.71 14.06 - 69.03 31.29 - 50.21 15.01 - 73.22 39.62 DARN - 74.62 24.13 - 77.48 39.18 - 56.52 15.48 - 76.02 38.02
SEK-Graph 58.25 76.49 - 65.06 78.60 - 17.17 53.16 - - - -
Con-Graph - 80.29 32.84 - 82.36 49.81 - 58.56 19.07 - 81.22 49.52 STaCK 63.60 81.66 37.31 71.60 85.56 54.01 54.20 61.94 20.79 76.70 85.34 55.96
B-TSORT 61.07 79.91 32.63 65.05 79.39 47.29 45.43 47.34 13.16 65.02 72.58 39.14
+ COVERdom 69.55 84.39 46.42 74.06 86.00 61.43 55.95 61.22 28.55 81.63 86.06 68.44
+ COVER 70.77 85.92 48.54 74.15 **86.06 62.04** 54.85 60.06 27.00 81.04 85.57 67.33
BERSON 69.08 82.32 38.73 77.73 85.04 58.56 59.74 65.53 31.89 83.51 88.35 69.17
+ COVERdom 72.42 84.59 46.42 78.11 85.26 59.14 60.07 65.92 32.50 84.76 **89.22** 71.82
+ COVER **74.86 86.10 50.93 78.13** 85.21 59.17 **60.31 66.01 32.96 84.80** 89.16 **72.27**
Pre-Training The training of the coherence model can be independent of the sentence ordering task.
As a result, COVER can be pre-trained with domainindependent resources and be maintained as a verifier for sentence ordering in specific domains.
## 5 Experiment 5.1 Experimental Settings
Datasets We conduct experiments on four widely used benchmarks. **NIPS** and AAN contains abstracts from NIPS and ACL anthology network papers (Logeswaran et al., 2018). **SIND** is originally used for visual storytelling (Huang et al.,
2016), where natural language descriptions are provided for five images of each story. **ROCStory** is a dataset of short stories, each of which has five sentences (Mostafazadeh et al., 2016). We follow the original papers to split each dataset into training, validation, and test sets. Detailed statistics are shown in Table 2.
Table 2: Statistics of datasets used in our experiments.
Evaluation Metrics We adopt the following three commonly used metrics for evaluation.
Perfect Match Ratio (PMR): PMR measures the percentage of documents for which the entire sequence is correctly predicted (Chen et al., 2016).
Kendall's τ : It measures the difference between the predicted order and the gold order of sentences based on the number of inversions (Lapata, 2003).
| Dataset | Length statistics | Data split | | | |
|-----------|---------------------|--------------|-------|------|------|
| mean | max | train | valid | test | |
| NIPS | 6 | 14 | 2427 | 408 | 377 |
| AAN | 5 | 20 | 8568 | 962 | 2626 |
| SIND | 5 | 5 | 40155 | 4990 | 5055 |
| ROCStory | 5 | 5 | 78529 | 9816 | 9816 |
Accuracy (ACC): It measures the percentage of sentences, whose absolute positions are correctly predicted (Logeswaran et al., 2018).
Baselines and Settings We use B-TSort (Prabhumoye et al., 2020)
* and BERSON (Cui et al.,
2020)
†as our main baselines. We choose them because they are recent representative pair-wise ranking-based and pointer network-based methods, with top performance and almost reproducible results with publicly released codes. We use optimized parameters provided by the original papers and re-run the source codes in our machine. We run these baselines for three times with different random seeds and use the baseline models with the best performance in our experiments.
Our method lets COVER work together with BTSort and BRESON, utilizing and adjusting their predictions with beam search. The same as the setting of BERSON, we set the beam size as 16 for both baselines. The looking ahead steps t in Algorithm 1 is 2. The hyper-parameter α in Equation 2 is 0.1, which is chosen from {0.01, 0.1, 0.5, 1}
based on the validation performance.
We use the AdamW optimizer for training the coherence model. The learning rate for the parameters of DeBERTa, which is used for getting node representations, is 1e-6 and the learning rate for the parameters of the RGCN model is 1e-4.
We pre-train COVER using the combination of the training sets of the four benchmarks with an A100 GPU for 40 hours and train a domain-specific COVERdom for each dataset using the corresponding training set. For one document, we sample two sentence permutations as negative instances.
*https://github.com/shrimai/Topological-Sort-forSentence-Ordering/
†https://github.com/hwxcby/BERSON
## 5.2 General Results On Sentence Ordering
Table 1 shows the performance of our method, two main baselines, and other recent methods.
First of all, we can see that both COVER and COVERdom improve the two baselines on all benchmarks. The pre-trained COVER outperforms the domain-specific COVERdom in most cases, indicating pre-training the coherence model is feasible and useful. We can maintain a single coherence model instead of domain-specific ones and even have a boost in overall performance.
Based on the beam search algorithm for topological sorting, COVER obtain 11.1%, 9.6%, and 18.1%
average absolute improvements in Acc, Kendall's τ , and PMR compared with B-TSort.
Based on adding coherence verification in the beam search-based decoding, COVER achieves 2.0%, 1.3%, and 4.2% average absolute improvements in Acc, Kendall's τ , and PMR compared with BERSON. The improvements are smaller but still significant. Especially, our method has significant performance improvement in PMR.
We also conduct comparisons with other recent methods, including hierarchical attention network
(HAN) (Wang and Wan, 2019), deep attentive ranking network (DARN) (Kumar et al., 2020),
constraint graph (ConGraph) (Zhu et al., 2021),
knowledge-enhanced GNN (SEK-Graph) (Yin et al., 2021) and STaCK (Ghosal et al., 2021). Our method based on either B-TSort or BERSON outperforms all these methods.
## 5.3 Effect Of Cover **For B-Tsort**
Our method gets large improvements for B-TSort.
We hope to analyze the improvements more deeply and conduct investigations on the NIPS dataset.
We start by analyzing the predictions made by BTSort's pair-wise classifier. Specifically, we group sentence pairs according to the distance between two sentences. We investigate the *error ratio* for different distance d, where
$$\mathbf{\omega}={\frac{\#\mathrm{incor}}{\#\mathrm{all}}}$$
error ratio =
\#incorrect pair-wise prediction
\#all pairs within distance d, and analyze the *confidence* of the pair-wise classifier, using its prediction probability as a confidence measure.
Figure 4 illustrates the error ratio and averaged prediction confidence for different values of d. BTSort's classifier is more confident and accurate for determining the relative order of sentence pairs
![6_image_0.png](6_image_0.png)
with larger distances but is less confident and struggles in handling the relative order of nearby sentences. This is reasonable since nearby sentences share similar topics in content so it is hard to determine the relative order without a larger context.
The topological sorting algorithm does not consider content information and cannot deal with lowconfidence predictions as well.
Figure 4 also shows the error ratio of B-TSort plus COVER. Our method reduces 21% to 27%
errors for sentence pairs with distance d ≤ 4 and reduces more than 50% errors for long-distance sentence pairs. This indicates that based on Algorithm 1, COVER overcomes the limitations of the original topological sorting algorithm and gradually improves the predictions.
## 5.4 Effect Of Cover **For Berson**
We infer that one of the reasons that COVER improves BERSON is alleviating the gap between the training and inference. We conduct a controlled experiment to verify this assumption.
During inference, we experiment with different input orders to the decoder of BERSON: 1) perfect:
the input order is the right-shift of the gold order, which is the same as the training phase; 2) predicted: the input order is according to the predicted order, which is the normal way for inference. In either case, we evaluate the outputs of the decoder.
**Acknowledgments** I would like to thank my supervisor, for his kind of support. I would like to thank my supervisor, for his kind of support.
$\underline{\;\;\;\;\;\;\;\;\;\;\;\;}$ 20020.
| Method | Input Order | Acc | τ | PMR |
|----------|---------------|-------|-------|-------|
| BERSON | Perfect | 79.84 | 83.47 | 57.55 |
| BERSON | Predicted | 72.51 | 80.31 | 49.58 |
| + COVER | Predicted | 75.53 | 81.62 | 53.83 |
Table 3: Average performance with perfect and predicted order as the input of BERSON's decoder.
| Method | Acc | τ | PMR |
|------------------------------|-------|-------|-------|
| B-TSORT+Mesgar et al. (2021) | 63.19 | 73.91 | 34.51 |
| B-TSORT+COVER | 70.20 | 79.40 | 51.23 |
| - Skip edges | 69.19 | 78.87 | 49.46 |
| - Para node | 58.90 | 71.23 | 29.94 |
| BERSON+Mesgar et al. (2021) | 72.60 | 80.41 | 50.23 |
| BERSON+COVER | 74.53 | 81.62 | 53.83 |
| - Skip edges | 73.17 | 80.91 | 51.02 |
| - Para node | 72.39 | 80.11 | 49.58 |
Table 3 shows the average performance over four datasets. BERSON with perfect input orders sets an upper bound. In contrast, in the normal way for inference, BERSON's performance drops a lot because there are likely errors during order generation and the imperfect preceding order would affect the future generation as well. With the help of COVER, BERSON can get a performance closer to that with perfect input order.
A natural assumption about the effect is that COVER improves the predictions in the early decoding stage so that future generation is based on a closer-to-perfect preceding order.
## 5.5 Ablation Study Of Cover
We further investigate the effectiveness of key designs of COVER, mainly from two aspects: the graph formulation and the training strategy.
Graph Formulation We focus on analyzing the importance of the skip edges and the paragraph node, which mainly encode ordering information.
Table 4 shows the results. For B-TSort, removing skip edges leads to a small performance decrease, while removing the paragraph node leads to a large decrease. The reason may be that the topological sorting algorithm depends on the predicted pair-wise relative orders but does not consider any content information. So encoding the content of a paragraph is more important. For BERSON, the paragraph node and the skip edges are both important. The skip edges explicitly connect preceding sentences and the candidate sentence, which may help deal with imperfect partial orders.
A state-of-the-art coherence model Mesgar et al.
(2021) is also used as the verifier. It improves BTSort and BERSON, but has a certain gap with COVER, indicating the advantage of coherence verification and the designs of COVER.
Training Strategies We compare the random permutation and gradual permutation strategies. Ta-
Table 5: Average performance with two strategies.
| B-SORT | Acc | τ | PMR |
|---------------------|-------|-------|-------|
| Random permutation | 69.19 | 78.23 | 49.76 |
| Gradual permutation | 70.20 | 79.40 | 51.23 |
| BERSON | Acc | τ | PMR |
| Random permutation | 73.75 | 81.15 | 51.77 |
| Gradual permutation | 74.53 | 81.62 | 53.83 |
ble 5 shows the average performance over four datasets. Gradual permutation consistently gets better performance than random permutation.
We further analyze the error ratio for sentences at different positions in the gold orders on the NIPS
dataset. Table 6 shows that using either strategy, our method can obviously reduce the error ratio for almost all positions. Random permutation outperforms BERSON at all positions, while gradual permutation has the lowest error ratio for sentences at the front and middle of the documents. This is because, with the training instances constructed by gradual permutation, the model can better compare relative coherence between imperfect orders so it can correct more errors in preceding sentences, making the decoding more robust. But gradual permutation has a slightly worse error ratio at the end of the documents. The reason may be that the training instances containing perfect orders are less, affecting the judgment for sentences at the end.
In the future, we will investigate better sampling strategies that can keep a trade-off between random permutation and gradual permutation.
Connecting the above observations, COVER significantly improves the accuracy at the front of documents and can gradually improve the partial orderings. These factors can reasonably explain the effects of COVER for B-TSort and BERSON.
| Error Ratio | | | |
|---------------|--------|--------|---------|
| position | BERSON | Random | Gradual |
| 1 | 7.69 | 5.57 | 3.98 |
| 2 | 28.38 | 21.22 | 17.51 |
| 3 | 40.69 | 32.18 | 26.60 |
| 4-5 | 43.22 | 40.69 | 36.07 |
| 6-7 | 40.76 | 39.67 | 40.49 |
| 8+ | 40.71 | 39.29 | 42.86 |
## 5.6 Predicting The First And Last Sentences
The first and last sentences are important to documents. Following previous studies, we report the
| Method | NIPS | AAN | SIND | ROCStory |
|----------------|--------|-------|--------|------------|
| First Sentence | | | | |
| B-TSORT | 91.51 | 89.68 | 74.28 | 88.30 |
| + COVER | 94.43 | 92.74 | 80.24 | 94.50 |
| BERSON | 92.31 | 93.18 | 85.44 | 96.34 |
| + COVER | 96.02 | 93.54 | 85.64 | 97.13 |
| Last Sentence | | | | |
| B-TSORT | 79.05 | 78.59 | 52.58 | 72.10 |
| + COVER | 79.84 | 82.07 | 61.17 | 83.38 |
| BERSON | 80.37 | 81.54 | 66.25 | 85.85 |
| + COVER | 79.58 | 81.54 | 66.41 | 84.90 |
performance of our model against two baselines in correctly predicting these two sentences on four benchmarks.
As displayed in Table 7, our method obtains significant improvements in predicting the first sentences across four benchmarks for both B-TSort and BERSON. However, it performs better than B-TSort but slightly worse than BERSON in predicting the last sentences. This observation is consistent with the analysis in §5.5.
## 5.7 Performance On Short And Long Documents
We conduct experiments on NIPS and AAN
datasets to analyze the effects of COVER for short and long texts. The documents in the test set are divided into short ones (with less than 8 sentences)
and long ones (with 8 or more sentences). There are 298 short and 79 long documents in NIPS, and 2358 short and 268 long documents in AAN.
Table 8 shows the results. For two baselines, our method has great improvements in both short and long documents.
## 5.8 Performance On Coherence Rating
We also evaluate our model for the summary coherence rating (SCR) task. We use the dataset proposed by Barzilay and Lapata (2008), which contains English summaries produced by human experts and an extractive summarization system.
Each instance in the dataset is a pair of two summaries with different ratings of the same text.
We compare our model with *EntGraph* (Guinaudeau and Strube, 2013), *Neural EntGrid* (Tien Nguyen and Joty, 2017) and Mesgar et al. (2021). Table 9 shows that our model performs very close to the best performance Table 8: Results on short and long texts in the NIPS and AAN datasets.
| Dataset | NIPS | AAN | | |
|------------|--------|-------|-------|-------|
| Method | τ | PMR | τ | PMR |
| Short Text | | | | |
| B-TSORT | 81.77 | 40.27 | 81.32 | 52.13 |
| + COVER | 87.20 | 56.71 | 87.47 | 67.46 |
| BERSON | 83.39 | 46.31 | 86.27 | 63.12 |
| + COVER | 86.47 | 56.71 | 86.45 | 63.67 |
| Long Text | | | | |
| B-TSORT | 72.86 | 3.80 | 62.49 | 4.85 |
| + COVER | 81.08 | 17.72 | 73.76 | 14.55 |
| BERSON | 78.29 | 10.13 | 74.26 | 18.66 |
| + COVER | 84.72 | 29.11 | 74.31 | 19.78 |
| Model | Acc |
|----------------------|-------|
| EntGraph | 80.0 |
| Neural EntGrid | 86.3 |
| Mesgar et al. (2021) | 87.5 |
| COVER | 87.4 |
system, indicating that our method is effective for multiple coherence evaluation tasks.
## 6 Conclusion
This paper has presented a novel sentence ordering method by incorporating a coherence verifier
(COVER). We show that COVER works well with pair-wise ranking-based and sequence generationbased baselines. Our framework combines local evidence from the baselines and larger context coherence from COVER and can gradually improve partial orderings. The coherence verifier is independent of the sentence ordering task but can be optimized for sentence ordering (e.g., via gradual permutation), and can be pre-trained with multidomain datasets, obtaining superior performance compared with domain-specific models. So it is effective and easy to maintain and transfer.
Sentence ordering is often used as a training task for coherence modeling. This paper, however, suggests that coherence models can also support sentence ordering methods to correct incoherent texts. Coherence models are able to identify sentences that are not well-connected. Sentence ordering models can then be used to reorder these sentences to improves the coherence of the text with the assistance of the coherence models.
## Limitations
While the proposed method performs well on four benchmarks, we discuss some of its limitations.
On one hand, as discussed in §5.5, our method is not accurate enough to predict sentences at the end of the documents. There may be some better strategies to construct training samples so that the model can better take into account each part of the documents and make more accurate predictions.
On the other hand, our model is not pre-trained with more diverse domains and larger scale data.
Our datasets are limited to two types, i.e., paper abstracts and short stories, both of which have comparatively obvious order characteristics. In addition, we do not use some larger scale datasets, such as NSF abstracts and arXiv abstracts, because of computation and time constraints. With more diverse and larger data, the performance of our model should be further improved.
## References
Evelin Amorim, Marcia Cançado, and Adriano Veloso.
2018. Automated essay scoring in the presence of biased ratings. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 229–237, New Orleans, Louisiana. Association for Computational Linguistics.
Regina Barzilay and Mirella Lapata. 2008. Modeling local coherence: An entity-based approach. *Computational Linguistics*, 34(1):1–34.
Somnath Basu Roy Chowdhury, Faeze Brahman, and Snigdha Chaturvedi. 2021. Is everything in order? a simple way to order sentences. In *Proceedings of the* 2021 Conference on Empirical Methods in Natural Language Processing, pages 10769–10779, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Xinchi Chen, Xipeng Qiu, and Xuanjing Huang. 2016.
Neural sentence ordering. *ArXiv*, abs/1607.06952.
Baiyun Cui, Yingming Li, Ming Chen, and Zhongfei Zhang. 2018. Deep attentive sentence ordering network. In *Proceedings of the 2018 Conference on* Empirical Methods in Natural Language Processing, pages 4340–4349, Brussels, Belgium. Association for Computational Linguistics.
Baiyun Cui, Yingming Li, and Zhongfei Zhang. 2020.
BERT-enhanced relational sentence ordering network. In *Proceedings of the 2020 Conference on* Empirical Methods in Natural Language Processing
(EMNLP), pages 6310–6320, Online. Association for Computational Linguistics.
Angela Fan, Mike Lewis, and Yann Dauphin. 2019.
Strategies for structuring story generation. In *Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics*, pages 2650–
2660, Florence, Italy. Association for Computational Linguistics.
Youmna Farag and Helen Yannakoudakis. 2019. Multitask learning for coherence modeling. In *Proceedings of the 57th Annual Meeting of the Association* for Computational Linguistics, pages 629–639. Association for Computational Linguistics (ACL).
Deepanway Ghosal, Navonil Majumder, Rada Mihalcea, and Soujanya Poria. 2021. STaCK: Sentence ordering with temporal commonsense knowledge.
In *Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing*, pages 8676–8686, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Jian Guan, Xiaoxi Mao, Changjie Fan, Zitao Liu, Wenbiao Ding, and Minlie Huang. 2021. Long text generation by modeling sentence-level and discourse-level coherence. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers),
pages 6379–6393, Online. Association for Computational Linguistics.
Camille Guinaudeau and Michael Strube. 2013. Graphbased local coherence modeling. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers),
pages 93–103, Sofia, Bulgaria. Association for Computational Linguistics.
Pengcheng He, Xiaodong Liu, Jianfeng Gao, and Weizhu Chen. 2021. Deberta: Decoding-enhanced bert with disentangled attention. In International Conference on Learning Representations.
Junjie Hu, Yu Cheng, Zhe Gan, Jingjing Liu, Jianfeng Gao, and Graham Neubig. 2020. What makes a good story? designing composite rewards for visual storytelling. Proceedings of the AAAI Conference on Artificial Intelligence, 34(05):7969–7976.
Ting-Hao Kenneth Huang, Francis Ferraro, Nasrin Mostafazadeh, Ishan Misra, Aishwarya Agrawal, Jacob Devlin, Ross Girshick, Xiaodong He, Pushmeet Kohli, Dhruv Batra, C. Lawrence Zitnick, Devi Parikh, Lucy Vanderwende, Michel Galley, and Margaret Mitchell. 2016. Visual storytelling. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1233–1239, San Diego, California. Association for Computational Linguistics.
Sungho Jeon and Michael Strube. 2022. Entity-based neural local coherence modeling. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers),
pages 7787–7805, Dublin, Ireland. Association for Computational Linguistics.
Vishal Keswani and Harsh Jhamtani. 2021. Formulating neural sentence ordering as the asymmetric traveling salesman problem. In Proceedings of the 14th International Conference on Natural Language Generation, pages 128–139, Aberdeen, Scotland, UK. Association for Computational Linguistics.
Pawan Kumar, Dhanajit Brahma, Harish Karnick, and Piyush Rai. 2020. Deep attentive ranking networks for learning to order sentences. 34(05):8115–8122.
Shaopeng Lai, Ante Wang, Fandong Meng, Jie Zhou, Yubin Ge, Jiali Zeng, Junfeng Yao, Degen Huang, and Jinsong Su. 2021. Improving graph-based sentence ordering with iteratively predicted pairwise orderings. In *Proceedings of the 2021 Conference on* Empirical Methods in Natural Language Processing, pages 2407–2417, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Mirella Lapata. 2003. Probabilistic text structuring:
Experiments with sentence ordering. In *Proceedings* of the 41st Annual Meeting of the Association for Computational Linguistics, pages 545–552, Sapporo, Japan. Association for Computational Linguistics.
Jiwei Li and Dan Jurafsky. 2017. Neural net models of open-domain discourse coherence. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 198–209, Copenhagen, Denmark. Association for Computational Linguistics.
Ziheng Lin, Hwee Tou Ng, and Min-Yen Kan. 2011.
Automatically evaluating text coherence using discourse relations. In *Proceedings of the 49th Annual* Meeting of the Association for Computational Linguistics: Human Language Technologies - Volume 1, HLT '11, page 997–1006, USA. Association for Computational Linguistics.
Lajanugen Logeswaran, Honglak Lee, and Dragomir Radev. 2018. Sentence ordering and coherence modeling using recurrent neural networks. In *Proceedings of the Thirty-Second AAAI Conference on* Artificial Intelligence and Thirtieth Innovative Applications of Artificial Intelligence Conference and Eighth AAAI Symposium on Educational Advances in Artificial Intelligence, AAAI'18/IAAI'18/EAAI'18.
AAAI Press.
Mohsen Mesgar, Leonardo F. R. Ribeiro, and Iryna Gurevych. 2021. A neural graph-based local coherence model. In *Findings of the Association for Computational Linguistics: EMNLP 2021*, pages 2316–
2321, Punta Cana, Dominican Republic. Association for Computational Linguistics.
Farjana Sultana Mim, Naoya Inoue, Paul Reisert, Hiroki Ouchi, and Kentaro Inui. 2019. Unsupervised learning of discourse-aware text representation for
essay scoring. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics: Student Research Workshop, pages 378–
385, Florence, Italy. Association for Computational Linguistics.
Nasrin Mostafazadeh, Nathanael Chambers, Xiaodong He, Devi Parikh, Dhruv Batra, Lucy Vanderwende, Pushmeet Kohli, and James Allen. 2016. A corpus and cloze evaluation for deeper understanding of commonsense stories. In *Proceedings of the 2016* Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 839–849, San Diego, California. Association for Computational Linguistics.
Shrimai Prabhumoye, Ruslan Salakhutdinov, and Alan W Black. 2020. Topological sort for sentence ordering. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*,
pages 2783–2792, Online. Association for Computational Linguistics.
Ehud Reiter and Robert Dale. 1997. Building applied natural language generation systems. *Natural Language Engineering*, 3(1):57–87.
Michael Schlichtkrull, Thomas N. Kipf, Peter Bloem, Rianne van den Berg, Ivan Titov, and Max Welling.
2018. Modeling relational data with graph convolutional networks. In *The Semantic Web*, pages 593–
607, Cham. Springer International Publishing.
Dat Tien Nguyen and Shafiq Joty. 2017. A neural local coherence model. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1320–1330, Vancouver, Canada. Association for Computational Linguistics.
Oriol Vinyals, Meire Fortunato, and Navdeep Jaitly.
2015. Pointer networks. *Advances in neural information processing systems*, 28.
Tianming Wang and Xiaojun Wan. 2019. Hierarchical attention networks for sentence ordering.
33(01):7184–7191.
Yongjing Yin, Shaopeng Lai, Linfeng Song, Chulun Zhou, Xianpei Han, Junfeng Yao, and Jinsong Su.
2021. An external knowledge enhanced graph-based neural network for sentence ordering. *Journal of* Artificial Intelligence Research, 70:545–566.
Yongjing Yin, Linfeng Song, Jinsong Su, Jiali Zeng, Chulun Zhou, and Jiebo Luo. 2019. Graph-based neural sentence ordering. In *Proceedings of the* Twenty-Eighth International Joint Conference on Artificial Intelligence, IJCAI 2019, Macao, China, August 10-16, 2019, pages 5387–5393. ijcai.org.
Yutao Zhu, Kun Zhou, Jian-Yun Nie, Shengchao Liu, and Zhicheng Dou. 2021. Neural sentence ordering based on constraint graphs. 35(16):14656–14664.
## A Detailed Experimental Results
Table 10 lists the detailed error ratio data in Figure 4 for reference. We also report the detailed results of four benchmarks about the controlled experiment for analyzing the gap between training and inference in §5.4 (Table 11) , ablation study of the graph formulation in §5.5 (Table 12) and the performance with random and gradual strategies in
§5.5 (Table 13).
| Distance | 1 | 2 | 3-4 | 5-6 | 7+ |
|------------|-------|-------|-------|-------|------|
| B-TSORT | 19.41 | 13.38 | 6.95 | 4.30 | 3.28 |
| + COVER | 14.18 | 10.48 | 5.35 | 2.15 | 1.23 |
Table 10: The detailed error ratio of predicted pair-wise relative orders w/ and w/o COVER for B-TSort Table 11: The detailed results for Table 3 in §5.4.
Dataset→ NIPS AAN SIND ROCStory
Method Acc τ PMR Acc τ PMR Acc τ PMR Acc τ PMR
B-TSORT + Mesgar et al. (2021) 61.73 80.54 28.38 65.42 80.96 45.10 51.71 56.08 20.99 73.88 78.06 43.57 B-TSORT + COVER 70.77 85.92 48.54 74.15 86.06 62.04 54.85 60.06 27.00 81.04 85.57 67.33
- SkipEdges 68.77 84.51 43.50 71.38 84.56 58.60 54.37 59.29 26.55 80.86 85.23 67.01
- ParaNode 58.03 76.71 25.46 60.80 77.58 39.22 49.02 54.92 15.71 67.74 75.69 39.36
BERSON + Mesgar et al. (2021) 69.11 82.43 40.27 77.83 85.09 58.55 59.81 65.73 32.03 83.65 88.39 70.08 BERSON + COVER 74.86 86.10 50.93 78.13 85.21 59.17 60.31 66.01 32.96 84.80 89.16 72.27
- SkipEdges 70.67 83.61 41.91 77.88 85.13 58.72 60.05 65.91 32.50 84.08 88.97 70.94
- ParaNode 68.87 81.77 38.99 77.79 85.09 58.49 59.73 65.50 31.93 83.18 88.07 68.89
Table 12: The detailed results for Table 4 §5.5.
Dataset→ NIPS AAN SIND ROCStory
B-TSort Acc τ PMR Acc τ PMR Acc τ PMR Acc τ PMR
Random Permutation 69.73 84.46 46.15 73.10 85.15 60.59 54.76 59.53 27.75 79.15 83.78 64.53 Gradual Permutation 70.77 85.92 48.54 74.15 86.06 62.04 54.85 60.06 27.00 81.04 85.57 67.33
BERSON Acc τ PMR Acc τ PMR Acc τ PMR Acc τ PMR
Random Permutation 72.32 84.35 44.30 77.88 85.11 58.79 60.08 65.89 32.56 84.73 89.23 71.43
Gradual Permutation 74.86 86.10 50.93 78.13 85.21 59.17 60.31 66.01 32.96 84.80 89.16 72.27
Table 13: The detailed results for Table 5 in §5.5.
Dataset→ NIPS AAN SIND ROCStory
Method Order Acc τ PMR Acc τ PMR Acc τ PMR Acc τ PMR
BERSON Perfect 75.32 86.96 44.03 85.52 87.71 68.04 69.54 68.79 42.04 88.99 90.42 76.10
BERSON Predicted 69.08 82.32 38.73 77.73 85.04 58.56 59.74 65.53 31.89 83.51 88.35 69.17
+ COVER 74.86 86.10 50.93 78.13 85.21 59.17 60.31 66.01 32.96 84.80 89.16 72.27
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
✓ A1. Did you describe the limitations of your work?
Limitations
✓ A2. Did you discuss any potential risks of your work?
Limitations
✓ A3. Do the abstract and introduction summarize the paper's main claims?
1
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✗ **Did You Use Or Create Scientific Artifacts?**
Left blank.
B1. Did you cite the creators of artifacts you used?
No response.
B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
No response.
B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
No response.
B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
No response.
B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
No response.
B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
No response.
## C ✓ **Did You Run Computational Experiments?** 5
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
5.1 The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance.
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
5.1
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
5.1 and A
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
5.1
## D ✗ **Did You Use Human Annotators (E.G., Crowdworkers) Or Research With Human Participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
No response.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? No response.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
No response.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
No response.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
No response. |
liu-zeldes-2023-gumsum | {GUMS}um: Multi-Genre Data and Evaluation for {E}nglish Abstractive Summarization | https://aclanthology.org/2023.findings-acl.593 | Automatic summarization with pre-trained language models has led to impressively fluent results, but is prone to {`}hallucinations{'}, low performance on non-news genres, and outputs which are not exactly summaries. Targeting ACL 2023{'}s {`}Reality Check{'} theme, we present GUMSum, a small but carefully crafted dataset of English summaries in 12 written and spoken genres for evaluation of abstractive summarization. Summaries are highly constrained, focusing on substitutive potential, factuality, and faithfulness. We present guidelines and evaluate human agreement as well as subjective judgments on recent system outputs, comparing general-domain untuned approaches, a fine-tuned one, and a prompt-based approach, to human performance. Results show that while GPT3 achieves impressive scores, it still underperforms humans, with varying quality across genres. Human judgments reveal different types of errors in supervised, prompted, and human-generated summaries, shedding light on the challenges of producing a good summary. | # Gumsum: Multi-Genre Data And Evaluation For English Abstractive Summarization
Yang Janet Liu and **Amir Zeldes**
Department of Linguistics Georgetown University
{yl879, amir.zeldes}@georgetown.edu
## Abstract
Automatic summarization with pre-trained language models has led to impressively fluent results, but is prone to 'hallucinations', low performance on non-news genres, and outputs which are not exactly summaries. Targeting ACL 2023's 'Reality Check' theme, we present GUMSum, a small but carefully crafted dataset of English summaries in 12 written and spoken genres for evaluation of abstractive summarization. Summaries are highly constrained, focusing on substitutive potential, factuality, and faithfulness. We present guidelines and evaluate human agreement as well as subjective judgments on recent system outputs, comparing general-domain untuned approaches, a fine-tuned one, and a prompt-based approach, to human performance. Results show that while GPT3 achieves impressive scores, it still underperforms humans, with varying quality across genres. Human judgments reveal different types of errors in supervised, prompted, and human-generated summaries, shedding light on the challenges of producing a good summary.
## 1 Introduction
Recent advances in supervised summarization models as well as prompt-based approaches using large pre-trained language models have led to substantial improvements in summary fluency, with prompt-based outputs now surpassing supervised approaches in human evaluation (Goyal et al., 2022). At the same time, researchers in the field repeatedly note that the most commonly used datasets, such as CNN/DailyMail (CNN/DM, Hermann et al. 2015) and Extreme Summarization
(XSum, Narayan et al. 2018), which are large-scale
'found' datasets not designed to facilitate high quality summarization, are problematic, and in many cases contain texts which are not summaries, are incomplete or unfaithful to the texts they relate to, add information not present in texts, or any combination of the above (Reiter, 2022; Liu et al.,
2022a). Existing datasets are also limited to mainly newswire text (cf. Zopf et al. 2016), which is a fraction of extant genres in general and on the Web.
The main contributions of this paper are in providing and evaluating a very genre-diverse dataset and guidelines for carefully crafted, rather than
'found' summaries, which follow the same design across text types. Building on the UD English GUM treebank (Zeldes, 2017), which contains 213 spoken and written texts balanced across 12 different genres, our summaries target three goals:
1) to be **substitutive** (i.e. informative, functioning as a substitute for reading a text, cf. Edmundson 1969; Nenkova and McKeown 2011) rather than indicative (e.g. 'clickbait' designed to attract readership); 2) to be **faithful** to the text, adhering to original formulations wherever possible; 3) to be hallucination-free, meaning summaries make a strong effort not to add any information (even if it is likely to be true), mentioning only entities and events actually contained in the text, thereby preventing typical errors associated with datasets such as XSum (Narayan et al., 2018). Instructions on obtaining the dataset and responses from the human evaluation study as well as evaluation code can be found at https://github.com/janetlauyeung/ GUMSum4EVAL.
1
## 2 Related Work
The problem of mitigating factuality and faithfulness issues in Natural Language Generation (NLG)
has recently received considerable attention, with studies proposing auxiliary tasks using the MultiTask Learning approach to constrain models, such as overlapping entities (Nan et al., 2021), encoding of predicate triples from source documents (Zhu et al., 2021) or encouraging systems to incorporate or copy entities from source documents (Xiao and 1Data is also available from the corpus website at https:
//gucorpling.org/gum/ and guidelines at https://wiki.
gucorpling.org/en/gum/summarization.
Carenini, 2022; Maddela et al., 2022). In addition, Tang et al. (2022) present a thorough investigation of factual errors in summarization and propose a taxonomy of error types with a focus on entity and predication errors, while Thomson et al. (2023)
examine types of accuracy errors made by neural systems and contrast them with human errors.
These papers share concerns about the nature of widely used datasets for English, such as XSum and CNN/DM, but are limited by the lack of evaluation data specifically targeting genre-diverse texts with high-quality summaries: ones which ideally maximize faithfulness, rule out hallucinations, and follow consistent guidelines for what constitutes a summary. Although there are some non-news single-document summarization datasets covering Reddit (Kim et al., 2019) and Podcast data (Rezapour et al., 2022), text types are still quite limited and data is often not publicly available (Tang et al.,
2022). This motivates our work to create openaccess, multi-genre data with consistent guidelines across text types.
## 3 Dataset
Contents GUMSum covers the 213 documents
(amounting to ∼200K tokens) from the 12-genre UD English GUM corpus (Zeldes 2017; specifically GUM V9), which provides gold syntax trees, entity types, coreference resolution, and discourse parses for the data. For this paper, we added summaries to each document in the corpus, by the authors and students in a Computational Linguistics course as part of a class-based project,2 guided by general and genre-specific instructions. Although the range of ∼20 human summarizers is broad as a result, we defined guidelines to constrain summaries and ensure they are maximally 'realitychecked', i.e. faithful and factual, as evaluated below. Documents vary in length, ranging between 167 and 1,878 tokens (mean=957, sd=249.6), and cover the genres in Table 1. Because of the classroom context in which summaries are collected and the natural variation in student styles and adherence to guidelines, all summaries are thoroughly checked by a teaching assistant and the course instructor. For the 24 documents in the UD treebank's official test set of GUM V9, we provide two summaries to support inter-annotator agreement and multiple-reference evaluation.
| Genres | Source | Docs | Toks | øSum.Len (sd) |
|-----------------|------------|---------|-----------|-----------------|
| Interviews | Wikinews | 19 | 18,190 | 49 (6.3) |
| News stories | Wikinews | 23 | 16,145 | 51 (9.0) |
| Travel guides | Wikivoyage | 18 | 16,514 | 59 (8.9) |
| How-to guides | WikiHow | 19 | 17,081 | 67 (6.5) |
| Academic | various | 18 | 17,169 | 35 (11.2) |
| Biographies | Wikipedia | 20 | 18,213 | 44 (9.8) |
| Fiction | various | 19 | 17,510 | 47 (10.3) |
| Web forums | Reddit | 18 | 16,364 | 50 (8.7) |
| Conversations | SBC | 14 | 16,416 | 41 (13.7) |
| Speeches | various | 15 | 16,720 | 46 (9.2) |
| Vlogs | YouTube | 15 | 16,864 | 50 (11.8) |
| Textbooks | OpenStax | 15 | 16,693 | 51 (8.9) |
| total / average | 213 | 203,879 | 50 (12.2) | |
Table 1: Overview and Statistics of GUMSum.
Guidelines Previous literature has characterized
'good' summaries primarily as ones that are concise, accurate, fluent, and coherent (Fabbri et al.,
2021). What these qualities mean varies depending on the summary's objective: whether it is domain-specific or general, indicative (enticing readers to read the text) or informative (aiming to substitute reading it, Nenkova and McKeown 2011) etc. GUMSum's summaries explicitly target a **domain-general, substitutive, maximally**
concise format, which is therefore constrained to:
[1] have at most one sentence / 380 characters3
[2] have the goal of replacing reading the text [3] give participants/time/place/manner of events
[4] form a sentence rather than a fragment
[5] omit distracting information
[6] avoid entities or information not present in the text, even if we are fairly sure it is true
[7] reject synonyms for words in the text
For instance, the summary in (1) for a story involving 'robbers plundering a vault' follows guidelines by providing a declarative-sentence (criteria
[1], [4]), synopsis of events, participants (exactly five robbers), time (a date) and place (*Poughkeepsie*) ([3]), as well as additional details (exact name of the bank, mode of escape). (2) is underspecified
(we do not know when or where the event occurred, criterion [3]). (3) paraphrases the robbers' escape by introducing an entity not in the original text
(uncaught by *police*, violating [6]), and substitutes
'robbed' for 'plundered', a near synonym but a deviation from the original text's style ([7]).
(1) -On March 23, 1999, five bank robbers plundered the vault of First National Bank in Poughkeepsie, NY and escaped in a bus they had stolen.
(2) ,*Bank robbers plundered a vault and escaped.*
(3) ,*Bank robbers who robbed a bank in Poughkeepsie were never caught by police.*
(4) *Some people debate whether the original 3* hour cut of Snyder's movie about Batman and Superman should have been released instead of the shorter version, which prioritized getting to the action faster in order to appeal to a general audience. (Reddit)
(5) *Ash tells about her day, which includes a* yoga class, marketing brand management class, doing some work while having coffee at Saxby's, and finally cooking pasta with peppers for dinner together with her boyfriend Harry. (YouTube CC-BY vlog)
## 4 Evaluation
| R-1 | R-2 | R-L | BS | MS | METEOR | BLEU | BLEURT | |
|-------------------------|--------------------------|-------|------|------|----------|--------|----------|------|
| SimCLS | 23.1 | 6.2 | 17.2 | 86.0 | 12.1 | 13.4 | 2.1 | 31.9 |
| BRIO | 27.8 | 10.2 | 21.2 | 87.2 | 15.9 | 18.0 | 3.7 | 36.3 |
| GPT3-DV2 31.1 12.1 25.1 | 88.5 | 21.1 | 20.8 | 3.8 | 42.2 | | | |
| BRIO-FT∗ | 37.3 12.0 27.1 | 88.7 | 27.4 | 27.6 | 6.1 | 44.3 | | |
| Human 2 | 38.9 12.7 28.4 88.8 28.5 | 33.0 | 7.5 | 50.2 | | | | |
(Liu and Liu, 2021), as well as prompt-based outputs using a GPT3 model (Brown et al., 2020),
GPT3-text-davinci-002 (GPT3-DV2), with the prompt '*Summarize the text above in one sentence.*'.
We chose system models trained on the XSum dataset, since it has one-sentence summaries more in line with the GUMSum data. However, because systems have never seen data in many of GUMSum's genres, we also add an additional experiment in which we fine-tune the higher-scoring supervised system, i.e. BRIO's trained-model on XSum for *generation*, by continuing training it on the 165 documents in the UD treebank's train set of the underlying GUM V9 corpus (BRIO-FT in Table 2; details/splits and system output selection can be found in Appendix B). Scores are compared to a second human-written summary obtained from a human evaluation study, using the same guidelines.
Although these examples illustrate newswire language, GUMSum covers very different spoken and written text types as well:
Table 2: Automatic Evaluation Metrics of System Outputs and Human Agreement (∗ = 3 run average).
Table 2 shows that while systems have impressive scores for ROUGE (Lin, 2004), BERTScore
(BS, Zhang et al. 2020), MoverScore (MS, Zhao et al. 2019), METEOR (Banerjee and Lavie, 2005),
BLEURT (Sellam et al., 2020), and BLEU (Papineni et al., 2002), they still lag behind the human summaries across the board. Reproducing findings by Goyal et al. (2022), GPT3-DV2 outperforms supervised systems trained on XSum, though our data contains much more diverse genres than those in that paper. However, fine-tuning on even a small amount of GUMSum data (165 documents)
in this paper already outperforms GPT3-DV2. This strongly suggests that a major problem with supervised systems in domain-general settings is simply the training data itself. Qualitative inspection of outputs suggests fine-tuning was particularly helpful for summarizing conversations, Reddit, and how-to guides, on which all systems struggled. For humans, genre differences were much less pronounced, with lowest scores surprisingly for news.
Figure 2 gives a detailed breakdown of BLEURT
scores (Sellam et al., 2020) by genre for each scenario. Human scores lead in every genre except academic, news, and interview, and generally vary less by genre than systems. BRIO-FT is improved The summary in (4) follows the guidelines by not mentioning that the discussion is on Reddit
([6], the interlocutors are simply 'people'), since Reddit is not mentioned. Similarly, while Zack Snyder's film *Batman v Superman: Dawn of Justice* is most likely being discussed, it is not named explicitly, leading to the formulation 'Snyder's movie about Batman and Superman'. In (5), the summary focuses on conveying events which happen over the course of a vlog, but again, the unmentioned word 'vlog' is avoided, while specific details about the participants and circumstances (people, but also the type of class) are prioritized. Summaries are thus highly constrained to remain faithful and avoid even minor potential hallucinations, such as completing the title of a film. For more on genre-specific guidelines and examples, see Appendix A. Automatic Evaluation To evaluate how well current neural approaches produce 'reality-checked' summaries approaching the ones in GUMSum, we obtain system outputs from two recent supervised systems, BRIO (Liu et al., 2022b) and SimCLS
![3_image_0.png](3_image_0.png)
especially on genres that diverge from XSum, such
![3_image_1.png](3_image_1.png)
as conversations, travel guides from Wikivoyage, and how-to guides from Wikihow.
Finally, the human scores provide some numbers for ceiling performance as reflected by automatic metrics. Comparing human numbers to the bestsystem numbers suggests that there is a substantial gap for systems which have never been trained on in-domain data. However, for the the fine-tuning
(FT) scenario, we notice that ROUGE scores are neck-and-neck with the second human summary, likely because the system is trained with an objective averaging R1, R2, and R-L, on which it excels. By contrast, metrics more focused on verbatim overlap, such as BLEU, or semantic similarity, such as BLEURT, retain a more substantial gap, with FT results on BLEURT being close to GPT3-DV2 and still nearly 6 points below human performance.
It is an established finding however that metrics do not tell the whole story (Novikova et al., 2017; Reiter, 2018; Marasovic´, 2018; Gehrmann et al.,
2022). In fact, we regularly observe hallucinations, especially in XSum-trained systems, such as prefixing generic leads (e.g. '*In our series of letters from* British journalists ...', when there are no journalists involved) or inserting entities and events not mentioned in the text. We thus conduct a human evaluation of system outputs below, focusing on substituitivity, hallucinations, and faithfulness, and more importantly, apply the same evaluation criteria to the human-written summaries for a more targeted evaluation, as advocated by Liu et al. (2022a).
Human Evaluation We asked 12 Linguistics students to evaluate the full texts and the summaries of the 24 documents in the test set of the source GUM V9 corpus and to produce an additional summary for their assigned texts (see detailed instructions in Appendix C).4 Figure 1 shows humans overwhelmingly preferred the human-written summary (1(a), 83%, with exceptions citing gold summaries as less pleasant to read), and also found it best at substituting reading the text (1(b), 79%).
Pretrained supervised systems were judged to be highly non-substitutive (88% for SimCLS, 79% for BRIO), while 71% of GPT3-DV2 outputs were judged moderately so.
While all systems exhibited some hallucinations and unfaithfulness, GPT3-DV2 performed best, in part because its outputs tended to be short (mean 138 characters vs. human 272 characters) and general, giving fewer chances for issues. At the same time, hallucination types varied substantially. Human violations in both categories were rare and subtle, resulting from evaluators adhering to guidelines very literally: for example, one evaluator proposed that a human summary's use of the pronoun
'she' in reference to a vlogger whose pronouns had not been stated is a form of hallucination, while another pointed out that a mention of 'Washington' in a news article was a faithfulness issue, since without specifying 'DC', the place is ambiguous. Hallucinations from GPT3-DV2 were more pronounced
(e.g. designating a speaker mentioning retirement as an attendee of a seminar about retirement, which was not mentioned), while XSum-trained systems had more extreme cases, such as incorrectly attributing a speech about New Zealand to its former Prime Minister John Key (BRIO), claiming a fictional short story is a BBC documentary (SimCLS),
or adding to a textbook excerpt on the Civil War by calling it the longest, most expensive conflict in US
history (BRIO and SimCLS). Below we provide a comparison of outputs for two documents and a qualitative analysis.
We also asked evaluators whether they could tell if summaries were NLG outputs, and learned that while 'NLG' guesses were correct, and most human summaries were also recognized, humans could not tell for certain in 56% of the outputs they evaluated (incl. 8% of human-written cases).
Qualitative Analysis Figure 3 shows two humanwritten and several system-generated summaries, for a conversation in (a) and for a news text in (b).5 Note the typical hallucinated lead about journalists in the first BRIO output, which disappears after fine-tuning, and a similar insertion about a Nigerian writer in the output for SimCLS. GPT3-DV2 does not show similar issues, but misses important context information, e.g. the purpose of the conversation revolving around whether speakers should go to a specific dance class, and why or why not.
The news output is substantially better for all systems. BRIO disagrees with SimCLS and GPT3 on the number of 'remaining' space shuttles: three remained to be retired, but there were four total in the article, including the already retired shuttle Discovery. All pre-trained system outputs are substantially less detailed than the human summaries, which add information about time and place of the announcement, or the list of space shuttles. Human 2 commits a similar hallucination error to BRIO in identifying the already retired Discovery as being retired at document creation time. However, both human summaries agree that a prominent part of the story involves the disappointment or criticism from sites that were not selected to house retired shuttles, a topic to which most of the latter half of the original story is dedicated. The fine-tuned model successfully adds more details in line with 5The PDFs of the full-text of these two documents are provided in the repository of the paper for reference.
![4_image_0.png](4_image_0.png)
![4_image_1.png](4_image_1.png)
(b) *news*
the human summaries, but also fails to capture the site controversy in the second half of the document.
## 5 Conclusion
The dataset and guidelines introduced in this paper make a step towards consistent and constrained multi-genre evaluation of factual summarization.
Our results show that domain-general summarization is still hampered by serious reliability and factuality problems, which may only become apparent when confronted with a dataset with strict
'reality check' constraints and diverse text types.
Even small amounts of such data can be used to fine-tune pre-trained systems, with measurable improvements for system outputs.
The human evaluation study also revealed that pre-trained systems are bad at delivering substitutive summaries, perhaps because, as pointed out in Reiter (2022), "summarisation datasets should contain summaries," but often they do not. Meanwhile, human identification of possibly more minor hallucinations in human-written summaries also suggests that more work is needed in delimiting what a 'reality check' for summaries should include.
## Limitations
GUMSum is designed to constrain summaries to one sentence for all 12 genres, which raises the question of whether one-sentence summaries are useful for all possible genres or long-document summarization. This is a complex topic that needs in-depth investigation. For GUMSum, as mentioned in Section 3, document length is limited to 167–1,878 tokens. Moreover, in analyzing human evaluators' responses to two open-ended questions
([1] and [2] in Appendix C), we noticed that virtually all evaluators mentioned that limiting the summary to one-sentence is very difficult and that some genres were easier than others. For example, one evaluator who was given a vlog and a travel guide commented that,
"The travel guide was much more difficult than the vlog, likely because it was longer and denser. [...] the travel guide packed a lot more information into its pages and within each sentence."
This indicates that genre differences at the summary-level is not trivial due to the style of the original text.
Additionally, this paper examined a specific subset of pre-trained systems and one version of GPT3's pretrained language model
(i.e. GPT3-text-davinci-002), producing findings which may not generalize to other settings.
The dataset used for the evaluation is also substantially smaller than those used in most work on summarization, due to the fact that it was carefully crafted based on both general and genre-specific guidelines to be substitutive and to avoid hallucinations and faithfulness issues, rather than originating in a found dataset, in order to conduct a more targeted evaluation, as recommended by Liu et al. (2022a). While it is inevitable that more data would lead to different results, we do not believe that system rankings or overall findings would be substantially different, so long as the guidelines and genres examined here remain stable.
Finally, we must raise a further limitation involving text type and language: our study encompasses 12 specific written and spoken genres available in the UD English GUM corpus, but does not capture findings for other genres, or indeed other languages, which deserve more attention in future studies.
## Ethics Statement
The data produced in this paper is made openly available in accordance with the original licenses of the underlying resources and academic fair use.
we are keenly aware that NLP, and particularly NLG technology can be misused adversely, for example to generate fake news, we believe the risks posed by models which are not 'reality-checked' outweigh those associated with improving models to prevent factuality and generalization issues across domains. The latter issue is particularly relevant, since technologies limited to particular domains and styles will primarily benefit actors in sectors engaged with that data (e.g. news, for example, financial reporting), while underserving the public in other areas (e.g. computer-mediated communication). We therefore concur with this year's ACL theme that work towards 'reality checking' our outputs is a net positive.
## Acknowledgements
The human evaluation study was funded by a GSAS-GradGov Research Project Award (GRPA)
towards graduate students' research and professional development endeavors at Georgetown University. We thank the following participants for their valuable participation and insightful feedback in our human evaluation study (alphabetically ordered by last names): Kris Cook, Jessica Cusi, Helen Dominic, Luke Gessler, Caroline Gish, Lauren Levine, Cynthia Li, Kristina Lignell, Davide Locatelli, Emma Manning, and others who prefer to stay anonymous. We thank Nathan Schneider and the anonymous reviewers for their feedback.
## References
Satanjeev Banerjee and Alon Lavie. 2005. METEOR:
An automatic metric for MT evaluation with improved correlation with human judgments. In *Proceedings of the ACL Workshop on Intrinsic and Extrinsic Evaluation Measures for Machine Translation and/or Summarization*, pages 65–72, Ann Arbor, Michigan. Association for Computational Linguistics.
Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel Ziegler, Jeffrey Wu, Clemens Winter, Chris Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack
Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020.
Language Models are Few-Shot Learners. In *Advances in Neural Information Processing Systems*
(NIPS), volume 33 of *NIPS'20*, pages 1877–1901, Red Hook, NY, USA. Curran Associates, Inc.
H. P. Edmundson. 1969. New Methods in Automatic Extracting. *J. ACM*, 16(2):264–285.
Alexander R. Fabbri, Wojciech Krysci ´ nski, Bryan Mc- ´
Cann, Caiming Xiong, Richard Socher, and Dragomir Radev. 2021. SummEval: Re-evaluating summarization evaluation. Transactions of the Association for Computational Linguistics, 9:391–409.
Sebastian Gehrmann, Elizabeth Clark, and Thibault Sellam. 2022. Repairing the cracked foundation: A survey of obstacles in evaluation practices for generated text. *arXiv*.
Tanya Goyal, Junyi Jessy Li, and Greg Durrett. 2022.
News Summarization and Evaluation in the Era of GPT-3. *arXiv*.
Karl Moritz Hermann, Tomáš Kociský, Edward Grefen- ˇ
stette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 2015. Teaching machines to read and comprehend. In *Proceedings of the 28th International Conference on Neural Information Processing* Systems - Volume 1, NIPS'15, page 1693–1701, Cambridge, MA, USA. MIT Press.
Byeongchang Kim, Hyunwoo Kim, and Gunhee Kim.
2019. Abstractive summarization of Reddit posts with multi-level memory networks. In *Proceedings* of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics:
Human Language Technologies, Volume 1 (Long and Short Papers), pages 2519–2531, Minneapolis, Minnesota. Association for Computational Linguistics.
Chin-Yew Lin. 2004. ROUGE: A package for automatic evaluation of summaries. In *Text Summarization Branches Out*, pages 74–81, Barcelona, Spain.
Association for Computational Linguistics.
Yixin Liu, Alexander R. Fabbri, Pengfei Liu, Yilun Zhao, Linyong Nan, Ruilin Han, Simeng Han, Shafiq Joty, Chien-Sheng Wu, Caiming Xiong, and Dragomir Radev. 2022a. Revisiting the gold standard: Grounding summarization evaluation with robust human evaluation. *arXiv*.
Yixin Liu and Pengfei Liu. 2021. SimCLS: A simple framework for contrastive learning of abstractive summarization. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 1065–1072, Online. Association for Computational Linguistics.
Yixin Liu, Pengfei Liu, Dragomir Radev, and Graham Neubig. 2022b. BRIO: Bringing order to abstractive summarization. In Proceedings of the 60th Annual
Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2890–2903, Dublin, Ireland. Association for Computational Linguistics.
Mounica Maddela, Mayank Kulkarni, and Daniel Preotiuc-Pietro. 2022. EntSUM: A data set for entitycentric extractive summarization. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers),
pages 3355–3366, Dublin, Ireland. Association for Computational Linguistics.
Ana Marasovic. 2018. ´ NLP's generalization problem, and how researchers are tackling it. *The Gradient*.
Feng Nan, Ramesh Nallapati, Zhiguo Wang, Cicero Nogueira dos Santos, Henghui Zhu, Dejiao Zhang, Kathleen McKeown, and Bing Xiang. 2021. Entitylevel factual consistency of abstractive text summarization. In *Proceedings of the 16th Conference of* the European Chapter of the Association for Computational Linguistics: Main Volume, pages 2727–2733, Online. Association for Computational Linguistics.
Shashi Narayan, Shay B. Cohen, and Mirella Lapata.
2018. Don't give me the details, just the summary!
topic-aware convolutional neural networks for extreme summarization. In *Proceedings of the 2018* Conference on Empirical Methods in Natural Language Processing, pages 1797–1807, Brussels, Belgium. Association for Computational Linguistics.
Ani Nenkova and Kathleen R. McKeown. 2011. Automatic Summarization. Foundations and Trends in Information Retrieval, 5(2-3):103–233.
Jekaterina Novikova, Ondˇrej Dušek, Amanda Cercas Curry, and Verena Rieser. 2017. Why we need new evaluation metrics for NLG. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2241–2252, Copenhagen, Denmark. Association for Computational Linguistics.
Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. BLEU: A Method for Automatic Evaluation of Machine Translation. In *Proceedings* of the 40th Annual Meeting on Association for Computational Linguistics, ACL '02, page 311–318, USA.
Association for Computational Linguistics.
Ehud Reiter. 2018. A Structured Review of the Validity of BLEU. *Computational Linguistics*, 44(3):393–
401.
Ehud Reiter. 2022. Summarisation datasets should contain summaries!
Rezvaneh Rezapour, Sravana Reddy, Rosie Jones, and Ian Soboroff. 2022. What makes a good podcast summary? In *Proceedings of the 45th International ACM*
SIGIR Conference on Research and Development in Information Retrieval, SIGIR '22, page 2039–2046, New York, NY, USA. Association for Computing Machinery.
Thibault Sellam, Dipanjan Das, and Ankur Parikh. 2020.
BLEURT: Learning robust metrics for text generation. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7881–7892, Online. Association for Computational Linguistics.
Liyan Tang, Tanya Goyal, Alexander R. Fabbri, Philippe Laban, Jiacheng Xu, Semih Yahvuz, Wojciech Krys-´
cinski, Justin F. Rousseau, and Greg Durrett. 2022. ´
Understanding Factual Errors in Summarization: Errors, Summarizers, Datasets, Error Detectors. *arXiv*.
Craig Thomson, Ehud Reiter, and Barkavi Sundararajan.
2023. Evaluating factual accuracy in complex datato-text. *Computer Speech & Language*.
Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander M. Rush. 2019. HuggingFace's Transformers: State-of-the-art Natural Language Processing. *arXiv*.
Wen Xiao and Giuseppe Carenini. 2022. Entity-based SpanCopy for Abstractive Summarization to Improve the Factual Consistency. *Arxiv Preprint*.
Amir Zeldes. 2017. The GUM Corpus: Creating Multilayer Resources in the Classroom. *Language Resources and Evaluation*, 51(3):581–612.
Tianyi Zhang, Varsha Kishore, Felix Wu*, Kilian Q.
Weinberger, and Yoav Artzi. 2020. BERTScore:
Evaluating Text Generation with BERT. In *International Conference on Learning Representations*.
Wei Zhao, Maxime Peyrard, Fei Liu, Yang Gao, Christian M. Meyer, and Steffen Eger. 2019. MoverScore:
Text generation evaluating with contextualized embeddings and earth mover distance. In *Proceedings* of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 563–578, Hong Kong, China. Association for Computational Linguistics.
Chenguang Zhu, William Hinthorn, Ruochen Xu, Qingkai Zeng, Michael Zeng, Xuedong Huang, and Meng Jiang. 2021. Enhancing factual consistency of abstractive summarization. In *Proceedings of the* 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 718–733, Online.
Association for Computational Linguistics.
Markus Zopf, Maxime Peyrard, and Judith EckleKohler. 2016. The next step for multi-document summarization: A heterogeneous multi-genre corpus built with a novel construction approach. In Proceedings of COLING 2016, the 26th International
Conference on Computational Linguistics: Technical Papers, pages 1535–1545, Osaka, Japan. The COLING 2016 Organizing Committee.
## A Genre-Specific Guidelines
The following excerpts from genre-specific guidelines exemplify instructions which were given to annotators working on documents in those specific genres. The full guidelines can be viewed at https:
//wiki.gucorpling.org/gum/summarization.
## A.1 Biographies
Summaries for biographies and other texts centered around an individual:
- typically take the form "Kim is/was a French X who ... "
- typically include information about what this person is/was known for ("... best known for
...")
- information about the time period and place is typically included ("a Japanese X", "a German X living in France", "a 19th century Kenyan X")
## Examples:
- Jared Padalecki is an award winning American actor who gained prominence in the series Gilmore Girls, best known for playing the role of Sam Winchester in the TV series Supernatural, and for his active role in campaigns to support people struggling with depression, addiction, suicide and self-harm.
- Jenna Nicole Mourey, better known as Jenna Marbles, is a very successful American YouTube personality, vlogger, comedian and actress, known for her videos "How To Trick People Into Thinking You're Good Looking" and "How To Avoid Talking To People You Don't Want To Talk To".
## A.2 Fiction
- In non-metalinguistic texts (i.e. fiction itself, not texts about fiction), summarize the text as if it is a literal, true story; for example, "Huckleberry Finn is fishing", not "In this extract from the novel Huckleberry Finn, fictional character Huck is..."
- Even if described events are factually incorrect, or involve science fiction or imaginary contexts, we summarize without commenting on this (e.g. "Three unicorns chat and decide to go fishing")
- Unnamed active protagonists should be referred to as "a/the protagonist"
- An unnamed narrator who is not an agent in the story can be referred to as "a/the narrator"
Examples:
- Jacques Chalmers, a starfighter pilot for the Empire, is terrified of overwhelming enemy forces as he leaves his deployment carrier together with his comrades, and later narrowly escapes the Enemy after witnessing the destruction of the Kethlan system.
- Santa Claus's second wife, Betty Moroz, plays online video games with her friends Williams and Gomez while making dinner on Christmas Eve, and is then disappointed when Santa gets a call from his secretary Ginny and goes out to take care of the children of the world, missing dinner.
## A.3 Vlogs
- Typically a present tense third person style is used, and events are ordered in sequence, for example: "Ash tells about her day, which includes a yoga class, marketing brand management class, doing some work while having coffee at Saxby's, and finally cooking pasta with peppers for dinner together with her boyfriend Harry."
- As in conversations, people other than the vlogger who play a significant role in the vlog should be mentioned, but if their name is not mentioned within the excerpt being annotated, then they can only be referred to using generic terms ("a friend/relative/...")
- If the vlogger does not mention that they are a vlogger in the video, or that this is a vlog, do not refer to them as such (e.g. "Jasmine tells about ...", not "YoutTube vlogger Jasmine tells
...")
- Jasmine tells about how she tested positive for Covid on December 16th after she spent time without a mask with her sister, who also tested positive, and recounts her symptoms over several days, starting from a sore throat, then fever and congestion, and finally a partial loss of smell and taste and shortness of breath.
## B Experiment Details B.1 Fine-Tuning On Brio
All three fine-tuning sessions were conducted using 1 NVIDIA A100 40GB GPU on Google Cloud Platform, which cost $2.8 per hour.6 The configurations of BRIO for XSum7 were used except that the default number of epochs was increased to 1000 from 100 in order to achieve better validation performance on GUMSum's dev set. Specifically, we take BRIO's *generation* model checkpoint on XSum from Huggingface's Transformers (Wolf et al., 2019).8 The average training time for a single run was about 7 hours. Table 3 shows the validation performance of each run on the documents from the dev set of GUM V9. Both dev and test partitions contain 24 documents, 2 for each genre, leaving 165 documents for training.9
| VAL_LOSS | VAL_R-1 | VAL_R-2 | VAL_R-L BEST_epoch | | |
|------------|-----------|-----------|----------------------|------|-----|
| RUN 1 | 72.3 | 39.3 | 14.5 | 29.3 | 899 |
| RUN 2 | 71.9 | 39.9 | 15.3 | 29.2 | 799 |
| RUN 3 | 73.0 | 38.3 | 14.1 | 28.6 | 849 |
| AVG. | 72.4 | 39.1 | 14.6 | 29.0 | − |
Table 3: FT Validation Performance on 24 dev docs.
## B.2 Gpt3 Output Selection
We use OpenAI's GPT3-text-davinci-00210 with the prompt Summarize the text above in one sentence. and keep the default settings. Due to the nondeterministic nature and in order to ensure a fair comparison, we generated 3 summaries for each text and computed average ROUGE scores (the mean of R-1/2/L) against the human-written summaries and selected the summary with the middle average ROUGE score. At the time, the Davinci Examples:
model costs $0.0200 / 1K tokens. To avoid repetitive computation and to facilitate further research, we release all the GPT3-generated summaries for GUMSum. No post-editing was made on the GPT3generated summaries.
## B.3 Brio-/Simcls- Generated Summaries
We use BRIO's *generation* model checkpoint on XSum available on Huggingface
(i.e. Yale-LILY/brio-xsum-cased) to obtain BRIO-generated summaries for GUMSum's texts.
For SimCLS (Liu and Liu, 2021), we use the checkpoint on XSum provided by the authors in their GitHub repository.11 Although some BRIO-/SimCLS-generated summaries contain trailing punctuation, no post-editing was made on these system outputs.
## C Human Evaluation Details
We recruited 12 students who are native speakers of English to participate in this human evaluation study. Each student was assigned two documents from two different genres. They were given
![9_image_1.png](9_image_1.png)
4 weeks to work on a series of tasks for each document, as shown in Figure 4 below. Every student received a Google Form for each assigned text.
Tasks 1 and 2 Students were asked to review both general and genre-specific guidelines before writing their own one-sentence summary for the assigned document. We also asked for their consent 11https://github.com/yixinL7/SimCLS
to release their written-summaries to GUMSum to
![9_image_0.png](9_image_0.png)
facilitate multiple-reference evaluation and interannotator agreement, as shown in Figure 5.
Tasks 3 and 4 Students were presented both system-generated and human-written summaries in order to evaluate various aspects of each summary candidate. The order of outputs shown to the evaluators was randomized for each source text, and we also ask them to not modify their written summary after viewing the presented ones. In addition, we ask the evaluators to justify their decisions in a few sentences for certain questions:
[1] **Please choose your most and least preferred**
summaries respectively. You can select more than one for each category below if multiple summaries are equally most or least preferred by you.
- Please justify your decisions above in a few sentences below. For instance, you could say, "I prefer summary X over summary Y because X doesn't contain the main point (while a minor one is included) or Y contains incorrect information" etc. The more detailed the justifications, the better!
[2] **How substitutive is each summary candidate?** According to the guidelines, substitutive summaries replace reading the text as best as possible in one sentence - they are not just meant to attract readers to reading the text; they are meant to save you the trouble of reading it)
[3] Does the summary include information NOT
PRESENT in the text **even if you happen to**
know that it is factually correct?
- Please justify your decisions (esp. the ones you chose YES for) above in a few sentences below. For instance, you can list the relevant information below.
[4] Does the summary include **INCORRECT information**? (i.e. information **PRESENT** in the original text but used or interpreted *in a* different, misleading, or incorrect way in the summary; in other words, this summary is not faithful to the original text)
- Please justify your decisions (esp. the ones you chose YES for) above in a few sentences below. For instance, you can list the relevant information below.
[5] **Is the summary written in good English?**
(e.g. no grammar errors or incomplete sentences etc.)
[6] **Can you tell which summary is humanwritten and which one is computergenerated?** If you are very unsure about this (confidence level at or below 50%), then choose the "can't tell" category.
- Please justify your decisions above in a few sentences below. *In particular, if* you have a very strong opinion about a specific summary or certain summaries, we'd highly appreciate it if you could share your valuable thoughts with us.
Wrapping-up The last part of the evaluation study is to ask evaluators to first rate the level of difficulty of the entire evaluation task on a scale of 1 to 5 where 1 means 'Not difficult at all' and 5 means 'Extremely difficult'. We also collect their responses to the following open-ended questions in order to help us get a better idea of the challenges of producing a good summary for various text types, which are very valuable insights to guide future research on designing more specifically defined guidelines and targeted evaluation.
[1] Based on your experience here, what's the most difficult or challenging thing you found when writing a one-sentence summary for the genre you are assigned?
[2] Is there anything else you would like to share regarding your experience of writing a summary and/or evaluating other existing summaries?
## C.1 Additional Plots Of Responses From The Human Evaluation Study
Figure 6 shows additional responses on English fluency quality for selected systems vs. human performance, as well as a breakdown of annotators' guesses as to whether they were looking at human or system summaries.
![10_image_0.png](10_image_0.png)
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
✓ A1. Did you describe the limitations of your work?
Limitations A2. Did you discuss any potential risks of your work?
Not applicable. Left blank.
✓ A3. Do the abstract and introduction summarize the paper's main claims?
Abstract and 1
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✓ **Did You Use Or Create Scientific Artifacts?** 3
✓ B1. Did you cite the creators of artifacts you used?
1 and 3
✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
3 and Ethics Statement
✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
3 B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
Not applicable. Left blank.
✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
3
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
3 and Appendix B.1
## C ✓ **Did You Run Computational Experiments?** 4
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
4 and Appendix B
The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance.
C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Not applicable. Left blank.
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
4 and Appendix B
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
4 and Appendix B
D ✓ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
4
✓ D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
Appendix C
✓ D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)?
3, 4 (Human Evaluation), and Appendix C
✓ D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
3, 4 (Human Evaluation), and Appendix C
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
Not applicable. Left blank.
✓ D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
3, 4 (Human Evaluation), and Appendix C |
fang-etal-2023-improving | Improving Grammatical Error Correction with Multimodal Feature Integration | https://aclanthology.org/2023.findings-acl.594 | Grammatical error correction (GEC) is a promising task aimed at correcting errors in a text. Many methods have been proposed to facilitate this task with remarkable results. However, most of them only focus on enhancing textual feature extraction without exploring the usage of other modalities{'} information (e.g., speech), which can also provide valuable knowledge to help the model detect grammatical errors. To shore up this deficiency, we propose a novel framework that integrates both speech and text features to enhance GEC. In detail, we create new multimodal GEC datasets for English and German by generating audio from text using the advanced text-to-speech models. Subsequently, we extract acoustic and textual representations by a multimodal encoder that consists of a speech and a text encoder. A mixture-of-experts (MoE) layer is employed to selectively align representations from the two modalities, and then a dot attention mechanism is used to fuse them as final multimodal representations. Experimental results on CoNLL14, BEA19 English, and Falko-MERLIN German show that our multimodal GEC models achieve significant improvements over strong baselines and achieve a new state-of-the-art result on the Falko-MERLIN test set. | # Improving Grammatical Error Correction With Multimodal Feature Integration
Tao Fang1∗ Jinpeng Hu2∗† Derek F. Wong1† **Xiang Wan**2 Lidia S. Chao1 **Tsung-Hui Chang**2 1NLP2CT Lab, Department of Computer and Information Science, University of Macau [email protected] {derekfw,lidiasc}@um.edu.mo 2Shenzhen Research Institute of Big Data, The Chinese University of Hong Kong, Shenzhen, Guangdong, China [email protected] [email protected] [email protected]
## Abstract
Grammatical error correction (GEC) is a promising task aimed at correcting errors in a text. Many methods have been proposed to facilitate this task with remarkable results.
However, most of them only focus on enhancing textual feature extraction without exploring the usage of other modalities' information
(e.g., speech), which can also provide valuable knowledge to help the model detect grammatical errors. To shore up this deficiency, we propose a novel framework that integrates both speech and text features to enhance GEC. In detail, we create new multimodal GEC datasets for English and German by generating audio1 from text using the advanced text-to-speech models. Subsequently, we extract acoustic and textual representations by a multimodal encoder that consists of a speech and a text encoder. A mixture-of-experts (MoE) layer is employed to selectively align representations from the two modalities, and then a dot attention mechanism is used to fuse them as final multimodal representations. Experimental results on CoNLL14, BEA19 English, and FalkoMERLIN German show that our multimodal GEC models achieve significant improvements over strong baselines and achieve a new stateof-the-art result on the Falko-MERLIN test set.
## 1 Introduction
Grammatical error correction (GEC) is one of the promising applications in natural language processing (NLP), aiming to correct sentences containing grammatical errors. GEC has attracted substantial attention in the past few decades owing to its importance in writing assistance for language learners
(Rothe et al., 2021; Zhao and Wang, 2020; Qorib et al., 2022; Wan et al., 2020; Chollampatt and Ng, 2018; Tarnavskyi et al., 2022; Kaneko et al., 2020;
![0_image_0.png](0_image_0.png)
Figure 1: A comparison between general GEC and multimodal GEC tasks. The top is the general GEC system, which only relies on text modality, and the bottom is the proposed multimodal GEC task combining text and its corresponding speech.
Zhang et al., 2022a; Fang et al., 2023a; Zhang et al., 2023a; Fang et al., 2023b; Zhang et al., 2023b).
In recent years, pre-trained Transformer-based models have proven effective in many NLP tasks
(Hu et al., 2022a,b; Clinchant et al., 2019; Liu and Lapata, 2019; Hu et al., 2023b; Zhong et al., 2022; Liu et al., 2021; Li et al., 2022), including GEC
(Gong et al., 2022; Li et al., 2023), because these models consist of multiple-layer multi-head attention and are trained with massive language data so that they are more powerful in feature extraction than other counterpart models. For example, Kaneko et al. (2020) first proposed to fine-tune BERT with the GEC corpus and then use the output of BERT as additional features to enhance GEC.
Rothe et al. (2021) used the T5 structure (Raffel et al., 2020) to refine the GEC corpus (i.e., CLang8)
and obtained promising results in GEC for different languages. Furthermore, Qorib et al. (2022); Tarnavskyi et al. (2022) employed binary classification or majority votes on span-level edits to ensemble multiple Transformer-based models.
Although these methods have achieved considerable improvements, they may focus on the better use of textual data while failing to take other modalities into consideration (e.g., speech). Many studies have shown that other modality data (e.g.,
speech) can effectively enhance feature extraction and thus promote model performance, such as risk forecasting (Sawhney et al., 2020), semantic matching (Huzaifah and Kukanov, 2022), etc. For example, Huzaifah and Kukanov (2022) studied a joint speech-text embedding space through a semantic matching objective and achieved better results in downstream tasks. Kim and Kang (2022)
proposed to learn the cross-modality interaction between acoustic and textual information for emotion classification, which outperformed unimodal models. These works illustrate that audio signals can be regarded as complementary information and provide valuable features to promote text processing.
Besides, intuitively, the audio with grammatical errors can be easily captured by the native speakers according to their spoken language experiences, which can implicate that speech should be effective in helping the model to distinguish whether the text contains ungrammatical elements.
Therefore, in this paper, we propose to integrate speech and text features to promote GEC, with an example shown in Figure 1. Firstly, owing to the lack of multimodal datasets for GEC, we adopt advanced text-to-speech (TTS) models to automatically generate audio for each instance in GEC
datasets. Afterward, we extract acoustic and textual representations by a multimodal encoder that consists of pre-trained speech and text encoders. Furthermore, we propose to utilize an MoE layer to selectively align features from speech and text modalities, and then simple dot attention is applied to fuse them as final multimodal representations, which are then input to a pre-trained decoder to generate corrected sentences. Experimental results on English and German benchmarks illustrate the effectiveness of our proposed model, where our model achieves significant improvements over strong unimodal GEC baselines. Further analysis shows that our multimodal GEC model demonstrates significant improvements in most POS-based fine-grained error types, as well as in the major Operation-Level error types such as word substitutions, missing words, and unnecessary words.
The contributions are concluded as follows:
- To the best of our knowledge, this paper is the first to utilize a multimodal model to combine audio and text features to facilitate GEC.
- This paper constructs multimodal GEC datasets for English and German, where each sample in the dataset is a triple (ungrammatical text, audio, grammatical text).
- This paper proposes to use a mixture-of-experts module to dynamically align text and speech pairs for multimodal GEC.
- This paper reveals the gains and losses of incorporating speech modality into GEC on error types, providing clues for future research.
## 2 Data Construction
Owing to the lack of speech data in the GEC task, we need to construct multimodal GEC datasets for multimodal GEC tasks. Therefore, in this section, we give the details of dataset construction. Speech processing has achieved promising improvement over the past few decades, including converting sentences in the text into utterances (Ren et al., 2019; Qi et al., 2023). Therefore, we employ the advanced speech synthesis system to convert each piece of source side of GEC data (i.e., the ungrammatical side) into audio data to construct GEC multimodal data. As a result, each example in the GEC
dataset is expanded into a triplet consisting of the ungrammatical sentence, the audio generated from the corresponding ungrammatical sentence, and the grammatical sentence.
## 2.1 English Gec Multimodal Data
For constructing the English GEC multimodal dataset, we adopt the FastSpeech22text-to-speech model (Wang et al., 2021) to produce audio data from the source side of English GEC data. Specifically, to construct GEC multimodal training data, we convert the distilled English CLang8 GEC data
(Rothe et al., 2021) into audio data. For constructing development and test sets, we select the widelyused CoNLL14 (Ng et al., 2014) and BEA19
(Bryant et al., 2019) English GEC benchmarks. For the CoNLL14 benchmark, the CoNLL13 (Ng et al.,
2013) and the official-2014.combined.m2 version of CoNLL14 are used for constructing multimodal development and test sets, respectively. For the BEA19 benchmark, we use the BEA19 development and test sets to construct audio data.
![2_image_0.png](2_image_0.png)
| LAN. | DATA | TRAIN | DEV | TEST |
|------------|------------|------------|-------|--------|
| (#Triples) | (#Triples) | (#Triples) | | |
| CL8-EN | 2.2M | - | - | |
| BEA19 | - | 4,384 | 4,477 | |
| EN | CONLL13 | - | 1,379 | - |
| CONLL14 | - | - | 1,312 | |
| DE | CL8-DE | 110K | - | - |
| FALKO-ME. | 12.9K | 2,503 | 2,337 | |
## 2.2 German Gec Multimodal Data
For building German multimodal GEC datasets, we employ gTTS (Google Text-to-Speech) toolkit3to generate audio data from the source side of German GEC training, development and test data. We build multimodal training data from German CLang8 and the official Falko-MERLIN (Boyd et al., 2014)
training data. As for the multimodal development and test sets, we produce the audio data from FalkoMERLIN German validation and test sets.
## 2.3 Data Processing
To prepare the text GEC datasets for audio generation, we first remove duplicate instances from the English CLang8 dataset, while keeping the other datasets unaltered. Additionally, we follow Katsumata and Komachi (2020) to use Moses script
(Koehn et al., 2007) to detokenize GEC data for English and German. The statistics of the final multimodal datasets are shown in Table 1.
## 3 Method 3.1 Problem Definition
Existing approaches mainly utilize an encoderdecoder framework to address the GEC problem.
In detail, the input is a sentence with grammatical errors X = x1, x2, · · · , xN , where N is the number of tokens, and the goal of this task is to correct the input sentence and generate a right one Y = y1, y2, · · · , yL, where L is the length of target sentence. Motivated by the success of multimodal in other tasks (Li et al., 2018; Sawhney et al., 2020), in this paper, we propose a novel multimodal GEC
task and take a text-audio pair (*X, S*) as input (text and audio, respectively), aiming to integrate acoustic and textual features to enhance GEC. Therefore, the generation process for the multimodal GEC
problem can be formulated as:
$$p(Y|X,S)=\prod_{t=1}^{L}p(y_{t}\mid y_{1},\ldots,y_{t-1},X,S).\tag{1}$$ Moreover, we utilize the negative conditional log
Moreover, we utilize the negative conditional loglikelihood of Y given the pair (*X, S*) to train the model:
$$\theta^{*}=\arg\operatorname*{max}_{\theta}\sum_{t=1}^{L}\log p\left(y_{t}\mid y_{1},...,y_{t-1},X,S;\theta\right),$$
where θ is the trainable parameters of the model.
An overall structure of our proposed method is presented in Figure 2.
## 3.2 Multimodal Encoder
The multimodal encoder in our model consists of two main feature extractors: speech encoder and text encoder, respectively.
Speech Encoder We utilize a pre-trained
Transformer-based model (e.g., wav2vec2 (Baevski
et al., 2020)) as our speech encoder, which can
learn powerful representations from speech audio
and achieve promising results in many downstream
tasks.
$[\bf C_{1},C_{2},\cdots,C_{P}]=f_{ae}(S)$, $[\bf C_{1},C_{2},\cdots,C_{P}]=f_{ae}(S)$,
where c is the features extracted from speech, fae
refers to speech encoder and P is length of acoustic
features.
Text Encoder We adopt a pre-trained model (e.g.,
T5 encoder (Raffel et al., 2020)) as our text encoder
to capture textual features z from X:
$$\left[\mathbf{z}_{1},\mathbf{z}_{2},\cdots,\mathbf{z}_{N}\right]=f_{t e}(X),$$
## Where Ziis High Dimensional Vector For Representing Token Xi And Fte Refers To The Text Encoder. 3.3 Multimodal Alignment And Fusion
On the one hand, it is intuitive that the speech should be semantically close to the corresponding text if they are in one pair since they actually represent similar meanings through different modalities. On the other hand, audio is used to provide complementary information instead of completely consistent information to help the model to better recognize and detect grammatical errors. As a result, we should allow some variance between features extracted from different modalities during multimodal alignment.
Therefore, we adopt a mixture-of-experts (MoE)
to dynamically select semantically similar information from acoustic features, which is used to align with textual representation. The MoE layer in our model consists of M experts, denoted as E1, E2, · · · , EM, and each expert is a simple MLP
with ReLU. Note that although these experts have identical structures, they have separate parameters instead of shared ones. We first obtain the overall representation of the speech S and text X by mean pooling, which can be formulated as:
$$\begin{array}{c}{{(5)}}\\ {{(6)}}\end{array}$$
$$\begin{array}{l}{{\bar{\mathbf{c}}=M e a n([\mathbf{c}_{1},\mathbf{c}_{2},\cdots,\mathbf{c}_{P}]),}}\\ {{\bar{\mathbf{z}}=M e a n([\mathbf{z}_{1},\mathbf{z}_{2},\cdots,\mathbf{z}_{N}]),}}\end{array}$$
We utilize the MoE to further extract the features from c¯ that should be close to z¯. Specifically, the output of i-th expert is denoted as Ei(c¯) and we follow Shazeer et al. (2017) to generate a gate Gi(c¯)
for each expert. The output of the MoE module can be written as:
$$\mathbf{b}=\sum_{i=1}^{M}G_{i}({\bar{\mathbf{c}}})E_{i}({\bar{\mathbf{c}}}),$$
$$\quad(7)$$
$\left(8\right)$.
where b should be the information that is semantically close to the text. We utilize a simple mean squared error (MSE) objective to constrain this process and align these textual and acoustic features, which can be formulated as:
$${\mathcal{L}}_{m s e}=M S E(\mathbf{b},{\bar{\mathbf{z}}}),$$
Lmse = MSE(b, z¯), (8)
After dynamic alignment between audio and text,
we utilize dot attention to fuse these two features.
In detail, we first compute the attention weight with the softmax function:
$$\mathbf{a}_{i}=\mathrm{Softmax}(\mathbf{z}_{i}\mathbf{c}^{\mathrm{T}}).$$
$\eqref{eq:walpha}$.
T). (9)
Herein, ai can be viewed as a probability distribution and used to produce a weighted sum over the
visual patch representations:
$$\mathbf{z}_{i}^{c}=\sum_{k=1}^{P}a_{i,k}\mathbf{c}_{k}.\tag{10}$$ Finally, we sum the $\mathbf{z}^{c}$ and $\mathbf{z}$ as final multimodal
$\eqref{eq:walpha}$
representation h.
## 3.4 Decoder
The multimodal representation h is input to the pretrained decoder (e.g., T5) to generate the correct
sequence:
$$y_{t}=f_{d e}({\bf h},y_{1},\cdots,y_{t-1})\qquad\qquad(11)$$
$\downarrow$ 1.
9331 This process is repeated until the complete sentence is obtained.
As for training, the final objective is the linear combination of losses from the sequence generation and multimodal alignment:
L = Lge + λLmse, (12)
where Lge is the basic sequence-to-sequence loss and λ is the weight to control the MSE loss.
## 4 Experimental Settings And Results 4.1 Data And Evaluation
The multimodal GEC data used for training is presented in Table 1 in section 2. With respect to English, we follow Rothe et al. (2021) and use only the English CLang8 multimodal data for training as they reported that further fine-tuning on highquality English datasets, such as FCE v2.1 (Yannakoudakis et al., 2011) and W&I (Yannakoudakis et al., 2018), led to a drop in performance. For validation, we use the CoNLL13 multimodal data and the BEA19 multimodal development data when testing on the CoNLL14 and BEA19 English test sets, respectively. In terms of German, we first train our models on the German CLang8 multimodal data as Rothe et al. (2021), and then finetune the models on the official Falko-MERLIN German multimodal training data. For the development and test data, we use the official Falko-MERLIN
German benchmark. Additionally, to establish a stronger baseline, we follow Katsumata and Komachi (2020) to use the same 10M synthetic data
(Náplava and Straka, 2019)
4to pre-train T5/mT5-
Large model for English and German.
For evaluation, we use the M2 scorer (Dahlmeier and Ng, 2012) to evaluate the model performance on the CoNLL14 English test and the official FalkoMERLIN German benchmark. The BEA19 English test is evaluated by ERRANT (Bryant et al.,
2017). We employ the T-test method to test the significance of the results, except for the BEA19 English test, which is a blind test set.
## 4.2 Implementation Details And Training
In our experiments, we adopt Huggingface5library to build our multimodal GEC model. Specifically, for the basic experiments, we utilize T5-
Large (Raffel et al., 2020) and mT5-Large (Xue 4https://github.com/ufal/
low-resource-gec-wnut2019/tree/master/data 5https://github.com/huggingface/transformers et al., 2020) as our text backbone models (including both text encoder and decoder), with the former being used for English and the latter for German. We follow their default setting, which uses 24 layers of self-attention with 16 heads. For the experiments with stronger baselines, we use our T5-Large and mT5-Large models fine-tuned on 10M synthetic data as text backbone models for English and German, respectively. The details of the training settings can be found in Appendix A.1. For the speech encoder, we adopt Hubert Large pre-trained model
(Hsu et al., 2021) to extract features for English audio and wav2vec2-xls-r-300m pre-trained model
(Babu et al., 2021) for German speech. We also follow the default settings for these speech models.
As for training, we utilize Adafactor (Shazeer and Stern, 2018) to optimize all trainable parameters in our model. We set the number of experts to 6.
The weight hyper-parameter λ is set to 0.1 for both English and German experiments. The other settings for training the multimodal GEC models are reported in Appendix A.2.
## 4.3 Baselines
To explore the effect of the proposed multimodal model for GEC, we compare our model with the following baselines:
- **LRGEC** (Náplava and Straka, 2019): it pretrains a Transformer seq2seq model on synthetic data and then fine-tunes on authentic data.
- TAGGEC (Stahlberg and Kumar, 2021): the model improves GEC performance by data augmentation (e.g., generating synthetic data with the guidance of error type tags).
- **GECT**OR (Omelianchuk et al., 2020), **TMTC**
(Lai et al., 2022), **EKDGEC** (Tarnavskyi et al.,
2022): these models utilize the sequence tagging approach to improve GEC performance with multiple stage training, where they firstly pre-train on errorful-only sentences and further fine-tune on a high-quality dataset.
- **SADGEC** (Sun et al., 2021), gT5 XXL and T5/MT5 LARGE/XXL (Rothe et al., 2021):
these GEC models borrow knowledge from pretrained language models, where SADGEC is based on the BART (Lewis et al., 2020) pretrained model, gT5 XXL is a large teacher model for distilling Lang8 data, which is first pre-trained from scratch on a large amount of synthetic data followed by fine-tuning on high-quality data. T5/MT5 LARGE/XXL adopt
| CONLL14 | BEA19 (TEST) | | | | | |
|------------------------------------|----------------|------|------|------|------|------|
| SYSTEM | Pre. | Rec. | F0.5 | Pre. | Rec. | F0.5 |
| LRGEC (Náplava and Straka, 2019) | - | - | 63.4 | - | - | 69.0 |
| GECTOR (Omelianchuk et al., 2020) | 77.5 | 40.1 | 65.3 | 79.2 | 53.9 | 72.4 |
| TAGGEC (Stahlberg and Kumar, 2021) | 72.8 | 49.5 | 66.6 | 72.1 | 64.4 | 70.4 |
| SADGEC (Sun et al., 2021) | 71.0 | 52.8 | 66.4 | - | - | 72.9 |
| TMTC (Lai et al., 2022) | 77.8 | 41.8 | 66.4 | 81.3 | 51.6 | 72.9 |
| EKDGEC (Tarnavskyi et al., 2022) | 74.4 | 41.1 | 64.0 | 80.7 | 53.4 | 73.2 |
| T5 LARGE (Rothe et al., 2021) | - | - | 66.0 | - | - | 72.1 |
| T5 XXL (Rothe et al., 2021) | - | - | 68.8 | - | - | 75.9 |
| gT5 XXL (Rothe et al., 2021) | - | - | 65.7 | - | - | 69.8 |
| OURS (T5 LARGE) | 73.6 | 52.7 | 68.2 | 75.5 | 67.9 | 73.9 |
| OURS (PRET5 LARGE) | 75.0 | 53.2 | 69.3 | 77.1 | 66.7 | 74.8 |
SYSTEM DATA
FALKO**-ME.**
Pre. Rec. F0.5
LRGEC offic. 78.2 59.9 73.7 MT5 LARGE cl8 - - 70.1 MT5 XXL cl8 - - 74.8
gT5 XXL offic. - - 76.0
OURS (MT5) cl8 76.1 59.8 72.1
+offic. 77.2 65.4 74.5†
OURS (PMT5) cl8 77.6 63.0 74.2
+offic. 78.5 68.4 **76.3**†
T5/mT5 as the backbone structure and fine-tune on the corresponding distilled CLang8 data for GEC tasks in different languages.
## 4.4 Experimental Results
Results on English dataset To illustrate the effectiveness of our proposed model, we compare our model with existing studies with the results reported in Table 2. We obtain several observations from the results. First, the comparison between OURS and other baselines illustrate the effectiveness of our design in the GEC task, where our model achieves much better performance even though these competitors utilize many ways (e.g.,
data augmentation) to enhance feature extraction in GEC. The reason might be that compared to pure textual information, audio can provide complementary information to help the model better grasp the grammatical error in the sentence, and our model can selectively align these features from speech and text by the MoE module. It is easy to follow that a native speaker can distinguish whether the audio is grammatically correct. Second, compared to the sequence tagging method (e.g., GECTOR), sequence-to-sequence based models (e.g.,
T5-LARGE) perform better in recall score but are inept at precision. Especially it is found that the strength of our proposed model lies in its high recall compared to other baselines. Third, continuing training T5-Large on 10M synthetic data can further improve the model performance, illustrating that synthetic data can alleviate the gap between GEC data and pre-training corpus. Appendix A.3 shows some examples generated by the unimodal and multimodal GEC models.
Results on German dataset To further demonstrate the validity of our model, we also conduct experiments on the German dataset, with the results reported in 3. We can obtain similar trends as in English GEC, where our proposed mode out-
| CONLL14 | FALKO-ME. | |
|---------------|-------------------|-------------------|
| MODEL | P/ R/ F0.5 | P/ R/ F0.5 |
| OURS ((M)T5) | 73.6/ 52.7/ 68.2† | 76.1/ 59.8/ 72.1† |
| −MOE | 73.0/ 52.9/ 67.9 | 75.5/ 59.8/ 71.7 |
| −SPEECH ENC. | 72.2/ 51.4/ 66.8 | 75.7/ 56.5/ 70.9 |
| OURS (P(M)T5) | 75.0/ 53.2/ 69.3† | 77.6/ 63.0/ 74.2† |
| −MOE | 74.8/ 52.6/ 69.0 | 77.6/ 62.8/ 74.1 |
| −SPEECH ENC. | 73.5/ 53.7/ 68.5 | 77.3/ 62.3/ 73.8 |
performs other baselines and achieves a superior F0.5 score. Especially, by further fine-tuning the models on the official data, we achieve a new stateof-the-art result (i.e., 76.3 F0.5). This result further demonstrates that audio can provide valuable benefits in GEC tasks regardless of language type. Additionally, even though the German dataset is much smaller compared to the English dataset, our model still achieves significant improvements, which highlights its effectiveness in low-resource settings.
## 5 Analyses 5.1 Ablation Study
To explore the effectiveness of our proposed method, we conduct the ablation studies with the following settings: a) removing the MoE layer
(−MOE) and retaining the dot attention module to fuse acoustic and textual features. b) removing the speech encoder (−SPEECH ENC.), which degenerates our multimodal GEC model into a text-only unimodal GEC model. As shown in Table 4, when we remove the MoE layer, the results of the multimodal GEC model show a decrease in all settings, demonstrating the validity of MoE in the multimodal feature fusion. Moreover, if we discard the speech encoder, the results of the reverted text-only unimodal GEC baseline models are significantly lower than the multimodal model for both English and German, which illustrates the effectiveness of our proposed multimodal GEC models.
## 5.2 Error Type Performance
To investigate the ability of GEC systems to correct different error types, we used the ERRANT toolkit
(Bryant et al., 2017) to analyze the evaluation results on the CoNLL14 test set with respect to both POS-based fine-grained error types and OperationLevel error types.
Fine-grained Error Types Figure 3 shows the performance of the POS-based fine-grained error types. We can observe that while multimodal GEC is inferior to text-only unimodal GEC systems in certain error types (i.e., PUNCT, ADV,
CONJ, and PREP), our model obtains better results in most types of errors, including ADJ, NOUN,
NOUM: NUM, PRON, VERB, VERB: TENSE,
DET, MORPH, ORTH, and PART, which further confirms the effectiveness of multimodal feature integration in the GEC task. In fact, adverb and conjunction error types account for a relatively small percentage of all grammatical errors (not more than 1.6%). In other words, multimodal GEC can improve the performance of common errors in GEC
and thus bring considerable improvements overall.
Operation-Level Error Types We evaluate the performance of Operation-Level error types using the ERRANT toolkit, which categorizes them into three categories: Replacement, Missing, Unnecessary. Considering that word order (WO) is a sub-type of Replacement, which is different from other types of errors, we manually separate into a separate category. As shown in Table 5, compared to text-only unimodal GEC baseline models, our multimodal GEC models are better at correcting the major operation-level error types, such as word substitutions (64.3%), missing words (17.9%), and unnecessary words (17.0%), demonstrating that the corresponding speech information is beneficial to GEC. However, the multimodal GEC model does not perform well in correcting word order, even if it is a minor issue (0.8%). We hypothesize that correcting word order requires sentence structure information (Zhang et al., 2022b), but the speech may not provide such information to GEC models.
## 6 Related Work Grammatical Error Correction (Gec) Is The
task of automatically identifying and correcting grammatical errors in a text (Ng et al., 2013). Previous research in this field has primarily focused on strengthening the representations of text data through data augmentation techniques, such as using the back-translation method (Sennrich et al.,
2016) for the GEC task (Kasewa et al., 2018; Xie et al., 2018; Kiyono et al., 2019), and injecting noise with specific rules into grammatical sentences (Lichtarge et al., 2019; Zhao et al., 2019; Xu et al., 2019; Stahlberg and Kumar, 2021). More recently, pre-trained language models (PLMs) have
Baseline (T5 large) Multimodal (T5 large (MoE)) Baseline (PreT5 large) Multimodal (PreT5 large (MoE))
![7_image_0.png](7_image_0.png)
| R (64.3%) | M (17.9%) | U (17.0%) | WO (0.8%) | | | | | | | | | |
|-----------------|-------------|-------------|-------------|------|------|------|------|------|------|------|------|------|
| METHOD | Pre. | Rec. | F0.5 | Pre. | Rec. | F0.5 | Pre. | Rec. | F0.5 | Pre. | Rec. | F0.5 |
| T5 (BASE.) | 50.9 | 37.1 | 47.4 | 42.5 | 38.1 | 41.5 | 56.0 | 35.9 | 50.4 | 35.7 | 46.2 | 37.4 |
| T5 (MULTIM.) | 52.5 | 36.5 | 48.3 | 45.7 | 37.3 | 43.7 | 58.9 | 34.2 | 51.5 | 31.4 | 35.5 | 32.1 |
| PRET5 (BASE.) | 52.3 | 37.9 | 48.6 | 45.3 | 37.8 | 43.6 | 57.8 | 35.1 | 51.2 | 37.1 | 50.9 | 39.2 |
| PRET5 (MULTIM.) | 53.0 | 37.0 | 48.8 | 47.2 | 37.2 | 44.8 | 59.8 | 35.7 | 52.7 | 37.0 | 43.2 | 38.1 |
been demonstrated to be effective in improving the performance of GEC tasks. Studies such as Choe et al. (2019) have leveraged sequential transfer learning to adapt pre-trained Transformer models to the GEC domain. Kaneko et al. (2020)
initialized an encoder-decoder GEC model with pre-trained BERT weights to enhance GEC performance. Katsumata and Komachi (2020) utilized the pre-trained BART model as a generic pre-trained encoder-decoder model for GEC, and Rothe et al.
(2021) adopted a pre-trained T5 model to distill GEC corpus and used the pre-trained structure as part of the network for distilled GEC training, achieving promising results. However, to date, no previous work has attempted to incorporate multimodal information (e.g., speech modality) into the GEC task. Our work is the first to explore the use of multimodal information for GEC.
Multimodal Many studies have demonstrated the potential of incorporating multimodal information in improving the performance of single-modal tasks in the NLP domain. For example, Schifanella et al. (2014) and Cai et al. (2019) integrated image modality into the Twitter sarcasm detection task and found that incorporating image information can enhance the performance of this text-only task.
Hu et al. (2023a) proposed to integrate radiology images and textual findings to improve impression generation. Additionally, Zheng et al. (2021) fused acoustic and text encoding to jointly learn a unified representation, thereby improving speech-to-text translation tasks. Li et al. (2017) demonstrated that fusing speech modality can enhance the readability of text summarization tasks. Huzaifah and Kukanov (2022) studied a joint speech-text embedding space through a semantic matching objective, achieving improved results in downstream tasks.
Kim and Kang (2022) proposed a method for learning the cross-modality interaction between acoustic and textual information, which outperformed the unimodal models in emotion classification. In this work, we are the first to attempt to fuse acoustic and text to improve the GEC task.
## 7 Conclusion
This paper presents a novel approach to the task of multimodal GEC that integrates speech and text features to improve grammatical error correction. Due to the scarcity of speech data in GEC, we expand the original GEC data to create new multimodal GEC datasets for English and German, where each sample in our datasets is a triple (grammatically incorrect text, audio, and corrected text). Our approach utilizes a speech and text encoder to extract acoustic and textual features from the speech and input text, respectively. Then, we employ an MoE
approach to selectively extract audio features that align with the textual features and use a dot attention layer to fuse the features from different modalities as the final representation. This fused representation is input to the decoder to generate the corrected sentence. Our experimental results on widely-used benchmarks demonstrate the effectiveness of our proposed model, achieving significant improvements compared to existing studies.
## Limitations
Our proposed multimodal Grammatical Error Correction (GEC) model is based on a Seq2Seq generative framework, which utilizes different encoders to extract information from each modality, and then fuses them to provide input to an autoregressive decoder. However, in this work, we did not explore the use of a sequence tagging framework, which may be a consideration for future research, as it has the advantage of faster decoding speed.
Additionally, this study focuses on the use of audio representations of the source-side of GEC data, rather than the target-side, to construct multimodal GEC data. Our further analysis concludes that our proposed multimodal GEC model has limitations in correcting certain minor error types (e.g., ADV,
CONJ, PUNCT, and word order) when compared to text-only GEC models.
## Acknowledgments
This work was supported in part by the Science and Technology Development Fund, Macau SAR (Grant Nos. FDCT/060/2022/AFJ,
FDCT/0070/2022/AMJ), and the Multi-year Research Grant from the University of Macau
(Grant No. MYRG2020-00054-FST), and the Shenzhen Science and Technology Program
(JCYJ20220818103001002), and the Guangdong Provincial Key Laboratory of Big Data Computing, the Chinese University of Hong Kong, Shenzhen.
This work was performed in part at SICC which is supported by SKL-IOTSC, and HPCC supported by ICTO of the University of Macau.
## References
Arun Babu, Changhan Wang, Andros Tjandra, Kushal Lakhotia, Qiantong Xu, Naman Goyal, Kritika Singh, Patrick von Platen, Yatharth Saraf, Juan Pino, et al.
2021. Xls-r: Self-supervised cross-lingual speech representation learning at scale. arXiv preprint arXiv:2111.09296.
Alexei Baevski, Yuhao Zhou, Abdelrahman Mohamed, and Michael Auli. 2020. wav2vec 2.0: A framework for self-supervised learning of speech representations.
In *Advances in Neural Information Processing Systems*, volume 33, pages 12449–12460. Curran Associates, Inc.
Adriane Boyd, Jirka Hana, Lionel Nicolas, Detmar Meurers, Katrin Wisniewski, Andrea Abel, Karin Schöne, Barbora Štindlová, and Chiara Vettori. 2014.
The MERLIN corpus: Learner language and the CEFR. In Proceedings of the Ninth International Conference on Language Resources and Evaluation
(LREC'14), pages 1281–1288, Reykjavik, Iceland.
European Language Resources Association (ELRA).
Christopher Bryant, Mariano Felice, Øistein E. Andersen, and Ted Briscoe. 2019. The BEA-2019 shared task on grammatical error correction. In Proceedings of the Fourteenth Workshop on Innovative Use of NLP for Building Educational Applications, pages 52–75, Florence, Italy. Association for Computational Linguistics.
Christopher Bryant, Mariano Felice, and Ted Briscoe.
2017. Automatic annotation and evaluation of error types for grammatical error correction. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers),
pages 793–805, Vancouver, Canada. Association for Computational Linguistics.
Yitao Cai, Huiyu Cai, and Xiaojun Wan. 2019. Multimodal sarcasm detection in Twitter with hierarchical fusion model. In *Proceedings of the 57th Annual* Meeting of the Association for Computational Linguistics, pages 2506–2515, Florence, Italy. Association for Computational Linguistics.
Yo Joong Choe, Jiyeon Ham, Kyubyong Park, and Yeoil Yoon. 2019. A neural grammatical error correction system built on better pre-training and sequential transfer learning. In Proceedings of the Fourteenth Workshop on Innovative Use of NLP for Building Educational Applications, pages 213–227, Florence, Italy. Association for Computational Linguistics.
Shamil Chollampatt and Hwee Tou Ng. 2018. A multilayer convolutional encoder-decoder neural network for grammatical error correction. In *Proceedings of* the AAAI conference on artificial intelligence, volume 32.
Stephane Clinchant, Kweon Woo Jung, and Vassilina Nikoulina. 2019. On the use of BERT for neural machine translation. In *Proceedings of the 3rd Workshop on Neural Generation and Translation*, pages 108–117, Hong Kong. Association for Computational Linguistics.
Daniel Dahlmeier and Hwee Tou Ng. 2012. Better evaluation for grammatical error correction. In *Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational*
Linguistics: Human Language Technologies, pages 568–572, Montréal, Canada. Association for Computational Linguistics.
Tao Fang, Xuebo Liu, Derek F. Wong, Runzhe Zhan, Liang Ding, Lidia S. Chao, Dacheng Tao, and Min Zhang. 2023a. Transgec: Improving grammatical error correction with translationese. In *Findings of* the Association for Computational Linguistics: ACL
2023, Toronto, Canada. Association for Computational Linguistics.
Tao Fang, Shu Yang, Kaixin Lan, Derek F. Wong, Jinpeng Hu, Lidia S. Chao, and Yue Zhang. 2023b. Is chatgpt a highly fluent grammatical error correction system? a comprehensive evaluation. arXiv preprint arXiv:2304.01746.
Peiyuan Gong, Xuebo Liu, Heyan Huang, and Min Zhang. 2022. Revisiting grammatical error correction evaluation and beyond. In *Proceedings of the* 2022 Conference on Empirical Methods in Natural Language Processing, pages 6891–6902, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
Wei-Ning Hsu, Benjamin Bolte, Yao-Hung Hubert Tsai, Kushal Lakhotia, Ruslan Salakhutdinov, and Abdelrahman Mohamed. 2021. Hubert: Self-supervised speech representation learning by masked prediction of hidden units. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 29:3451–3460.
Jinpeng Hu, Zhihong Chen, Yang Liu, Xiang Wan, and Tsung-Hui Chang. 2023a. Improving radiology summarization with radiograph and anatomy prompts. In Findings of the Association for Computational Linguistics: ACL 2023. Association for Computational Linguistics.
Jinpeng Hu, DanDan Guo, Yang Liu, Zhuo Li, Zhihong Chen, Xiang Wan, and Tsung-Hui Chang. 2023b.
A Simple Yet Effective Subsequence-Enhanced Approach for Cross-Domain NER. In Proceedings of the AAAI Conference on Artificial Intelligence.
Jinpeng Hu, Zhuo Li, Zhihong Chen, Zhen Li, Xiang Wan, and Tsung-Hui Chang. 2022a. Graph enhanced contrastive learning for radiology findings summarization. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 4677–4688.
Jinpeng Hu, Yaling Shen, Yang Liu, Xiang Wan, and Tsung-Hui Chang. 2022b. Hero-gang neural model for named entity recognition. In *Proceedings of the* 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1924–1936.
Muhammad Huzaifah and Ivan Kukanov. 2022. Analysis of joint speech-text embeddings for semantic matching. *arXiv preprint arXiv:2204.01235*.
Masahiro Kaneko, Masato Mita, Shun Kiyono, Jun Suzuki, and Kentaro Inui. 2020. Encoder-decoder models can benefit from pre-trained masked language models in grammatical error correction. In *Proceedings of the 58th Annual Meeting of the Association* for Computational Linguistics, pages 4248–4254, Online. Association for Computational Linguistics.
Sudhanshu Kasewa, Pontus Stenetorp, and Sebastian Riedel. 2018. Wronging a right: Generating better errors to improve grammatical error detection. In *Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing*, pages 4977–4983, Brussels, Belgium. Association for Computational Linguistics.
Satoru Katsumata and Mamoru Komachi. 2020.
Stronger baselines for grammatical error correction using a pretrained encoder-decoder model. In Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing, pages 827–832, Suzhou, China. Association for Computational Linguistics.
Donghwa Kim and Pilsung Kang. 2022. Cross-modal distillation with audio–text fusion for fine-grained emotion classification using bert and wav2vec 2.0.
Neurocomputing, 506:168–183.
Shun Kiyono, Jun Suzuki, Masato Mita, Tomoya Mizumoto, and Kentaro Inui. 2019. An empirical study of incorporating pseudo data into grammatical error correction. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 1236–1242, Hong Kong, China. Association for Computational Linguistics.
Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, Ondˇrej Bojar, Alexandra Constantin, and Evan Herbst. 2007. Moses: Open source toolkit for statistical machine translation. In Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics Companion Volume Proceedings of the Demo and Poster Sessions, pages 177–180, Prague, Czech Republic. Association for Computational Linguistics.
Shaopeng Lai, Qingyu Zhou, Jiali Zeng, Zhongli Li, Chao Li, Yunbo Cao, and Jinsong Su. 2022. Typedriven multi-turn corrections for grammatical error correction. In *Findings of the Association for Computational Linguistics: ACL 2022*, pages 3225–3236, Dublin, Ireland. Association for Computational Linguistics.
Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020.
BART: Denoising sequence-to-sequence pre-training
for natural language generation, translation, and comprehension. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*,
pages 7871–7880, Online. Association for Computational Linguistics.
Bei Li, Quan Du, Tao Zhou, Yi Jing, Shuhan Zhou, Xin Zeng, Tong Xiao, JingBo Zhu, Xuebo Liu, and Min Zhang. 2022. ODE transformer: An ordinary differential equation-inspired model for sequence generation. In *Proceedings of the 60th Annual Meeting of* the Association for Computational Linguistics (Volume 1: Long Papers), pages 8335–8351, Dublin, Ireland. Association for Computational Linguistics.
Haoran Li, Junnan Zhu, Tianshang Liu, Jiajun Zhang, and Chengqing Zong. 2018. Multi-modal sentence summarization with modality attention and image filtering. In *Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence*,
IJCAI'18, page 4152–4158.
Haoran Li, Junnan Zhu, Cong Ma, Jiajun Zhang, and Chengqing Zong. 2017. Multi-modal summarization for asynchronous collection of text, image, audio and video. In *Proceedings of the 2017 Conference on* Empirical Methods in Natural Language Processing, pages 1092–1102, Copenhagen, Denmark. Association for Computational Linguistics.
Yinghao Li, Xuebo Liu, Shuo Wang, Peiyuan Gong, Derek F. Wong, Yang Gao, Heyan Huang, and Min Zhang. 2023. Templategec: Improving grammatical error correction with detection template. In *Proceedings of the 61st Annual Meeting of the Association for* Computational Linguistics (Volume 1: Long Papers),
Toronto, Canada. Association for Computational Linguistics.
Jared Lichtarge, Chris Alberti, Shankar Kumar, Noam Shazeer, Niki Parmar, and Simon Tong. 2019. Corpora generation for grammatical error correction. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 3291–3301, Minneapolis, Minnesota. Association for Computational Linguistics.
Xuebo Liu, Longyue Wang, Derek F. Wong, Liang Ding, Lidia S. Chao, and Zhaopeng Tu. 2021. Understanding and improving encoder layer fusion in sequenceto-sequence learning. In International Conference on Learning Representations.
Yang Liu and Mirella Lapata. 2019. Text Summarization with Pretrained Encoders. In *Proceedings of* the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing
(EMNLP-IJCNLP), pages 3721–3731.
Jakub Náplava and Milan Straka. 2019. Grammatical error correction in low-resource scenarios. In Proceedings of the 5th Workshop on Noisy User-generated
Text (W-NUT 2019), pages 346–356, Hong Kong, China. Association for Computational Linguistics.
Hwee Tou Ng, Siew Mei Wu, Ted Briscoe, Christian Hadiwinoto, Raymond Hendy Susanto, and Christopher Bryant. 2014. The CoNLL-2014 shared task on grammatical error correction. In *Proceedings of* the Eighteenth Conference on Computational Natural Language Learning: Shared Task, pages 1–14, Baltimore, Maryland. Association for Computational Linguistics.
Hwee Tou Ng, Siew Mei Wu, Yuanbin Wu, Christian Hadiwinoto, and Joel Tetreault. 2013. The CoNLL2013 shared task on grammatical error correction.
In *Proceedings of the Seventeenth Conference on* Computational Natural Language Learning: Shared Task, pages 1–12, Sofia, Bulgaria. Association for Computational Linguistics.
Kostiantyn Omelianchuk, Vitaliy Atrasevych, Artem Chernodub, and Oleksandr Skurzhanskyi. 2020.
GECToR - grammatical error correction: Tag, not rewrite. In Proceedings of the Fifteenth Workshop on Innovative Use of NLP for Building Educational Applications, pages 163–170, Seattle, WA, USA →
Online. Association for Computational Linguistics.
Heli Qi, Sashi Novitasari, Andros Tjandra, Sakriani Sakti, and Satoshi Nakamura. 2023. Speechain: A
speech toolkit for large-scale machine speech chain.
In *arXiv*.
Muhammad Qorib, Seung-Hoon Na, and Hwee Tou Ng. 2022. Frustratingly easy system combination for grammatical error correction. In *Proceedings of* the 2022 Conference of the North American Chapter of the Association for Computational Linguistics:
Human Language Technologies, pages 1964–1974, Seattle, United States. Association for Computational Linguistics.
Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J Liu, et al. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. *Journal of Machine Learning Research*,
21(140):1–67.
Yi Ren, Yangjun Ruan, Xu Tan, Tao Qin, Sheng Zhao, Zhou Zhao, and Tie-Yan Liu. 2019. Fastspeech: Fast, robust and controllable text to speech. In Advances in Neural Information Processing Systems, volume 32.
Curran Associates, Inc.
Sascha Rothe, Jonathan Mallinson, Eric Malmi, Sebastian Krause, and Aliaksei Severyn. 2021. A simple recipe for multilingual grammatical error correction.
In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 702–707, Online. Association for Computational Linguistics.
Ramit Sawhney, Puneet Mathur, Ayush Mangal, Piyush Khanna, Rajiv Ratn Shah, and Roger Zimmermann.
2020. Multimodal multi-task financial risk forecasting. In Proceedings of the 28th ACM international conference on multimedia, pages 456–465.
Rossano Schifanella, Paloma De Juan, Joel Tetreault, and Liangliang Cao. 2014. Detecting sarcasm in multimodal social platforms. In Proceedings of the 24th ACM international conference on Multimedia, pages 1136–1145.
Rico Sennrich, Barry Haddow, and Alexandra Birch.
2016. Improving neural machine translation models with monolingual data. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 86–96, Berlin, Germany. Association for Computational Linguistics.
Noam Shazeer, Azalia Mirhoseini, Krzysztof Maziarz, Andy Davis, Quoc Le, Geoffrey Hinton, and Jeff Dean. 2017. Outrageously large neural networks:
The sparsely-gated mixture-of-experts layer. arXiv preprint arXiv:1701.06538.
Noam Shazeer and Mitchell Stern. 2018. Adafactor:
Adaptive learning rates with sublinear memory cost.
In *International Conference on Machine Learning*,
pages 4596–4604. PMLR.
Felix Stahlberg and Shankar Kumar. 2021. Synthetic data generation for grammatical error correction with tagged corruption models. In *Proceedings of the* 16th Workshop on Innovative Use of NLP for Building Educational Applications, pages 37–47, Online.
Association for Computational Linguistics.
Xin Sun, Tao Ge, Furu Wei, and Houfeng Wang. 2021.
Instantaneous grammatical error correction with shallow aggressive decoding. In *Proceedings of the 59th* Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1:
Long Papers), pages 5937–5947, Online. Association for Computational Linguistics.
Maksym Tarnavskyi, Artem Chernodub, and Kostiantyn Omelianchuk. 2022. Ensembling and knowledge distilling of large sequence taggers for grammatical error correction. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics
(Volume 1: Long Papers), pages 3842–3852, Dublin, Ireland. Association for Computational Linguistics.
Zhaohong Wan, Xiaojun Wan, and Wenguang Wang.
2020. Improving grammatical error correction with data augmentation by editing latent representation.
In Proceedings of the 28th International Conference on Computational Linguistics, pages 2202–2212, Barcelona, Spain (Online). International Committee on Computational Linguistics.
Changhan Wang, Wei-Ning Hsu, Yossi Adi, Adam Polyak, Ann Lee, Peng-Jen Chen, Jiatao Gu, and Juan Pino. 2021. fairseq sˆ2: A scalable and integrable speech synthesis toolkit. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 143–152, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Ziang Xie, Guillaume Genthial, Stanley Xie, Andrew Ng, and Dan Jurafsky. 2018. Noising and denoising natural language: Diverse backtranslation for grammar correction. In *Proceedings of the 2018* Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers),
pages 619–628, New Orleans, Louisiana. Association for Computational Linguistics.
Shuyao Xu, Jiehao Zhang, Jin Chen, and Long Qin.
2019. Erroneous data generation for grammatical error correction. In Proceedings of the Fourteenth Workshop on Innovative Use of NLP for Building Educational Applications, pages 149–158, Florence, Italy. Association for Computational Linguistics.
Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, and Colin Raffel. 2020. mt5: A massively multilingual pre-trained text-to-text transformer. In arXiv preprint arXiv:2010.11934.
Helen Yannakoudakis, Øistein E Andersen, Ardeshir Geranpayeh, Ted Briscoe, and Diane Nicholls. 2018. Developing an automated writing placement system for esl learners. *Applied Measurement in Education*, 31(3):251–267.
Helen Yannakoudakis, Ted Briscoe, and Ben Medlock.
2011. A new dataset and method for automatically grading ESOL texts. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 180–189, Portland, Oregon, USA. Association for Computational Linguistics.
Yue Zhang, Leyang Cui, Deng Cai, Xinting Huang, Tao Fang, and Wei Bi. 2023a. Multi-task instruction tuning of llama for specific scenarios: A preliminary study on writing assistance. arXiv preprint arXiv:2305.13225.
Yue Zhang, Zhenghua Li, Zuyi Bao, Jiacheng Li, Bo Zhang, Chen Li, Fei Huang, and Min Zhang.
2022a. MuCGEC: a multi-reference multi-source evaluation dataset for Chinese grammatical error correction. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 3118–3130, Seattle, United States.
Association for Computational Linguistics.
Yue Zhang, Bo Zhang, Haochen Jiang, Zhenghua Li, Chen Li, Fei Huang, and Min Zhang. 2023b. Nasgec: a multi-domain chinese grammatical error correction dataset from native speaker texts. arXiv preprint arXiv:2305.16023.
Yue Zhang, Bo Zhang, Zhenghua Li, Zuyi Bao, Chen Li, and Min Zhang. 2022b. Syngec: Syntax-enhanced grammatical error correction with a tailored gecoriented parser. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 2518–2531. Association for Computational Linguistics.
Wei Zhao, Liang Wang, Kewei Shen, Ruoyu Jia, and Jingming Liu. 2019. Improving grammatical error correction via pre-training a copy-augmented architecture with unlabeled data. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 156–165, Minneapolis, Minnesota. Association for Computational Linguistics.
Zewei Zhao and Houfeng Wang. 2020. Maskgec: Improving neural grammatical error correction via dynamic masking. In *Proceedings of the AAAI Conference on Artificial Intelligence*, volume 34, pages 1226–1233. Association for the Advancement of Artificial Intelligence (AAAI).
Renjie Zheng, Junkun Chen, Mingbo Ma, and Liang Huang. 2021. Fused acoustic and text encoding for multimodal bilingual pretraining and speech translation. In *International Conference on Machine Learning*, pages 12736–12746. PMLR.
Ming Zhong, Yang Liu, Yichong Xu, Chenguang Zhu, and Michael Zeng. 2022. Dialoglm: Pre-trained model for long dialogue understanding and summarization. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 36, pages 11765–
11773.
## A Appendix A.1 Pre-Training Settings For T5/Mt5-Large Model
The settings of hyper-parameters for pre-training T5/mT5-Large models for English and German are listed in Table 6.
| CONFIG. | ENGLISH MODEL | GERMAN MODEL |
|---------------|-----------------|----------------|
| Model Arch. | T5-Large | mT5-Large |
| Optimizer | Adafactor | Adafactor |
| Learning Rate | 0.0008 | 0.0007 |
| Batch Size | 24 | 16 |
| Update Freq. | 128 | 64 |
| GPUs | 2 (A100) | 2 (A100) |
Table 6: Hyper-parameters for pre-training T5/mT5-
Large models on 10M synthetic GEC data for English and German. Model Arch. refers to model architecture, Update Freq. means gradient accumulation steps.
## A.2 Settings Of Training Multimodal Gec Models
Table 7 presents the settings of hyper-parameters for training English and German multimodal GEC models.
| CONFIG. | ENGLISH MULTIM. | GERMAN MULTIM. |
|------------------|-------------------|---------------------|
| Stage-I | | |
| Text backbone | T5-Large | mT5-Large |
| Speech Encoder | Hubert-Large | wav2vec2-xls-r-300m |
| Optimizer | Adafactor | Adafactor |
| Learning Rate | 0.0001 | 0.0002 |
| Batch Size | 16 | 8 |
| Update Freq. | 16 | 16 |
| Num. of Experts | 6 | 6 |
| K | 2 | 2 |
| λ | 0.1 | 0.1 |
| Stage-II | | |
| Optimizer | - | Adafactor |
| Learning Rate | - | 0.0001 |
| Batch Size | - | 8 |
| Update Freq. | - | 2 |
| Num. of Experts | - | 6 |
| K | - | 2 |
| λ | - | 0.1 |
| Generation | | |
| Beam size | 5 | 5 |
| Max input length | 128 | 128 |
Table 7: Hyper-parameters for training English and German multimodal GEC models.
## A.3 Case Study
Table 8 shows some examples generated by the textonly unimodal GEC model and multimodal GEC
model. Our multimodal GEC model is better at correcting common error types (e.g. VERB) while exhibiting inferior performance in correcting word order errors.
| SRC | A couple did not have a child after their marriage for a long time, their parents were anxious about that and asked them to go to hospital to check what was the problem. |
|------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| REF. | A couple did not have a child after their marriage for a long time. Their parents were anxious about that and asked them to go to hospital to check what the problem was. |
| T5 (BASE.) | A couple did not have a child after their marriage for a long time. Their parents were anxious about that and asked them to go to hospital to check what the problem was. |
| T5 (MOE) | A couple did not have a child after their marriage for a long time. Their parents were anxious about that and asked them to go to hospital to check what was the problem. Spouses usually have very close relationships, if person A tell his family that he has |
| SRC | this gene, his uncle C knows and tells his wife D that he needed to run a test because his cousine has this disease. Spouses usually have very close relationships. If person A tells his family that he has |
| REF. | this gene, his uncle C knows and tells his wife D that he needs to run a test because his cousin has this disease . Spouses usually have very close relationships. If person A tells his family that he has |
| T5 (BASE.) | this gene, his uncle C knows and tells his wife D that he needed to run a test because his cousin has this disease. Spouses usually have very close relationships. If person A tells his family that he has |
| T5 (MOE) | this gene, his uncle C knows and tells his wife D that he needs to run a test because his cousin has this disease. |
Table 8: Examples of the outputs generated by the unimodal/multimodal GEC model. SRC refers to the ungrammatical sentence, and REF. is the grammatical sentence. T5 (BASE.) refers to the outputs of the unimodal GEC
model. **T5 (M**OE) refers to the outputs of our multimodal GEC baseline model. The words with the color red are the ungrammatical parts and the **blue** indicates the corrected version.
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
✓ A1. Did you describe the limitations of your work?
Limitations Section
A2. Did you discuss any potential risks of your work?
Not applicable. There are no potential risks associated with this paper because all tasks we used are public ones that have been verified for years.
✓ A3. Do the abstract and introduction summarize the paper's main claims?
Abstract section, and Section 1
✓ A4. Have you used AI writing assistants when working on this paper?
We use ChatGPT AI writing assistants to check some spelling errors and polish some sentences of our work (i.e., sections 4.1, 4.2, and 7)
## B ✓ **Did You Use Or Create Scientific Artifacts?** Section 2, Section 4, And Section 4.1
✓ B1. Did you cite the creators of artifacts you used?
section 4
✗ B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
All datasets and models we used here are public without restriction for research purposes.
✗ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
All datasets and models we used here are public without restriction for research purposes.
✗ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
The datasets we used in our paper do not have such issues according to the claims in the original paper.
✗ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
The datasets we used in our paper do not have such issues according to the claims in the original paper.
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
Section 2, and Section 4.1 The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance.
## C ✓ **Did You Run Computational Experiments?**
Section 2, Section 4, Section 4.1, Section 4.4, Section 5.1, Appendix A.1, A.2, Table 3, and Table 4
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
Appendix A.1, A.2
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Section 4, Appendix A.1, A.2
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
Section 4.4, Section 5.1, Table 3, Table 4
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
Section 2, Section 4, and Appendix A.1, A.2
## D ✗ **Did You Use Human Annotators (E.G., Crowdworkers) Or Research With Human Participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
No response.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? No response.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
No response.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
No response.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
No response. |
sun-etal-2023-teaching | Teaching the Pre-trained Model to Generate Simple Texts for Text Simplification | https://aclanthology.org/2023.findings-acl.595 | Randomly masking text spans in ordinary texts in the pre-training stage hardly allows models to acquire the ability to generate simple texts. It can hurt the performance of pre-trained models on text simplification tasks. In this paper, we propose a new continued pre-training strategy to teach the pre-trained model to generate simple texts. We continue pre-training BART, a representative model, to obtain SimpleBART. It consistently and significantly improves the results on lexical simplification, sentence simplification, and document-level simplification tasks over BART. At the end, we compare SimpleBART with several representative large language models (LLMs). |
## Teaching The Pre-Trained Model To Generate Simple Texts For Text Simplification
1Renliang Sun, 2Wei Xu, 1**Xiaojun Wan**
1Wangxuan Institute of Computer Technology, Peking University 1Center for Data Science, Peking University 1The MOE Key Laboratory of Computational Linguistics, Peking University 2School of Interactive Computing, Georgia Institute of Technology [email protected] [email protected] [email protected]
## Abstract
Randomly masking text spans in ordinary texts in the pre-training stage hardly allows models to acquire the ability to generate simple texts. It can hurt the performance of pre-trained models on text simplification tasks. In this paper, we propose a new continued pre-training strategy to teach the pre-trained model to generate simple texts. We continue pre-training BART, a representative model, to obtain SimpleBART.
It consistently and significantly improves the results on lexical simplification, sentence simplification, and document-level simplification tasks over BART. At the end, we compare SimpleBART with several representative large language models (LLMs).
## 1 Introduction
Text simplification (TS) is a task in the field of natural language generation. It aims at rewriting a complex text into simple text while keeping the primary meaning intact (Laban et al., 2021).
Recently, several works have leveraged pretrained models for TS (Omelianchuk et al., 2021; Devaraj et al., 2022). However, problems arise when pre-trained models are applied to TS directly.
In the pre-training stage, the model hardly acquires the ability to generate simple texts. The improvement of results on simplification tasks relies almost on the fine-tuning stage. It can hurt the performance of pre-trained models, especially for lowresource sub-tasks like lexical simplification. One reason for this shortcoming is the pre-training strategy. It randomly masks text spans in ordinary texts, teaching the model to generate ordinary texts rather than simple texts.
We are committed to adapting the pre-trained model to TS in this paper. The pre-trained model has gained the ability to generate ordinary texts, and it is costly to start pre-training from scratch.
Therefore, we focus on the continued pre-training strategy (Gururangan et al., 2020). We first aim to continue pre-training on simple texts because it contains plenty of simple words. In TS, simple texts are derived almost from SimpleWiki (Zhang and Lapata, 2017) and Newsela (Xu et al., 2015). We identify simple text spans in simple texts and dynamically replace them with <mask> tokens. Then, the pre-trained model will learn by reconstructing simple words. Meanwhile, we expect the pretrained model to learn from ordinary texts. We use a dictionary to replace complex words in ordinary texts with simple words. We also ensure the quality of the replaced sentences.
Based on BART (Lewis et al., 2020), we continue pre-training to teach it to generate simple texts and obtain SimpleBART. We then conduct experiments on three main tasks of TS: sentence simplification, lexical simplification, and documentlevel simplification. SimpleBART achieves consistent and noticeable improvements across several datasets on all three tasks over BART and several other baselines. The results illustrate that our proposed strategy helps the pre-trained model to gain the ability to generate simple texts.
To summarize, our contributions include: (1)
We propose a new continued pre-training strategy to teach the pre-trained model to generate simple texts. (2) We continue pre-training BART, a representative seq2seq model, to obtain SimpleBART. It can be used for several simplification tasks and achieve consistent performance improvement. Code and SimpleBART will be released at https://github.com/RLSNLP/SimpleBART.
## 2 Methodology
As illustrated in Figure 1, our strategy is divided into two parts: learning dynamically to reconstruct simple words from simple texts and from ordinary texts where complex words are replaced with simple ones.
![1_image_0.png](1_image_0.png)
## 2.1 Masking Simple Words In Simple Texts
We need to identify the simple words in simple texts at first. We take advantage of the DeepBlueAI
model (Pan et al., 2021) that achieves state-of-theart results on the lexical complexity prediction task
(Shardlow et al., 2021). A text span of length n consists of n words. The input to the DeepBlueAI
model is a text span and the output is a complex value between 0 and 1. The closer this value is to 0, the simpler the text span.
Unlike the previous constant mask probability, in our strategy, the simpler a text span is, the higher its probability of being masked. This means that the mask probability is dynamic. We also set a complexity threshold of T. If the complexity c of a text span exceeds T, we will not mask this span. In our experiments, we set T to 0.25 as an empirical value. Following Lewis et al. (2020), we set the max mask probability to 0.15, and the length of a text span obeys a Poisson distribution (λ = 3).
Finally, the mask probability m is calculated as:
$$m={\begin{cases}0.15\times(1-{\frac{1}{T}}\cdot c),&c\leq T\\ 0,&c>T\end{cases}}\quad(1)$$
The function to mask the text span is denoted as g(·). Given a sentence x, the pre-trained model will learn to reconstruct x from the masked sentence:
$$l(x)=-l o g P(x|g(x))$$
l(x) = −*logP*(x|g(x)) (2)
## 2.2 Replacing Complex Words In Ordinary Texts
We also expect the pre-trained model to learn helpful information from ordinary texts. However, ordinary texts contain more complex words than simple ones, making the pre-trained model learn to reconstruct simple words much less frequently. We introduce the dictionary SimplePPDB++ (Maddela and Xu, 2018) to address this issue. It contains millions of paraphrase rules with readability scores.
Therefore, we can replace the complex words in ordinary texts with simple words. Then, the pretrained model will learn to reconstruct these simple words as in Eq.(2).
Nevertheless, a word may have different meanings in different sentences. Using a dictionary to replace complex words may change the meaning of the original sentence. Therefore, we use BERTScore (Zhang et al., 2019) to calculate the similarity between the original and replaced sentences to avoid this problem. We will discard the replaced sentences if the calculated BERTScore is lower than a similarity threshold. In our experiments, the similarity threshold is set to 0.95 as an empirical value.
## 3 Experimental Settings 3.1 Continued Pre-Training
We select the BART-Large model to continue pretraining. It is a representative seq2seq model suitable for three main simplification tasks. We follow the task-adaptive pre-training method (Gururangan et al., 2020) and continue pre-training on the training set of the corresponding simplification task, ensuring that the continued pre-training texts have no intersection with the test set. We refer to the pretrained models obtained by our strategy collectively as SimpleBART.
## 3.2 Simplification Tasks
We select three representative tasks for experiments: sentence simplification, document-level simplification, and lexical simplification. For sentence simplification, we conduct experiments on Wikiauto (Jiang et al., 2020) and Newsela (Xu et al.,
2015). Wikiauto is only a training set, so we use Turkcorpus (Xu et al., 2016) as its validation and test set. Following Sun et al. (2023), we use SARI
(Xu et al., 2016) and BERTScore (Zhang et al.,
2019) as the evaluation metrics. BLEU and FKGL
have been proven to be unsuitable for evaluating simplification (Sulem et al., 2018; Tanprasert and Kauchak, 2021). For document-level simplification, we conduct experiments on the D-Wikipedia dataset (Sun et al., 2021). We use D-SARI (Sun et al., 2021) as the evaluation metric. For lexical simplification, we conduct experiments on LexMTurk (Horn et al., 2014) and BenchLS (Paetzold and Specia, 2016). We use precision, recall, and F1 score as the evaluation metrics. For more hyperparameter setting details, please refer to Appendix B.
## 4 Results 4.1 Sentence Simplification
To demonstrate the advantages of our strategy, we develop BART-CP for a fair comparison. It continues pre-training with the same number of steps on the same data using the previous pre-training strategy from Lewis et al. (2020). In the continued pre-training stage, text spans are masked randomly.
Turkcorpus SARI↑ Keep Del Add BS↑
EditNTS 37.9 67.3 43.1 3.4 0.950 T5 37.8 **73.5** 35.6 4.2 **0.982**
ControlTS **40.4** 70.4 44.5 6.2 0.959 BART 38.3 65.4 44.0 5.6 0.973
BART-CP 38.6 64.6 45.9 5.4 0.967
SimpleBART 39.5 64.6 **47.2 6.6** 0.972
Newsela SARI↑ Keep Del Add BS↑
EditNTS 37.1 34.9 74.8 1.6 0.897 T5 36.0 **41.8** 61.9 4.4 0.905
ControlTS 39.7 37.6 77.3 4.1 0.894
BART 40.1 40.5 73.8 6.2 0.904 BART-CP 40.3 41.7 72.6 6.9 **0.908**
SimpleBART **41.6** 40.5 **77.4 6.9** 0.902
Table 1: Results on the Turkcorpus test set and the Newsela test set. We use **bold** to indicate the best result.
We choose EditNTS (Dong et al., 2019), T5base (Raffel et al., 2020), and ControlTS (Maddela et al., 2021) as baselines. T5-base is close to SimpleBART in size. ControlTS achieves the state-ofthe-art result on the Newsela dataset. Following Alva-Manchego et al. (2021), BERTScore*precision*
(BS) is also reported. From Table 1, the BS scores of all outputs are high enough, which means that the outputs are of high quality. According to SARI,
the most important automatic evaluation metric for sentence simplification, SimpleBART improves SARI values over BART by 1.2 points and 1.5 points, respectively. Overall, it achieves comparable results to the advanced model for the sentence simplification task. We also notice that SimpleBART outperforms BART-CP, demonstrating the effectiveness of our proposed strategy. The example outputs are given in Appendix D.
## 4.2 Lexical Simplification
We focus on generating suitable words using the pre-trained model, which is a critical step in lexical simplification. We follow Qiang et al. (2020a) and let the pre-trained models generate several candidate words. BenchLS and LexMTurk are just two test sets, so we continue pre-training on the Wikiauto training set. We choose Paetzold-NE (Paetzold and Specia, 2017a) and LSBert (Qiang et al.,
2020b) as two baselines. LSBert achieves the stateof-the-art result in this task.
BenchLS F1↑ Precision Recall
Paetzold-NE 23.6 27.0 20.9
LSBert **28.1** 24.4 **33.1**
BART 19.2 19.6 18.9
BART-CP 25.8 26.0 25.7
SimpleBART 27.8 **28.0** 27.6
LexMTurk F1↑ Precision Recall
Paetzold-NE 19.5 **31.0** 14.2
LSBert 26.8 30.6 23.8 BART 18.8 19.2 18.3
BART-CP 26.9 27.2 26.6
SimpleBART **28.5** 28.7 **28.2**
Table 2: Results on the BenchLS test set and the LexMTurk test set.
As shown in Table 2, SimpleBART improves the F1 scores over BART by 8.6 points and 9.7 points, respectively. It achieves comparable results to LSBert. The results also demonstrate that BART
needs to gain the ability to generate simple words and the importance of introducing continued pretraining when training data is scarce.
## 4.3 Document-Level Simplification
SimpleBART also performs well on the documentlevel simplification task. We choose BertSumextabs (Liu and Lapata, 2019), which achieves the state-of-the-art result on this task as a baseline.
Compared with BART, SimpleBART improves the
| D-Wikipedia | D-SARI↑ | Dkeep | Ddel | Dadd |
|---------------|-----------|---------|--------|--------|
| BertSumextabs | 39.88 | 35.71 | 72.06 | 11.87 |
| BART | 39.84 | 35.87 | 70.26 | 13.40 |
| BART-CP | 40.13 | 36.21 | 71.54 | 12.64 |
| SimpleBART | 41.64 | 37.91 | 71.96 | 15.04 |
Table 3: Results on the D-Wikipedia test set D-SARI value by 1.8 points, making it the new state-of-the-art result.
## 5 Analysis 5.1 Human Evaluation
We hire three workers to conduct a human evaluation of the 100 randomly selected outputs of the sentence simplification task. Following Dong et al.
(2019), workers rate on simplicity (Simp), fluency (Flu), and adequacy (Ade) on a 5-point Likert scale.
Following Xu et al. (2016), we use simplicity gain
(S+) to demonstrate how many word-level simplifications occur in sentence simplification.
| Simp↑ | Flu↑ | Ade↑ | S+↑ | |
|------------|--------|--------|-------|-------|
| EditNTS | 3.30∗ | 4.65∗ | 3.56∗ | 0.14∗ |
| T5 | 3.16∗ | 4.91∗ | 4.47∗ | 0.25∗ |
| ControlTS | 3.39∗ | 4.67∗ | 4.26∗ | 0.60 |
| BART | 3.22∗ | 4.80 | 4.31∗ | 0.34∗ |
| BART-CP | 3.45∗ | 4.68∗ | 3.95 | 0.37∗ |
| SimpleBART | 3.62 | 4.82 | 4.01 | 0.55 |
| Reference | 3.74 | 4.85 | 4.03 | 0.93∗ |
Table 4 shows that SimpleBART achieves the highest Simp score among all the simplification models, close to that of the reference. SimpleBART also significantly makes more word-level simplifications compared to BART and BART-CP.
## 5.2 Domain Adaptation
Continued pre-training using our strategy on taskrelated data can improve the results. However, we still want to know if continued pre-training on more data from the same domain and different domains will improve the results. We design the following experiments. 1) Exp1: We continue pre-training on more sentences from Wikipedia and SimpleWiki, except those contained in the Wikiauto dataset. 2)
Exp2: We continue pre-training on more sentences in the Newsela corpus, except those contained in the Newsela dataset. The sizes of the above texts used for continued pre-training are roughly five times larger than the simplification training set. 3)
Exp3: We continue pre-training on the Newsela training set. 4) Exp4: We continue pre-training on the Wikiauto training set.
| SARI↑ | Keep | Del | Add | BS↑ | |
|---------|--------|-------|-------|-------|-------|
| Exp1 | 38.9 | 64.9 | 45.7 | 6.0 | 0.968 |
| Exp2 | 41.1 | 39.5 | 77.4 | 6.5 | 0.900 |
| Exp3 | 38.0 | 39.2 | 69.7 | 5.0 | 0.975 |
| Exp4 | 39.6 | 42.1 | 71.1 | 5.7 | 0.907 |
From the results of Exp1 and Exp2 in Table 5, continued pre-training on more texts from the same domain can still enhance the simplification results.
Compared to BART in Table 1, the SARI values improve by 0.6 and 1 point, respectively. From the results of Exp3 and Exp4, continued pre-training on more texts in a different domain can instead harm the results. Compared to BART, the SARI values decrease by 0.3 and 0.5 points, respectively. Thus, we suggest that future researchers use texts within the same domain (e.g., Wikiauto and Wikipedia)
for continued pre-training in text simplification.
## 5.3 Generating Complex Texts
There are numerous studies dedicated to simplifying complex texts. Nevertheless, none has attempted to rewrite simple texts into complex ones.
We make such an interesting attempt. We have changed our strategy to mask complex words and name the obtained model ComplexBART. When fine-tuning and testing on the Newsela dataset, we use simple texts as input and complex texts as reference.
| SARI↑ | Keep | Del | Add | BS↑ | |
|-------------|--------|-------|-------|-------|-------|
| BART | 35.7 | 53.2 | 50.5 | 3.3 | 0.901 |
| ComplexBART | 37.2 | 52.9 | 55.4 | 3.4 | 0.900 |
Table 6: Results of generating complex texts.
From Table 6, ComplexBART improves the SARI value by 1.5 points over the BART model, indicating that the modified strategy can help the pre-trained model learn to generate complex texts.
Thus, ComplexBART can serve as a better baseline for generating complex texts in the future.
## 6 Comparing Simplebart With Large Language Models
Large language models (LLMs) have received widespread attention from researchers recently and have achieved state-of-the-art results on many natural language generation tasks. In this section, we select several representative large models to conduct experiments on text simplification and compare them with SimpleBART. We hope these results can serve as baselines for future research.
We choose those LLMs that provide API or model files to ensure reproducibility. We choose GPT-3.5-Turbo-03011, FLAN-T5-XL (Chung et al., 2022), and LLaMA-7B (Touvron et al., 2023)
as LLM baselines and use zero-shot generation.
Then, we follow the implementation2and fine-tune FLAN-T5-base as another baseline. We collect the training sets of Wikiauto, Newsela, and DWikipedia and conduct instruction fine-tuning.
## 6.1 Comparison And Analysis
The comparison of SimpleBART results with those of the LLMs is shown in Tables 7, 8, and 9.
For the sentence-level simplification task, LLaMA and FLAN-T5-XL seem unable to understand the prompt for simplifying sentences, and they are inclined to repeat the original text. However, FLAN-T5-base, only 10% of the parameters of the above two models, performs better. It illustrates fine-tuning phase can improve performance when the model is not super large. It may be a little strange that GPT-3.5 performs worse than SimpleBART. We find that with the zero-shot setting, GPT-3.5 may not know the "degree of simplification" we want. It makes many reasonable changes to the original text, but it also keeps some of the complex parts of the original text.
For the document-level simplification task, LLaMA over-repeats sentences from the original article, and the generated text is difficult to read.
The shortcomings of GPT-3.5 are similar to those of the sentence-level simplification task. Besides, limited by the number of API accesses per minute of OpenAI, we only select 1000 original documents for simplification, which takes nearly five hours.
For the lexical simplification task, neither the LLaMA nor the FLAN-T5 model could understand 1https://openai.com/blog/chatgpt 2https://github.com/philschmid/
deep-learning-pytorch-huggingface/blob/main/ training/deepseed-flan-t5-summarization.ipynb
Turkcorpus SARI↑ Keep Del Add BS↑
GPT-3.5 32.4 43.4 43.4 10.4 0.896
FLAN-T5 31.5 64.1 29.6 1.0 0.892 LLaMA 29.3 69.3 16.3 2.3 0.873
FLAN-T5
(Fine-tuned) 36.5 74.4 31.3 3.8 0.901
SimpleBART 39.5 64.6 47.2 6.6 0.972
Newsela SARI↑ Keep Del Add BS↑
GPT-3.5 38.7 32.5 78.1 5.3 0.897 FLAN-T5 32.2 29.7 65.7 1.3 0.891 LLaMA 19.9 35.8 23.2 0.8 0.822
FLAN-T5
(Fine-tuned) 29.9 40.3 46.7 2.7 0.902
SimpleBART 41.6 40.5 77.4 6.9 0.902
Table 7: Comparison on the Turkcorpus test set and the Newsela test set.
D-Wikipedia D-SARI↑ Keep Del Add
GPT-3.5 26.68 18.45 59.36 2.25 FLAN-T5 26.77 15.07 64.83 0.40 LLaMA / / / /
FLAN-T5
(Fine-tuned) 33.22 25.08 67.50 7.09
SimpleBART 41.64 37.91 71.96 15.04
Table 8: Comparison on the D-Wikipedia test set.
BenchLS F1↑ Precision Recall
GPT-3.5 36.6 36.6 36.6
SimpleBART 27.8 28.0 27.6
LexMTurk F1↑ Precision Recall
GPT-3.5 31.4 31.5 31.4
SimpleBART 28.5 28.7 28.2
Table 9: Comparison on the BenchLS test set and the LexMTurk test set.
the instruction to replace complex words with simple words. However, GPT-3.5 outperforms the other models substantially. We also find that GPT3.5 makes many sensible substitutions not included in the reference, such as replacing "acquired"with
"earned". Such results illustrate that LLMs are dominant for this task.
## 7 Conclusion
In this paper, we are committed to adapting the pre-trained model to text simplification. We propose a new pre-training strategy to allow the pretrained model to learn to generate simple texts. The adapted pre-trained model improves the results on various simplification tasks.
## Limitations
The limitation of our method comes from the requirement to identify simple words in simple texts in Section 2.1. The DeepBlueAI we have used is a deep model, meaning it takes much time when inference. In our experiment, it takes 362.78 seconds to identify simple words from 10,000 sentences with an average length of 8.12. We expect that there will be methods with higher identification accuracy and higher inference speed in the future.
Due to page limitations, we have placed the related work in Appendix A and the ablation experiments in Appendix C.
Due to time constraints, we do not perform a human evaluation of the output of LLMs. We hope to conduct a more comprehensive evaluation of the performance of LLMs in the future.
## Ethics Statement
The texts we have used for continued pre-training come from Wikipedia dumps and the Newsela Corpus. Using Wikipedia dumps requires following the CC-BY-SA license and GFDL. Using Newsela Corpus requires authorization, and we have received it.
This paper contains a human evaluation. We hire three experienced workers to perform it. In the recruiting process, we follow a first-come, firstserved order. We pay much more than the local minimum hourly rate.
## Acknowledgements
We thank Mounica Maddela for her help on the ControlTS baseline. This work is supported by National Key R&D Program of China
(2021YFF0901502), National Science Foundation of China (No. 62161160339), State Key Laboratory of Media Convergence Production Technology and Systems and Key Laboratory of Science, Technology and Standard in Press Industry (Key Laboratory of Intelligent Press Media Technology).
Wei Xu and Xiaojun Wan are the corresponding authors.
## References
Fernando Alva-Manchego, Carolina Scarton, and Lucia Specia. 2020. Data-driven sentence simplification:
Survey and benchmark. *Computational Linguistics*,
46(1):135–187.
Fernando Alva-Manchego, Carolina Scarton, and Lucia Specia. 2021. The (un) suitability of automatic evaluation metrics for text simplification. *Computational* Linguistics, 47(4):861–889.
Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Eric Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, et al.
2022. Scaling instruction-finetuned language models. arXiv preprint arXiv:2210.11416.
Ashwin Devaraj, William Sheffield, Byron C Wallace, and Junyi Jessy Li. 2022. Evaluating factuality in text simplification. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 7331–
7345.
Yue Dong, Zichao Li, Mehdi Rezagholizadeh, and Jackie Chi Kit Cheung. 2019. Editnts: An neural programmer-interpreter model for sentence simplification through explicit editing. In *Proceedings of the* 57th Annual Meeting of the Association for Computational Linguistics, pages 3393–3402.
Sian Gooding and Ekaterina Kochmar. 2019. Recursive context-aware lexical simplification. In *Proceedings* of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing
(EMNLP-IJCNLP), pages 4853–4863.
Yuxian Gu, Zhengyan Zhang, Xiaozhi Wang, Zhiyuan Liu, and Maosong Sun. 2020. Train no evil: Selective masking for task-guided pre-training. In *Proceedings* of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 6966–
6974.
Suchin Gururangan, Ana Marasovic, Swabha ´
Swayamdipta, Kyle Lo, Iz Beltagy, Doug Downey, and Noah A Smith. 2020. Don't stop pretraining:
Adapt language models to domains and tasks. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8342–8360.
Colby Horn, Cathryn Manduca, and David Kauchak.
2014. Learning a lexical simplifier using wikipedia.
In *Proceedings of the 52nd Annual Meeting of the* Association for Computational Linguistics (Volume 2: Short Papers), pages 458–463.
Junjie Hu, Hiroaki Hayashi, Kyunghyun Cho, and Graham Neubig. 2022. Deep: Denoising entity pretraining for neural machine translation. In *Proceedings of the 60th Annual Meeting of the Association for* Computational Linguistics (Volume 1: Long Papers),
pages 1753–1766.
Chao Jiang, Mounica Maddela, Wuwei Lan, Yang Zhong, and Wei Xu. 2020. Neural crf model for sentence alignment in text simplification. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7943–7960.
Philippe Laban, Tobias Schnabel, Paul Bennett, and Marti A Hearst. 2021. Keep it simple: Unsupervised simplification of multi-paragraph text. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing
(Volume 1: Long Papers), pages 6365–6378.
Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020. Bart:
Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7871–7880.
Yang Liu and Mirella Lapata. 2019. Text summarization with pretrained encoders. In *Proceedings of* the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing
(EMNLP-IJCNLP), pages 3730–3740.
Mounica Maddela, Fernando Alva-Manchego, and Wei Xu. 2021. Controllable text simplification with explicit paraphrasing. In *Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies*, pages 3536–3553.
Mounica Maddela and Wei Xu. 2018. A wordcomplexity lexicon and a neural readability ranking model for lexical simplification. In *Proceedings of* the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3749–3760.
Kostiantyn Omelianchuk, Vipul Raheja, and Oleksandr Skurzhanskyi. 2021. Text simplification by tagging.
In Proceedings of the 16th Workshop on Innovative Use of NLP for Building Educational Applications, pages 11–25.
Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, and Michael Auli. 2019. fairseq: A fast, extensible toolkit for sequence modeling. In *Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics (Demonstrations)*,
pages 48–53.
Gustavo Paetzold. 2021. Utfpr at semeval-2021 task 1:
Complexity prediction by combining bert vectors and classic features. In *Proceedings of the 15th International Workshop on Semantic Evaluation (SemEval2021)*, pages 617–622.
Gustavo Paetzold and Lucia Specia. 2016. Benchmarking lexical simplification systems. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16), pages 3074–
3080.
Gustavo Paetzold and Lucia Specia. 2017a. Lexical simplification with neural ranking. In *Proceedings of* the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers, pages 34–40.
Gustavo H Paetzold and Lucia Specia. 2017b. A survey on lexical simplification. Journal of Artificial Intelligence Research, 60:549–593.
Chunguang Pan, Bingyan Song, Shengguang Wang, and Zhipeng Luo. 2021. Deepblueai at semeval-2021 task 1: Lexical complexity prediction with a deep ensemble approach. In *Proceedings of the 15th International Workshop on Semantic Evaluation (SemEval2021)*, pages 578–584.
Jipeng Qiang, Yun Li, Yi Zhu, Yunhao Yuan, and Xindong Wu. 2020a. Lexical simplification with pretrained encoders. In *Proceedings of the AAAI Conference on Artificial Intelligence*, pages 8649–8656.
Jipeng Qiang, Yun Li, Yi Zhu, Yunhao Yuan, and Xindong Wu. 2020b. Lsbert: A simple framework for lexical simplification. *arXiv preprint* arXiv:2006.14939.
Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J Liu, et al. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. *J. Mach. Learn. Res.*, 21(140):1–67.
Matthew Shardlow, Michael Cooper, and Marcos Zampieri. 2020. Complex—a new corpus for lexical complexity prediction from likert scale data. In Proceedings of the 1st Workshop on Tools and Resources to Empower People with REAding DIfficulties (READI), pages 57–62.
Matthew Shardlow, Richard Evans, Gustavo Paetzold, and Marcos Zampieri. 2021. Semeval-2021 task 1:
Lexical complexity prediction. In Proceedings of the 15th International Workshop on Semantic Evaluation
(SemEval-2021), pages 1–16.
Elior Sulem, Omri Abend, and Ari Rappoport. 2018.
Bleu is not suitable for the evaluation of text simplification. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 738–744.
Renliang Sun, Hanqi Jin, and Xiaojun Wan. 2021.
Document-level text simplification: Dataset, criteria and baseline. In *Proceedings of the 2021 Conference on Empirical Methods in Natural Language* Processing, pages 7997–8013.
Renliang Sun, Zhixian Yang, and Xiaojun Wan. 2023.
Exploiting summarization data to help text simplification. In *Proceedings of the 17th Conference of the* European Chapter of the Association for Computational Linguistics, pages 39–51.
Teerapaun Tanprasert and David Kauchak. 2021.
Flesch-kincaid is not a text simplification evaluation metric. In Proceedings of the 1st Workshop on Natural Language Generation, Evaluation, and Metrics
(GEM 2021), pages 1–14.
Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, Aurelien Rodriguez, Armand Joulin, Edouard Grave, and Guillaume Lample. 2023. Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971.
Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, et al. 2020. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 conference on empirical methods in natural language processing: system demonstrations, pages 38–45.
Wei Xu, Chris Callison-Burch, and Courtney Napoles.
2015. Problems in current text simplification research: New data can help. *Transactions of the Association for Computational Linguistics*, 3:283–297.
Wei Xu, Courtney Napoles, Ellie Pavlick, Quanze Chen, and Chris Callison-Burch. 2016. Optimizing statistical machine translation for text simplification.
Transactions of the Association for Computational Linguistics, 4:401–415.
Wenhao Yu, Chenguang Zhu, Yuwei Fang, Donghan Yu, Shuohang Wang, Yichong Xu, Michael Zeng, and Meng Jiang. 2022. Dict-bert: Enhancing language model pre-training with dictionary. In *Findings of* the Association for Computational Linguistics: ACL
2022, pages 1907–1918.
Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q Weinberger, and Yoav Artzi. 2019. Bertscore: Evaluating text generation with bert. In *International Conference on Learning Representations*.
Xingxing Zhang and Mirella Lapata. 2017. Sentence simplification with deep reinforcement learning. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 584–
594.
## A.2 Lexical Complexity Prediction A.3 Adapting Pre-Trained Models B Training Parameters A Related Work A.1 Text Simplification
a pipeline consisting of generating multiple candidate words and developing rules to select the most appropriate word from candidate words.
The lexical complexity prediction (LCP) task is to assign a value from a continuous scale to represent the complexity of a word (Shardlow et al., 2020).
Given a text and a text span in this text, the model will predict the complexity of this text span. Many studies have been devoted to improving the accuracy of model predictions (Gooding and Kochmar, 2019; Paetzold, 2021). On the latest LCP 2021 task
(Shardlow et al., 2021), the DeepBlueAI model
(Pan et al., 2021) achieves state-of-the-art results.
Pre-trained models have been widely used in natural language processing in recent years. However, Gururangan et al. (2020) observe the gap between the language model pre-training domain and the data distribution of the downstream task. Since then, researchers have focused on how to adapt pretrained models to downstream tasks. They have designed new methods for different tasks. Downstream tasks like machine translation (Hu et al.,
2022), sentiment analysis (Gu et al., 2020), and many understanding tasks (Yu et al., 2022) can benefit from the adapted pre-trained models.
We use the Huggingface transformers (Wolf et al.,
2020) to conduct sentence and lexical simplification experiments. For document-level simplification, we follow Sun et al. (2021) and use Fairseq
(Ott et al., 2019) to conduct the experiments. We choose the model that performs best on the validation set for testing. The specific parameter settings for each task are shown in Tables 10, 11, and 12. A detailed description of the dataset sizes is given in Table 13.
Here are the sources of the automatic evaluation methods we use: SARI
(https://github.com/mounicam/BiSECT/
tree/main/metrics), BERTScore (https:
//github.com/Tiiiger/bert_score), and D-SARI (https://github.com/RLSNLP/ Document-level-text-simplification).
Text simplification contains sentence simplification, document-level simplification, and lexical simplification. Sentence simplification is rewriting a complex sentence into a more straightforward and semantically identical sentence (AlvaManchego et al., 2020). Document-level simplification is rewriting an original complex article into a simple article (Sun et al., 2021). Information not relevant to the central meaning can be removed to improve readability. Lexical simplification is to replace complex words in a sentence with more straightforward but identical meaning words (Paetzold and Specia, 2017b). It is usually framed as
Parameter Value **Parameter Value**
epochs 10 max source length 128
batchsize 64 max target length 128
optimizer Adam dropout 0.1
learning rate 5e-5 weight decay 0
warm up steps 5000 seed 42
Table 10: Training parameters for sentence simplification.
Parameter Value **Parameter Value**
epochs 10 max source length 128
batchsize 64 max target length 128
optimizer Adam dropout 0.1
learning rate 5e-5 weight decay 0
warm up steps 5000 seed 42
Table 11: Training parameters for lexical simplification.
Table 12: Training parameters for document-level simplification.
Table 13: Sizes of the datasets used in experiments.
SARI↑ Keep Del Add BS↑
BART 40.1 40.5 73.8 6.2 0.904
BART-S 40.9 41.6 74.2 6.9 0.906 BART-T 40.9 40.6 74.9 7.2 0.905 SimpleBART 41.6 40.5 77.4 6.9 0.902
Table 14: Results of ablation experiments on the Newsela dataset of the sentence simplification task.
| Parameter | Value | Parameter | Value |
|------------------|---------|-------------------|---------|
| max update steps | 1e5 | max source length | 512 |
| max tokens | 2048 | max target length | 512 |
| optimizer | Adam | dropout | 0.1 |
| learning rate | 1e-4 | weight decay | 1e-4 |
| warm up steps | 2000 | seed | 42 |
## C Ablation Study
We conduct ablation experiments to explore the different contributions of replacing complex words in ordinary texts (BART-S) and masking simple words in simple texts (BART-T). We continue pretraining and fine-tuning on the Newsela dataset.
| Dataset | train | dev | test |
|----------------------------------------------------------|---------|-------|--------|
| Sentence simplification Wikiauto 488K | \ | \ | |
| Turkcorpus | \ | 2000 | 359 |
| Newsela | 94K | 1129 | 1077 |
| Lexical simplification BenchLS \ | \ | 929 | |
| LexMTurk | \ | \ | 500 |
| Document-level simplification D-Wikipedia 133K 3000 8000 | | | |
From Table 14, both methods in our proposed strategy allow the model to acquire the ability to generate simple words. Their contributions are roughly the same, but the improvement to the SARI
value is less than combining them.
## D Example Outputs
Table 15: In this sentence simplification example, SimpleBART replaces the phrase "is the founder of" with a simpler phrase "started a company", which is similar to the reference sentence. Both BART and BART-CP do not simplify the original sentence.
| Original sentence gary goddard is the founder of gary goddard entertainment . Reference sentence gary goddard started gary goddard entertainment . BART gary is the founder of gary goddard entertainment . BART-CP gary goddard is the founder of gary goddard entertainment . SimpleBART gary goddard started a company called gary goddard entertainment . |
|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
✓ A1. Did you describe the limitations of your work?
Limitations section.
✓ A2. Did you discuss any potential risks of your work?
Ethics Statement section.
✓ A3. Do the abstract and introduction summarize the paper's main claims?
Abstract section.
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✓ **Did You Use Or Create Scientific Artifacts?** 2
✓ B1. Did you cite the creators of artifacts you used?
2
✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
Ethics Statement section.
B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
Not applicable. Left blank.
B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
Not applicable. Left blank.
✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
Appendix B
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
Appendix B
## C ✓ **Did You Run Computational Experiments?** 4
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
Appendix B
The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance.
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Appendix B
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
5
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
3
## D ✓ **Did You Use Human Annotators (E.G., Crowdworkers) Or Research With Human Participants?** 5
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
Not applicable. Left blank.
✓ D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? Ethics Statement section.
✓ D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
Ethics Statement section.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
Not applicable. Left blank.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
Not applicable. Left blank. |
yamada-etal-2023-acquiring | Acquiring Frame Element Knowledge with Deep Metric Learning for Semantic Frame Induction | https://aclanthology.org/2023.findings-acl.596 | The semantic frame induction tasks are defined as a clustering of words into the frames that they evoke, and a clustering of their arguments according to the frame element roles that they should fill. In this paper, we address the latter task of argument clustering, which aims to acquire frame element knowledge, and propose a method that applies deep metric learning. In this method, a pre-trained language model is fine-tuned to be suitable for distinguishing frame element roles through the use of frame-annotated data, and argument clustering is performed with embeddings obtained from the fine-tuned model. Experimental results on FrameNet demonstrate that our method achieves substantially better performance than existing methods. | # Acquiring Frame Element Knowledge With Deep Metric Learning For Semantic Frame Induction
Kosuke Yamada1 Ryohei Sasano1,2 **Koichi Takeda**1 1Graduate School of Informatics, Nagoya University, Japan 2RIKEN Center for Advanced Intelligence Project, Japan [email protected],
{sasano,takedasu}@i.nagoya-u.ac.jp
## Abstract
The semantic frame induction tasks are defined as a clustering of words into the frames that they evoke, and a clustering of their arguments according to the frame element roles that they should fill. In this paper, we address the latter task of argument clustering, which aims to acquire frame element knowledge, and propose a method that applies deep metric learning. In this method, a pre-trained language model is fine-tuned to be suitable for distinguishing frame element roles through the use of frame-annotated data, and argument clustering is performed with embeddings obtained from the fine-tuned model. Experimental results on FrameNet demonstrate that our method achieves substantially better performance than existing methods.
## 1 Introduction
A semantic frame is a coherent conceptual structure that describes a particular type of situation or event along with its participants and props. FrameNet
(Ruppenhofer et al., 2016) is a representative resource, in which semantic frames define a set of frame-specific roles called frame elements (FEs).
FrameNet comprises a list of semantic frames, sets of frame-evoking words, and collections of frameannotated examples. Table 1 lists examples of frame-annotated sentences for the GIVING frame in FrameNet. For each sentence, a frame-evoking word is annotated with the GIVING frame, and its arguments are annotated with FEs such as Donor, Theme, and Recipient.
Because manually arranging such frame resources on a large scale is labor intensive, there have been many studies on automatic induction of frame resources. Most of these studies have assumed only verbs as frame-evoking words and divided the frame induction task into two sub-tasks:
verb clustering, which groups verbs according to the frames that they evoke, and argument clustering, which groups arguments of verbs according to 9356
1. [(1)Theme It] was handed in [(2)Donor by a couple of children] this morning.
2. [(3)Donor I] will now donate [(4)Theme the money] [(5)Recipient to charity].
3. [(6)Donor Your gift] gives [(7)Recipient children and families] [(8)Theme hope for tomorrows].
Table 1: Examples of verbs that evoke the GIVING
frame in FrameNet
![0_image_0.png](0_image_0.png)
their FE roles (Anwar et al., 2019; Ribeiro et al.,
2019). This study addresses the argument clustering task and acquires frame element knowledge for semantic frame induction.
As with many natural language processing tasks, methods using contextualized embeddings such as ELMo (Peters et al., 2018) and BERT (Devlin et al.,
2019) have been proposed for argument clustering tasks. However, these methods have been reported to perform worse than methods based on syntactic relations (Anwar et al., 2019; Ribeiro et al., 2019).
We assume that this is because vanilla BERT, i.e.,
BERT without fine-tuning, is more influenced by factors such as a whole sentence's meaning and does not emphasize information that captures differences in semantic roles. Figure 1(a) shows a 2D
t-SNE (Maaten and Hinton, 2008) projection of the average BERT embeddings of argument tokens in examples of the GIVING frame in FrameNet. We can see that these embeddings are not adequately clustered according to their semantic roles.
Hence, in this study, we propose the use of deep metric learning to fine-tune a contextual word embedding model so that instances of the same FEs are placed close together while other instances are placed farther apart in the embedding space. Figure 1(b) shows a 2D projection of the average BERT
embeddings of argument tokens after fine-tuning with our proposed method based on the triplet loss.
We can confirm that instances of the same FEs are located close to each other. This suggests that deep metric learning enables fine-tuning of BERT to obtain embedding spaces that better reflect human intuition about FEs.
## 2 Acquiring Frame Element Knowledge With Deep Metric Learning
To acquire frame element knowledge for semantic frame induction, we work on argument clustering, which is the task of grouping arguments of frame-evoking words according to their roles in the evoked frame. We introduce two argument clustering methods that cluster argument instances using their contextualized word embeddings. To achieve higher performance methods, we assume the existence of frame-annotated data and propose to finetune a contextualized word embedding model using deep metric learning.
## 2.1 Deep Metric Learning
Deep metric learning is a method of learning deep learning models on the embedding space in such a way that instances with the same label are placed closer together and instances with different labels are placed farther apart (Kaya and Bilge, 2019; Musgrave et al., 2020). By applying this to the contextualized word embedding model, it is expected that argument instances with similar roles learn to be closer together, and argument instances with different roles learn to be farther apart. We use the representative triplet (Weinberger and Saul, 2009) and ArcFace losses (Deng et al., 2019) from two major approaches: the distance-based and classificationbased approaches, respectively.
Triplet loss This loss function is commonly used in deep metric learning, in which the distance to a triplet of instances can be learned directly using three encoders. Specifically, it performs learning such that the distance between an anchor instance xa and a negative instance xn, which are taken from different classes, is to be larger than a certain margin m plus the distance between the anchor instance xa and a positive instance xp. The squared Euclidean distance is typically used as the distance function D. The triplet loss is defined as follows:
Ltri =max (D (xa, xp)−D (xa, xn)+m, 0). (1)
ArcFace loss This loss has been used as a de facto standard in face recognition. It modifies the softmax-based cross-entropy loss for typical n-class classifiers. Specifically, it applies l2 regularization to the i-th class weight wi and the embedding of the i-th class instance xi. The angle between wi and xiis denoted as θi. An angular margin m and a feature scale s are introduced as hyperparameters to simultaneously enhance the intra-class compactness and inter-class discrepancy.
The ArcFace loss is defined as follows:
$$L_{\rm arc}=-\log\frac{e^{s\cdot\cos(\theta_{i}+m)}}{e^{s\cdot\cos(\theta_{i}+m)}+\sum_{j=1,j\neq i}^{n}e^{s\cdot\cos\theta_{j}}}.\tag{2}$$
## 2.2 Argument Clustering Methods
We introduce two argument clustering methods: a cross-frame clustering of argument instances across frames and an intra-frame clustering of frame-wise argument instances.
## 2.2.1 Cross-Frame Method
The cross-frame method is a method used by Anwar et al. (2019) and Ribeiro et al. (2019), in which FEs are regarded as general semantic roles independent of frames, and the argument instances are grouped by roles across frames. For example, both Donor in the GIVING frame and Agent in the PLAC-ING frame are similar roles in the meaning of "a person who acts on an object." Taking advantage of this property, the cross-frame method clusters the argument instances to form role clusters without considering the frame that each word evokes and then combines the frame and the role clusters into the FE clusters. In this method, we apply groupaverage clustering based on the Euclidean distance, which is a hierarchical clustering algorithm.1 The cross-frame method performs fine-tuning of contextualized word embedding models across frames by using the triplet and ArcFace losses. For 1See Appendix A for the number of clusters.
the triplet loss, a positive instance is one with the same FE as the anchor instance, while a negative instance is one with FEs of different frames or different FEs of the same frame as the anchor instance.
The ArcFace loss is used to classify instances on an FE basis so that the model trains the metric across frames rather than within a particular frame.
## 2.2.2 Intra-Frame Method
Since the cross-frame method treats FEs as roles independent of frames even though FEs are framespecific roles, there are two possible drawbacks as described below. We thus propose the intra-frame method that treats FEs as frame-specific roles.
As the first drawback, the cross-frame method causes the division of argument instances of the same FE into too many clusters. For example, the GIVING frame has only three FEs, but the cross-frame method is likely to split instances into more clusters due to the nature of clustering across frames. To overcome this drawback, the intraframe method focuses on clustering the argument instances for each frame. The method also uses group-average clustering.
As the second drawback, the fine-tuning of the cross-frame method may not provide the optimal embedding space for argument roles, because it learns to keep instances with similar roles in different frames away from each other. For example, Donor in the GIVING frame and Agent in the PLAC-ING frame are similar, but the cross-frame method keeps these instances away because they are regarded as different roles. Hence, the intra-frame method learns to keep away only between instances of different FEs of the same frame. For the triplet loss, this is achieved by limiting negative instances to be different FEs in the same frame. For the ArcFace loss, this is achieved by training classification for the number of FE types in a frame.
## 3 Experiment
To confirm the usefulness of fine-tuning with deep metric learning, we experimented with an argument clustering task. This study focuses on argument clustering to induce FEs for frame-evoking verbs. Given the true frame that a verb evokes and the true positions of its argument tokens in the example sentences, we cluster only its arguments to generate role clusters. Then, we merge the true frame and the role clusters to obtain the final FE clusters.
| #Frames | #FEs | #Examples | #Instances | |
|-----------|--------|-------------|--------------|---------|
| Set 1 | 212 | 641 | 21,433 | 42,544 |
| Set 2 | 212 | 623 | 24,582 | 47,629 |
| Set 3 | 213 | 637 | 35,468 | 71,617 |
| All | 637 | 1,901 | 81,493 | 161,790 |
## 3.1 Settings
Dataset The dataset in our experiment was created by extracting example sentences, in which the frame-evoking word was a verb, from FrameNet 1.7.2 The FEs in FrameNet are divided into two types: core FEs, which are essential for frames, and non-core FEs. Our experiment targeted only the core FEs, as in QasemiZadeh et al. (2019). The examples were divided into three sets so that those of the verbs that evoke the same frames were in the same set. Table 2 lists the dataset statistics.
We performed three-fold cross-validation with the three sets as the training, development, and test sets. Note that the frames to be trained and those to be clustered do not overlap because the sets are divided on the basis of frames.
Comparison Methods We used BERT3from Hugging Face (Wolf et al., 2020) to obtain contextualized word embeddings. We compared a total of six different methods, which use the cross-frame method or the intra-frame method for each of the three models, the vanilla model (**Vanilla**) and two fine-tuned models (Triplet, **ArcFace**).4 We also compared our methods with the two unsupervised methods used in Subtask-B.1 of SemEval-2019 Task 2 (QasemiZadeh et al., 2019).5 Anwar et al. (2019) performed group-average clustering by using a negative one-hot encoding feature vector to represent the inbound dependencies of argument words. Ribeiro et al. (2019) applied graph clustering by Chinese whispers (Biemann, 2006)
with the average ELMo (Peters et al., 2018) embeddings of argument tokens. We also prepared two baselines: **Boolean** and **Dependency-relationship**.
The Boolean method clusters argument instances based on whether they appear before or after the
| Method | #C | PU / IPU / PIF | BCP / BCR / BCF | |
|-----------------------------------------------|---------------|--------------------|--------------------|--------------------|
| Boolean | 411 | 70.7 / 85.9 / 77.6 | 61.4 / 79.6 / 69.4 | |
| Dependency-relationship | 2,032 | 84.6 / 70.6 / 77.0 | 78.2 / 56.9 / 65.9 | |
| Anwar et al. (2019) | 415 | 59.2 / 75.8 / 66.5 | 49.0 / 67.0 / 56.6 | |
| Ribeiro et al. (2019) | 628 | 65.3 / 74.6 / 69.6 | 55.0 / 64.4 / 59.3 | |
| Clustering | Model Vanilla | 628 | 55.2 / 87.5 / 67.6 | 46.5 / 81.1 / 59.0 |
| Cross-frame method (group-average clustering) | Triplet | 543 | 80.0 / 92.9 / 86.0 | 73.0 / 88.8 / 80.1 |
| ArcFace | 594 | 81.7 / 91.5 / 86.2 | 74.9 / 86.8 / 80.3 | |
| Vanilla | 636 | 54.9 / 88.9 / 67.9 | 46.2 / 83.1 / 59.4 | |
| Intra-frame method (group-average clustering) | Triplet | 646 | 90.1 / 95.0 / 92.5 | 85.5 / 91.9 / 88.6 |
| ArcFace | 631 | 90.0 / 94.3 / 92.1 | 85.4 / 90.9 / 88.1 | |
verb. For example, in the second example sentence
"[I] will now donate [the money] [to charity]." in Table 1, the word "I" belongs to the *before* cluster, while "the money" and "to charity" belong to the after cluster. The Dependency-relationship method clusters argument instances based on dependency labels. In the case of the same example sentence as above, "I" belongs to a cluster indicating a noun subject, "the money" belongs to a cluster indicating an object, and "to charity" belongs to a cluster indicating an oblique nominal. We use stanza (Qi et al., 2020) as a dependency parsing tool.6 Metrics For evaluation metrics, we used PURITY
(PU), INVERSE PURITY (IPU), and their harmonic mean, F-SCORE (PIF) (Zhao and Karypis, 2001),
as well as B-CUBED PRECISION (BCP), RECALL
(BCR), and their harmonic mean, F-SCORE (BCF) (Bagga and Baldwin, 1998).
## 3.2 Results
Table 3 summarizes the experimental results. The cross-frame and intra-frame methods with the Triplet and ArcFace models showed a remarkable performance improvement compared to those with the Vanilla model. In particular, the intra-frame method with the Triplet model obtained a high score of 92.5 for PIF and 88.6 for BCF. Also, while there was no difference between the intra-frame and cross-frame methods with the Vanilla model, we can confirm the efficacy of the intra-frame methods with the fine-tuned models. There was little difference in scores with the deep metric learning models. We consider that they achieved similar 6https://stanfordnlp.github.io/stanza/
scores as a result of satisfactory learning because both models learn margin-based distances.
As for the comparison to previous methods, the methods with the Vanilla model underperformed the baseline methods with syntactic features, but our methods with the fine-tuned models outperformed them considerably. This result also confirms the usefulness of the fine-tuned models through deep metric learning. Among the previous methods, although the two baselines performed better than the methods in Anwar et al. (2019) and Ribeiro et al. (2019), this was an expected result because the experiment by Anwar et al. showed that the Boolean method obtained higher scores than their method. Note that our experiment only considered core FEs. The trends that baselines with syntactic features performed well may not be going to hold in experiments that consider non-core FEs.
We also visualized the embeddings to understand them intuitively. Figure 2 shows a 2D t-SNE projection of the average contextualized embeddings of the argument tokens. With the Vanilla model, clumps of instances can be seen for each FE, but instances for the same FE are entirely scattered, and the instances for different FEs in the same frame are mixed together. On the other hand, with the fine-tuned models, the instances are clustered for each FE. We can see that the instances with the cross-frame Triplet model are tightly grouped by FEs than those with the intra-frame Triplet model.
However, the FEs are still independent of each frame, and it is important to distinguish instances of different FEs in the same frame. The intra-frame Triplet model distinguishes more instances with different roles in the same frame than the cross-frame
![4_image_0.png](4_image_0.png)
![4_image_1.png](4_image_1.png)
Triplet model does, such as instances of Theme and Goal in the PLACING frame. Furthermore, with the intra-frame Triplet model, we can see instances of similar roles clustered together across frames such as instances of Speaker in the COMMUNI-CATION_NOISE frame and Agent in the PLACING
frame. These results confirm the usefulness of the fine-tuning of the intra-frame method.
## 4 Conclusion
We have addressed argument clustering for semantic frame induction. We proposed a method that uses deep metric learning to fine-tune contextualized embedding models and applied the resulting fine-tuned embeddings to perform argument clustering. We also introduced intra-frame methods that exploit the property that FEs are frame-specific.
Experimental results showed that fine-tuned models with deep metric learning are promising and that intra-frame methods perform quite well. Especially, the intra-frame method with the Triplet model achieved high scores of 92.5 for PIF and 88.6 for BCF.
Although only core frame elements are covered in this study, it would be ideal to acquire non-core frame element knowledge as well. Since many noncore frame elements are shared among different frames and are likely to be easier to learn than core frame elements, our methods are expected to achieve competitive performance for non-core frame elements as well. We would like to confirm it in future work. The ultimate goal of this research is to automatically build frame knowledge resources from large text corpora. We will need to merge our method with methods that cluster verbs according to the frames that they evoke (Yamada et al., 2021, 2023) and predict the positions of argument tokens.
In addition, we will consider how to apply our
## Limitations
As we only used English FrameNet as the dataset for our experiment, it is unclear how well our method would work with other languages or corpora. However, because the method is neither language- nor corpus-specific, fine-tuning may lead to better results with other datasets. Also, the method relies on a semantic frame knowledge resource, and annotation will thus be required if it is applied to languages without such resources. This study only considers core frame elements and does not show results for non-core frame elements.
## Acknowledgements
This work was supported by JST FOREST
Program, Grant Number JPMJFR216N and JSPS KAKENHI Grant Numbers 21K12012 and 23KJ1052.
## References
Saba Anwar, Dmitry Ustalov, Nikolay Arefyev, Simone Paolo Ponzetto, Chris Biemann, and Alexander Panchenko. 2019. HHMM at SemEval-2019 task 2:
Unsupervised frame induction using contextualized word embeddings. In *Proceedings of the 13th International Workshop on Semantic Evaluation (SemEval* 2019), pages 125–129.
Amit Bagga and Breck Baldwin. 1998. Entity-based cross-document coreferencing using the vector space model. In *Proceedings of the 36th Annual Meeting* of the Association for Computational Linguistics and 17th International Conference on Computational Linguistics (ACL-COLING 1998), pages 79–85.
Chris Biemann. 2006. Chinese whispers: An efficient graph clustering algorithm and its application to natural language processing problems. In *Proceedings of TextGraphs: the First Workshop on Graph*
Based Methods for Natural Language Processing
(TextGraphs 2006), pages 73–80.
Jiankang Deng, Jia Guo, Niannan Xue, and Stefanos Zafeiriou. 2019. ArcFace: Additive angular margin loss for deep face recognition. In *Proceedings of* the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2019), pages 4690–4699.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL 2019), pages 4171–4186.
Mahmut Kaya and Hasan ¸Sakir Bilge. 2019. Deep metric learning: A survey. *Symmetry*, 11(9):1066.
Ilya Loshchilov and Frank Hutter. 2017. Decoupled weight decay regularization. In Proceedings of the 5th International Conference on Learning Representations (ICLR 2017).
Laurens van der Maaten and Geoffrey Hinton. 2008.
Visualizing data using t-SNE. Journal of Machine Learning Research, 9:2579–2605.
Kevin Musgrave, Serge Belongie, and Ser-Nam Lim.
2020. A metric learning reality check. In Proceedings of the 16th European Conference on Computer Vision (ECCV 2020), pages 681–699.
Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In *Proceedings of the 2018 Conference* of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT 2018), pages 2227–2237.
Behrang QasemiZadeh, Miriam R. L. Petruck, Regina Stodden, Laura Kallmeyer, and Marie Candito. 2019.
SemEval-2019 task 2: Unsupervised lexical frame induction. In *Proceedings of the 13th International* Workshop on Semantic Evaluation (SemEval 2019),
pages 16–30.
Peng Qi, Yuhao Zhang, Yuhui Zhang, Jason Bolton, and Christopher D. Manning. 2020. Stanza: A python natural language processing toolkit for many human languages. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics:
System Demonstrations (ACL 2020), pages 101–108.
Eugénio Ribeiro, Vânia Mendonça, Ricardo Ribeiro, David Martins de Matos, Alberto Sardinha, Ana Lúcia Santos, and Luísa Coheur. 2019. L2F/INESC-ID
at SemEval-2019 task 2: Unsupervised lexical semantic frame induction using contextualized word representations. In *Proceedings of the 13th International Workshop on Semantic Evaluation (SemEval* 2019), pages 130–136.
Josef Ruppenhofer, Michael Ellsworth, Myriam Schwarzer-Petruck, Christopher R Johnson, and Jan Scheffczyk. 2016. *FrameNet II: Extended theory and* practice. International Computer Science Institute.
Kilian Q Weinberger and Lawrence K Saul. 2009. Distance metric learning for large margin nearest neighbor classification. *Journal of Machine Learning Research*, 10(2).
Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander M. Rush. 2020. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations (EMNLP 2020), pages 38–45.
Kosuke Yamada, Ryohei Sasano, and Koichi Takeda.
2021. Semantic frame induction using masked word embeddings and two-step clustering. In *Proceedings* of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing
(ACL-IJCNLP 2021), pages 811–816.
Kosuke Yamada, Ryohei Sasano, and Koichi Takeda.
2023. Semantic frame induction with deep metric learning. In *Proceedings of the 17th Conference of* the European Chapter of the Association for Computational Linguistics (EACL 2023), pages 1833–1845.
Xiao Zhang, Rui Zhao, Yu Qiao, Xiaogang Wang, and Hongsheng Li. 2019. AdaCos: Adaptively scaling cosine logits for effectively learning deep face representations. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*
(CVPR 2019), pages 10823–10832.
Ying Zhao and George Karypis. 2001. Criterion functions for document clustering: Experiments and analysis. Technical report, Retrieved from the University of Minnesota Digital Conservancy.
## A How To Determine Number Of Clusters
Here, we explain how to determine the number of clusters in cross-frame and intra-frame methods. In the cross-frame method, it is determined from the ratio of the number of FEs to the number of frames in the development set.
In contrast, the intra-frame method uses criteria across frames because the number of frames is not easy to decide on a frame-by-frame basis. The termination criterion for clustering is the point at which there are no more cluster pairs for which the distance between clusters is less than a threshold θ that all frames share. The threshold θ is gradually decreased from a sufficiently large value, and the average number of clusters over all frames is set to a value that is closest to the average number of different FEs in each frame in the development set.
## B Detailed Settings For Our Methods
Here, we describe the detailed settings, including hyperparameters, of the methods in our experiment.
All embeddings were processed with l2 normalization to match the ArcFace requirement. In finetuning, the batch size was 16, the learning rate was 1e-5, and the number of epochs was 10. The candidate margins were 0.1, 0.2, 0.5, and 1.0 for the triplet loss and 0.01, 0.02, 0.05, and 0.1 for the ArcFace loss. The feature scale for ArcFace was 64.
We explored only the margin because Zhang et al.
(2019) showed that the behaviors of the margin and scale are similar. The optimization algorithm was AdamW (Loshchilov and Hutter, 2017).
In the experiment, the epochs and margins for fine-tuning and the number of clusters for clustering were determined by the development set.
The most plausible model for fine-tuning was determined from ranking similarities to ensure clustering-independent evaluation. Specifically, we took an argument instance as a query instance; then, we computed the cosine similarity of the embeddings between the query instance and the remaining argument instances, and we evaluated the instances' similarity rankings in descending order. For a metric, we chose the recall. It computes the average match rate between true instances, which are instances of the same FE as the query instance, and predicted instances, which are obtained by extracting the same number of top-ranked instances as the number of true instances. The embedding of the model with the highest score was used for clustering.
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
✓ A1. Did you describe the limitations of your work?
"Limitations" is found in the section after Conclusion without the section number.
✗ A2. Did you discuss any potential risks of your work?
No potential risk to our work
✓ A3. Do the abstract and introduction summarize the paper's main claims?
1 Introduction
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✓ **Did You Use Or Create Scientific Artifacts?** 3 Experiment
✓ B1. Did you cite the creators of artifacts you used?
3 Experiment
✗ B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
URLs with licenses and terms are given for the artifacts used in the paper.
B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
Not applicable. Left blank.
B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
Not applicable. Left blank.
B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
Not applicable. Left blank.
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
In the three-part cross-validation, the average numbers of frame types, FE types, example sentences, and argument instances for the three sets were 212, 634, 27,161, and 53,930, respectively.
## C ✓ **Did You Run Computational Experiments?** 3 Experiment
✗ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
It is not described because deep metric learning using GPU is lightweight and can be implemented without using much CPU memory.
The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance.
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
3.1 Settings
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
3.1 Settings
✗ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
Although stanza is used for normalization in part of the preprocessing, it is not described because it is a process that does not directly affect the evaluation score and is not the essence of this paper.
## D ✗ **Did You Use Human Annotators (E.G., Crowdworkers) Or Research With Human Participants?** Left Blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
No response.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)?
No response.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
No response.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
No response.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
No response. |
mittal-etal-2023-leveraging | Leveraging Synthetic Targets for Machine Translation | https://aclanthology.org/2023.findings-acl.597 | In this work, we provide a recipe for training machine translation models in a limited resource setting by leveraging synthetic target data generated using a large pre-trained model. We show that consistently across different benchmarks in bilingual, multilingual, and speech translation setups, training models on synthetic targets outperforms training on the actual ground-truth data. This performance gap grows bigger with increasing limits on the amount of available resources in the form of the size of the dataset and the number of parameters in the model. We also provide preliminary analysis into whether this boost in performance is linked to ease of optimization or more deterministic nature of the predictions, and whether this paradigm leads to better out-of-distribution performance across different testing domains. | # Leveraging Synthetic Targets For Machine Translation
Sarthak Mittal†1,2 Oleksii Hrinchuk3 **Oleksii Kuchaiev**3 1Mila, 2Universite de Montreal, 3NVIDIA
## Abstract
In this work, we provide a recipe for training machine translation models in a limited resource setting by leveraging synthetic target data generated using a large pre-trained model. We show that consistently across different benchmarks in bilingual, multilingual, and speech translation setups, training models on synthetic targets outperforms training on the actual ground-truth data. This performance gap grows bigger with increasing limits on the amount of available resources in the form of the size of the dataset and the number of parameters in the model. We also provide preliminary analysis into whether this boost in performance is linked to ease of optimization or more deterministic nature of the predictions, and whether this paradigm leads to better out-of-distribution performance across different testing domains.
## 1 Introduction
Neural Machine Translation (NMT) (Bahdanau et al., 2014; Wu et al., 2016; Stahlberg, 2020) relies on deep learning models to train end-to-end translation systems. With the advent of deep recurrent models like LSTMs (Hochreiter and Schmidhuber, 1997; Sundermeyer et al., 2014; Chung et al., 2014)
and their attention-augmented improvements (Bahdanau et al., 2014; Luong et al., 2015), these models outperformed traditional statistical (Koehn, 2009; Della Pietra, 1994) and rule-based (Lagarda et al., 2009; Nirenburg, 1989) approaches. Recently, with the introduction of fully attentionbased networks networks (Vaswani et al., 2017; Dehghani et al., 2018; Sukhbaatar et al., 2019; Dai et al., 2019; Kitaev et al., 2020; Choromanski et al., 2020; Mittal et al., 2021) and increase in compute and data, large-scale Transformer networks have dominated the field of natural language processing (Devlin et al., 2018; Brown et al., 2020; Hoffmann et al., 2022; Shoeybi et al., 2019), and machine translation in particular (Edunov et al., 2018; Raffel et al., 2020), leading to not only better performance but also more efficient training through their parallel computations (Ott et al., 2018).
While it has been established that scaling up data and compute boosts the performance of the largescale NMT systems (Gordon et al., 2021; Ghorbani et al., 2021; Kaplan et al., 2020; Bahri et al., 2021), there is still a need to focus on budget models that can run on mobile and edge computing devices. In other tasks, like end-to-end speech translation, training data is scarce and expensive. Inspired from these needs, we provide a recipe for training textto-text and text-to-speech translation systems in a limited resource setting at the modest overhead of running inference of pre-trained models on the source sentences.
Though in theory, increasing the amount of data provides a relatively simple methodology for bolstering the performance of current AI systems, it is difficult to do so when obtaining new data is costly, especially because of the labeling process in supervised learning. On the other hand, there have been a variety of approaches leveraging synthetic data to either improve the robustness of the systems or to boost their performance. This can be achieved by introducing adversarial examples in the training set
(Goodfellow et al., 2014), considering knowledge distillation when provided access to large models but not their pre-training data (Bucilua et al. ˇ , 2006; Gou et al., 2021; Hinton et al., 2015; Kim and Rush, 2016; Urner et al., 2011; Cheng et al., 2020; Phuong and Lampert, 2019; Tan et al., 2019), or using forward and back translation techniques (Zhang and Zong, 2016; Sennrich et al., 2015; Bogoychev and Sennrich, 2019; Edunov et al., 2018; Hoang et al., 2018) when additional monolingual data, which is easily available, is leveraged to generate synthetic targets to augment the amount of data.
In this work, we use large, often pre-trained, NMT systems to provide synthetic targets which can be leveraged to train high performing lowcompute and low-data models. Our findings show that using these synthetic translations to train different NMT systems leads to considerable improvements, even if we remove all ground-truth feedback from the training set. We also test models trained with synthetic targets on out-of-distribution settings through translations on different domains in a zero-shot manner as well as finetuning them on a different domain using additional synthetic data from an existing finetuned model, and further highlight the improvements obtained in both. We also find additional evidence to support that the ease of optimization resulting from training on synthetic targets does not completely explain its superior performance (He et al., 2019). Instead, we showcase that models trained with synthetic targets are more deterministic than their counterparts which are trained on real targets.
Our key contributions are as follows
- We provide a recipe for training better models in resource and compute constrained settings provided access to a pre-trained system and validate it on bilingual and multilingual translation setups, as well as out-of-domain generalization.
- We provide analysis into the reasoning behind this improved performance and demonstrate that it is not solely because of ease of optimization but instead, we believe it is due to more deterministic nature of such systems.
## 2 Related Work
Synthetic targets have been consistently used in the past to either augment the amount of data available or to boost performance through knowledge transfer between models. In the domain of Machine Translation, a popular way of augmenting data is by considering monolingual data that is available in abundance and obtaining its paired counterpart using a trained model.
Back Translation. Back translation (Sennrich et al., 2015; Edunov et al., 2018; Hoang et al., 2018)
relies on translating unseen sentences from the target side back to the source side using an existing trained model to provide new paired samples which can then be leveraged to train a model in the forward direction.. For example, if the task is to translate sentences from language S to T (*S → T* ),
one can obtain a corpus of monolingual data from T and translate it backwards to S using an existing trained *T → S* translation model. This would then provide additional paired data that can be combined with the existing ground-truth data to train a S → T translation model.
Forward Translation. Analogous to back translation, forward translation (Zhang and Zong, 2016; Bogoychev and Sennrich, 2019) or selftraining (He et al., 2019) relies on training a standard translation model using which additional data is obtained by translating new sentences from the source to the target side and then re-training the translation model using this additional data. For example, to translate in the *S → T* direction, one first trains a model using existing data and then leverages this model to generate targets for a corpus of monolingual data from S to provide new paired data. This data is then combined with the original data for re-training of the *S → T* translation model. Typically, forward translation is not as effective as back translation since the errors of the model are further propagated in the data in the former case (Burlot and Yvon, 2019).
Our approach can also be related to knowledge distillation, which has been the popular choice for transferring knowledge from a larger (called teacher) to a smaller model (called student) by enforcing similarities at different levels (Hinton et al.,
2015; Gou et al., 2021; Freitag et al., 2017; Kim and Rush, 2016), eg. in the output or the representation space of the two models. We give a brief overview of the different strategies used.
Soft Target Matching. The earliest works on knowledge distillation transfer knowledge by enforcing the soft-logits of the student to be close to those of the teacher (Hinton et al., 2015). This is accomplished by introducing a loss term that penalizes deviation of the student model's logits from the teacher, and this loss can either be used as is or added as a regularizing effect in training. This formulation of knowledge distillation has been revisited as *Word-Level Knowledge Distillation* for NMT in (Kim and Rush, 2016; Wang et al., 2021).
While it is a simple way of distilling the teacher's knowledge into the student, it can be computationally expensive if the number of classes, or equivalently the vocabulary V, is large as it requires either storing all the |V| soft-logits for all words of the whole dataset or requires access to the teacher model on the fly, which can make training slow.
| Model | Data | Real | Synthetic | ∆ |
|---------|--------|--------|-------------|-----|
| 0.5M | 17.6 | 20.9 | 3.3 | |
| 5M | 22.6 | 23.5 | 0.9 | |
| 25M | 23.1 | 23.9 | 0.8 | |
| 0.5M | 18.0 | 22.5 | 4.5 | |
| 5M | 24.7 | 25.7 | 1.0 | |
| 25M | 25.3 | 25.9 | 0.6 | |
| 0.5M | 18.6 | 22.9 | 4.3 | |
| 5M | 25.7 | 26.5 | 0.8 | |
| 25M | 26.4 | 26.4 | 0.0 | |
Representation Matching.(Romero et al., 2014; Zagoruyko and Komodakis, 2016; Lee et al., 2018; Heo et al., 2019; Passban et al., 2021; Chen et al.,
2021) Another way of transferring knowledge from the teacher to the student is by matching their intermediate representations. Again, this can be accomplished by considering a regularization term lreg(gt(ϕt), gs(ϕs)) where lreg is some notion of similarity, ϕt, ϕs are the intermediate representations of the teacher and student respectively and gt, gs are functions that map the two representations to the same space, which is needed as the student is often smaller than the teacher model.
While intuitively simple, this formulation is harder to implement and tune as the intermediate representations may be of different shapes, making it non-trivial to obtain a notion of similarity between the two. For example, if gt and gs map all the representations to the same point, the matching loss would be low even though the representations themselves ϕt, ϕs can be quite dis-similar.
Sequence-Level Knowledge Distillation. Kim and Rush (2016) propose *Sequence-Level Knowledge Distillation* which does not rely on soft-logits from the teacher model but instead relies on the synthetic translations obtained from the teacher model. Using synthetic targets is computationally efficient as the computation does not rely on matching the soft-logits across the whole of vocabulary but instead relies on sparse signals. Moreover, Kim and Rush (2016) showcase that using synthetic targets in LSTM-based systems lead to improved performance as opposed to the traditional knowledge distillation approach based on matching soft-logits.
While similar to forward translation and sequence-level knowledge distillation, our ap-
Model Data Real Synthetic ∆
2 × 2
0.5M 18.6 22.8 4.2
5M 23.8 26.0 2.2
57M 24.2 26*.3 2.*1
6 × 6
0.5M 18.7 23.9 5.2
5M 25.1 27.8 2.7
57M 26.6 28*.4 1.*8
24 × 6
0.5M 19.2 24*.1 4.*9
5M 25.9 28.4 2.5
57M 27.1 29.0 1.9
proach differs by leveraging pre-trained translation models trained on large amounts of data for synthetic targets as opposed to training from scratch and then re-training. Further, we also consider setups where the amount of data used for the teacher and the student model is different, and where their model sizes can be similar.
## 3 Method
Our aim is to perform Machine Translation from a source language S to a target language T given some training data DS→T = {(si, ti)}
N
i=1 where si ∈ S is the source sentence and ti ∈ T denotes the target sentence which is the ground-truth translation corresponding to si. Further, we assume that we have access to a teacher network fS→T (·), using which we obtain synthetic targets fS→T (si) to construct the synthetic dataset D′S→T = {(si, fS→T (si))}
N
i=1.
For our experiments, we consider different dataset sizes for the student models by subsampling from DS→T and D′S→T respectively. All the models considered in this work rely on the EncoderDecoder Transformer architecture (Vaswani et al.,
2017) with the teacher network generally having 24 encoder and 6 decoder layers (24×6). We consider different model sizes for the student, ranging from small models to matching the teacher's size.
Typically, knowledge distillation considers the same input data for training both the teacher and the student. Instead, we perform analysis where the student has access to different amounts of data. Also, unlike knowledge distillation where knowledge is transferred from a bigger teacher to a smaller student network, we additionally consider setups
Model Data Real **Synthetic** ∆
en-de en-es en-fr Avg. en-de en-es en-fr Avg. **Avg.**
2 × 2
1.5M 24.9 29.6 29.3 27.9 28.2 31.4 31.3 30.3 2.4
15M 29.5 31.8 32.6 31.3 31.9 33.0 33.5 32.8 1.5
300M 30.1 32.5 32.8 31.8 32.3 33.2 33.7 33.0 1.2
6 × 6
1.5M 25.9 30.1 30.2 28.7 31.0 32.5 32.5 32.0 3.3
15M 33.2 33.9 35.0 34.0 36.0 34.4 35.4 35.2 1.2
300M 34.5 34.4 35.6 34.8 36.1 34.5 35.8 35.4 0.6
24 × 6
1.5M 25.8 30.0 29.6 28.5 31.8 33.3 33.3 32.8 4.3
15M 35.0 34.5 35.7 35.0 36.9 34.9 36.4 36.0 1.0
300M 36.1 34.9 36.8 35.9 37.9 35.2 37.0 36.7 0.8
## 4 Experiments
For all our experiments, we only considered Transformer models for both teachers and students. Unless specified, we used the Pre-Layer Normalization variant where LayerNorm is applied before the respective attention and residual computations (Xiong et al., 2020). In our experiments, we consider two text-to-text machine translation setups: bilingual and multilingual, and one speechto-text setup. For the text-to-text machine translation experiments, we considered byte-pair encoding (Britz et al., 2017) for bilingual experiments and sentence-piece encoding (Kudo and Richardson, 2018) for multilingual experiments.
## 4.1 Bilingual Machine Translation
We first test the benefits of training with synthetic targets on bilingual machine translation where models are trained to translate sentences from one specified (source) language to another (target) language.
We conducted experiments with the source language as English and the target languages as Russian and German. We consistently see improvements when training with synthetic targets, and these benefits are substantial when the student is trained on limited data. Even on the same amount of data, we see benefits of using synthetic data when the student has lower complexity.
English to Russian. We used the WMT'21 dataset for English to Russian Machine Translation, where we trained the models with different tokenizers and vocabularies on the source and target side with the vocabulary size of 24576. For the teacher model, we trained a baseline Transformer with 24 encoder and 6 decoder layers. For the student, we consider two different axis of analysis:
models with lower capacity than the 24×6 teacher, and models trained with fewer data subsampled from the training set used to train the teacher.
In Table 1, we highlight that in the low-data regime for the student, consistently across all the different model sizes, training solely on synthetic targets obtained from the teacher model leads to much better performance than training on real ground-truth data. We also see that even when keeping the amount of data fixed (25M sentences; which is the same data on which the teacher model was trained), we see improvements on using synthetic targets in training smaller models. Thus, the only avenue where we don't get a substantial improvement is when the student uses the same dataset as the teacher model and has high complexity, similar to the teacher. This, however, is intuitive and doesn't pose a problem since our aim is for better low-compute models.
English to German. We consider the WMT'21 dataset for the task of English to German Machine Translation, where we trained the models with shared tokenizers and vocabularies on the source and target side, with the vocabulary size of 32000.
For the teacher model, we picked a published transformer model with 24 encoder and 6 decoder layers from the NeMo codebase (Kuchaiev et al., 2019)
which was trained on substantially more data than WMT. For the student models, we again consider two different axis of analysis: models with lower capacity than the 24×6 teacher, and models trained with different percentages of the WMT data.
In Table 2, we see that consistent with our En-
Model Data Real **Synthetic** ∆
de-en es-en fr-en Avg. de-en es-en fr-en Avg. **Avg.**
2 × 2
1.5M 28.4 29.1 32.2 29.9 31.9 30.8 34.1 32.3 2.4
15M 34.6 32.8 36.2 34.5 37.1 33.6 37.5 36.1 1.6
300M 35.5 32.7 36.9 35.0 38.3 33.8 37.6 36.6 1.6
6 × 6
1.5M 30.8 30.1 33.2 31.4 35.3 31.8 35.3 34.1 2.7
15M 38.1 34.0 38.3 36.8 40.2 35.0 39.1 38.1 1.3
300M 40.0 34.2 38.8 37.7 41.3 35.5 39.6 38.8 1.1
24 × 6
1.5M 31.4 30.5 33.8 31.9 36.0 32.8 36.2 35.0 3.1
15M 38.8 34.6 39.6 37.6 40.8 35.6 40.1 38.8 1.2
300M 40.1 35.6 40.9 38.9 41.8 36.3 41.2 39.8 0.9
glish to Russian experiments, in low data regime the student models trained with synthetic targets outperform the models trained on real ground-truth data. In particular, since the teacher model was trained on an enormous corpus outside of WMT
as well, we see that even on using a 24 × 6 transformer model which has the same complexity as the teacher on the full WMT21 dataset (57M sentences), it is still beneficial to train it with synthetic data as opposed to real data. Additionally we can also see that smaller models (6 × 6) trained on synthetic targets outperform larger models (24 × 6)
trained on real targets, while also providing faster training and inference.
Our English to Russian and English to German experiments show that indeed across different amounts of data regime and model complexities, as long as one does not approach the large-scale teacher model in both the regimes, it is beneficial to train systems using synthetic targets from the pre-trained large-scale model as opposed to training it on the ground-truth targets. We show that through this simple recipe, one can train much better smaller scale models which is beneficial to have when considering deployment in low-resource settings where inference latency needs to be low. It also provides a viable strategy for training of models when access to large-scale systems is provided but their training data is not.
## 4.2 Multilingual Machine Translation
Next, we move our attention towards multilingual machine translation where a single model is trained to perform translation from multiple different source languages to various different target languages. In particular, we focus on two different multilingual settings; translating sentences from (a)
English to German, Spanish and French, and (b)
German, Spanish and French to English. An important difference from bilingual experiments is that in this case, we obtain synthetic targets for the student multilingual models using published bilingual teachers. As in the bilingual setup, we again see consistent improvements of using synthetic targets instead of real ground-truth targets.
English to German/Spanish/French. We consider the setup of training a single model to translate english sentences into three different languages: german, spanish and french. These models are trained with shared tokenizers and vocabularies on the source and target side, with the vocabulary size of 64000. We query bilingual teachers trained on considerably large datasets to obtain synthetic targets for training the multilingual translation systems and compare it to training done on real ground-truth translations. We consider three different dataset sizes for the multilingual setup with 0.5M, 5M, and 100M sentences for each language.
In Table 3, we see that across different dataset sizes (1.5M = 0.5M for each pair; similar for 15M
and 300M) and model complexities, training on synthetic targets outperforms training on groundtruth data. Thus, in the presence of large pretrained bilingual experts, this provides a recipe for training stronger and more powerful multilingual models of various sizes.
German/French/Spanish to English. We perform similar analysis in the reverse direction where we train multilingual models to translate sentences from German, French and Spanish to English. We use shared tokenizers and vocabularies, with the vocabulary size of 64000, and use bilingual experts
| German targets source | Size | Must-C v2 | IWSLT tst | | | | | | | | |
|--------------------------|--------|-------------|-------------|------|------|------|------|------|------|------|------|
| dev | tst | 2010 | 2013 | 2014 | 2015 | 2018 | 2019 | 2020 | Avg. | | |
| Real | 590K | 25.2 | 27.8 | 22.1 | 27.9 | 23.8 | 21.3 | 20.9 | 20.1 | 20.9 | 22.4 |
| Synthetic, WMT21 teacher | 590K | 28.3 | 28.9 | 24.5 | 28.0 | 23.9 | 23.0 | 22.6 | 21.7 | 23.3 | 23.9 |
| + fine-tuned on IWSLT | 590K | 29.2 | 30.2 | 25.0 | 29.5 | 24.8 | 24.9 | 24.2 | 23.4 | 25.3 | 25.3 |
| + extra ASR data | 1.25M | 30.6 | 31.0 | 27.2 | 31.3 | 27.4 | 25.8 | 25.1 | 24.3 | 26.4 | 26.8 |
Table 5: Speech Translation systems trained to translate English audio to German text. Synthetic targets are generated with 24 × 6 bilingual teacher trained on WMT21 and fine-tuned on IWSLT22.
trained on large datasets to provide synthetic targets. We train the multilingual models on synthetic and ground-truth targets with 0.5M, 5M, and 100M
sentences for each language pair.
We highlight the results in Table 4 for different dataset and model sizes and see consistent improvements on using synthetic targets as opposed to real ground-truth targets for training.
Our experiments on multilingual translation in both directions reveal that when we use bilingual experts for each language pair as teachers, we see consistent improvements when training solely on synthetic data. We believe that this can pave the way for utilization of bilingual models for better and more efficient training of multilingual models, given bilingual models currently outperform multilingual models while the latter are more memory efficient as only a single model needs to be stored.
## 4.3 Speech Translation
In end-to-end speech translation (ST), the task is to train a model which translates speech in one language into a text in another. In contrast to textto-text NMT, data for this particular task is much more scarce and expensive. However, this problem can be solved with the help of readily available text-to-text models and a large corpora of ASR
data. Surprisingly, completely replacing all target language transcripts with synthetic data improves the performance of the ST models we trained.
Our speech translation models consist of a 17layer Conformer encoder (Gulati et al., 2020) initialized with pre-trained speech recognition (ASR)
encoder followed by a 6-layer Transformer decoder initialized randomly. For training, we used all available En→De ST datasets from IWSLT'22 competition (Anastasopoulos et al., 2022) which amounted to 590K examples after cleaning. To generate synthetic targets, we used 24×6 teacher model trained on WMT21 with the optional in-domain fine-tuning on 250K sentences from Must-C v2 dataset (Cattoni et al., 2021).
| Model | Data | Real | Synthetic | ∆ |
|---------|--------|--------|-------------|-----|
| IWSLT | 27.9 | 30.4 | 2.5 | |
| Medical | 27.3 | 30.6 | 3.3 | |
| Law | 36.5 | 39.0 | 2.5 | |
| IWSLT | 30.0 | 31.6 | 1.6 | |
| Medical | 30.1 | 31.7 | 1.6 | |
| Law | 39.8 | 40.8 | 1.0 | |
| IWSLT | 31.2 | 33.1 | 1.9 | |
| Medical | 31.3 | 33.1 | 1.8 | |
| Law | 40.4 | 42.1 | 1.7 | |
In Table 5, we see that replacing real targets with synthetic ones leads to over 1.5 BLEU score improvement, even though the teacher model was trained on out-of-domain data. When teacher model is fine-tuned in-domain, the score improvement goes to 3 BLEU. Finally, using the synthetic translations, we can expand our dataset by translating ASR-only datasets from IWSLT'22 which do not have German translations. Adding additional 660K examples leads to another 1.5 BLEU
improvement over, even though the additional data was out of domain to TED talks we evaluate on.
## 4.4 Out-Of-Domain Evaluation
One might argue that the model trained on a particular dataset overfits to it and using it to translate sentences from other domains will produce poor results, which can be exacerbated on using synthetic targets from models trained on a particular domain. In the next series of experiments, we evaluate out-of-domain performance of models trained with synthetic data.
Table 6 shows the performance of models trained on WMT'21 dataset evaluated on three different non-news domains: TED talks (IWSLT), medical,
| Fine-tuning targets | 2 × 2 | 6 × 6 | 24 × 6 |
|--------------------------------|---------|---------|----------|
| Training from scratch | | | |
| Real | 16.9 | 17.0 | 19.2 |
| Synthetic, WMT Teacher | 20.6 | 21.9 | 22.7 |
| + fine-tuned on IWSLT | 21.8 | 22.9 | 25.9 |
| Pre-training on real data | | | |
| Real | 30.9 | 33.6 | 34.5 |
| Synthetic, WMT Teacher | 30.9 | 32.6 | 33.7 |
| + fine-tuned on IWSLT | 32.5 | 34.1 | 35.5 |
| Pre-training on synthetic data | | | |
| Real | 32.4 | 33.8 | 34.9 |
| Synthetic, WMT Teacher | 32.0 | 33.3 | 33.6 |
| + fine-tuned on IWSLT | 33.4 | 35.1 | 35.6 |
and law. As we see, training on synthetic targets is beneficial here as well even though all training was done on the WMT dataset.
## 4.5 In-Domain Fine-Tuning
Finally, we utilize synthetic targets for in-domain fine-tuning, where pre-training is done on real or synthetic data and then further fine-tuning is done using real, out-of-domain synthetic and in-domain synthetic data. In Table 7, we train NMT models on WMT'21 dataset with either real or synthetic targets and then additionally fine-tune them on IWSLT data which is in-domain to evaluation tst-COMMON dataset comprising of TED talks.
We see that fine-tuning on synthetic targets generated with out-of-domain model actually hurts the model performance but pre-training on it works well. Also, fine-tuning on synthetic targets generated with in-domain model is superior to finetuning on real in-domain data no matter what data was used for pre-training. Training from scratch with synthetic targets generated by both models outperforms real targets by a large margin.
## 5 Ablations
Our key ablations into understanding the phenomena presented involve analysing whether the results can be solely explained from the lens of optimization, or are there other reasons at play (eg. stochasticity of the predictive model).
## 5.1 Optimization Problem
It can be argued that training on teacher outputs is easier (He et al., 2019) and its superior performance
![6_image_0.png](6_image_0.png)
Figure 1: Training loss of models trained with synthetic and real targets on the English to German machine translation. Both models are 6 × 6 Transformers; synthetic targets are obtained with a 24 × 6 teacher model.
is just an artifact of a well-behaved optimization landscape leading to "better solution spaces". Figure 1 shows that synthetic targets are indeed easier to fit as both the training loss and its variance are lower. However, we do not agree that training on synthetic targets leads to better local optima on which real-data training can capitalize on.
To see this, we pre-train some of the systems from Section 4 on synthetic targets and half-way during the training, switch out to real ground-truth targets. Our hypothesis is that if training on synthetic targets leads to better solution spaces, using the corresponding model parameters as initialization will not hurt the overall performance of training with real data, as evaluation data is considered to be from the same distribution as the real data and not the synthetic one.
However, our experiments in Figure 2 imply that when we switch the data from synthetic to real, we do see an immediate drop in performance. Moreover, we also see that when we train on real targets and switch to synthetic targets, we get an immediate boost in performance. Together, the two results indicate that training with synthetic data does not leads to better solutions to the underlying optimization problem and that the reasoning behind its workings is something else.
## 5.2 Top-K Performance
Next, we analyze whether the models trained with synthetic targets are more deterministic than those trained with real data. Our hypothesis is that training on synthetic targets dampens some of the noise present in the ground-truth dataset. To explore this claim, we use the models trained in Section 4 and evaluate them at different levels of top-k sampling for inference and compare it with models trained on ground-truth data.
![7_image_0.png](7_image_0.png)
| Language | Targets | Predictive Entropy | | |
|------------|-----------|----------------------|-----|-----|
| 2 × 2 | 6 × 6 | 24 × 6 | | |
| English to | Real | 2.4 | 2.4 | 2.5 |
| German | Synthetic | 1.8 | 1.8 | 1.7 |
| English to | Real | 2.6 | 2.4 | 2.3 |
| Russian | Synthetic | 1.8 | 1.8 | 1.9 |
Our findings in Figure 3 showcase that the drop in performance with increasing k in top-k is less in models trained with synthetic targets than in those trained with real targets. This does highlight that models trained with synthetic targets capture less noise as the degradation of performance is less when we make the sampling more noisy.
## 5.3 Predictive Entropy
As a final analysis of whether models trained with synthetic targets are more deterministic, we compute the predictive entropy of the distribution over the logits that predict the next token in translation. Our findings in Table 8 highlight that indeed the predictive entropy of models trained with synthetic targets is lower than the models trained with ground-truth targets, implying more deterministic nature of the former translation systems.
Together, our analysis on top-k performance and predictive entropy provide some evidence that the models trained with synthetic targets are more deterministic and hence are more robust and perform better, even on out of distribution shifts.
## 6 Conclusion
Inspired by the recent advances in knowledge distillation and the need for better performing low-
![7_image_1.png](7_image_1.png)
Figure 3: Degradation of Machine Translation performance with increasing k in top-k sampling for the decoding procedure for both English to German and English to Russian translations using student 24 × 6 models trained with real and synthetic targets respectively.
resource and low-compute models, we provide a recipe that leverages large-scale pre-trained translation systems as teacher models which provide synthetic targets for training of smaller and lowresource models. Surprisingly, we see a considerable increase in the performance of smaller models when only teacher outputs are provided as opposed to any proportion of real ground-truth translations.
We also see additional benefits of using synthetic targets for training, namely faster convergence and improved translations with top-k sampling when compared to models trained solely on real groundtruth translations.
This improvement in performance of small or low-resource models comes at the additional inference costs tied with the large teacher model. However, this is a one-time data curating cost based on which training of multiple different smaller models can be accomplished which will enjoy not only faster inference (owing to their smaller size) but also better performance than if they were trained on the original data, sometimes even better than larger models trained on real data. We believe this is an exciting avenue of research as low-compute high-performance models are important when deployed in constrained settings like edge computing and mobile devices.
## Ethics Statement
We provide a methodology for improving the performance of resource and data constrained translation systems which rely on obtaining synthetic targets from larger pre-trained systems. Given the dependence on large pre-trained systems, we believe that their biases can negatively impact the biases and fairness of the smaller consecutively trained systems. This is a problem common with any type of knowledge transfer where the biases of the base model can also be transferred to the student system and approaches on mitigating biases in the larger models in the first place would be a potential solution that can alleviate this problem.
## References
Antonios Anastasopoulos, Loïc Barrault, Luisa Bentivogli, Marcely Zanon Boito, Ondˇrej Bojar, Roldano Cattoni, Anna Currey, Georgiana Dinu, Kevin Duh, Maha Elbayad, et al. 2022. Findings of the iwslt 2022 evaluation campaign. In *Proceedings of the* 19th International Conference on Spoken Language Translation (IWSLT 2022), pages 98–157.
Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473.
Yasaman Bahri, Ethan Dyer, Jared Kaplan, Jaehoon Lee, and Utkarsh Sharma. 2021. Explaining neural scaling laws. *arXiv preprint arXiv:2102.06701*.
Nikolay Bogoychev and Rico Sennrich. 2019. Domain, translationese and noise in synthetic data for neural machine translation. *arXiv preprint* arXiv:1911.03362.
Denny Britz, Anna Goldie, Minh-Thang Luong, and Quoc Le. 2017. Massive exploration of neural machine translation architectures. *arXiv preprint* arXiv:1703.03906.
Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. *Advances in neural information processing* systems, 33:1877–1901.
Cristian Bucilua, Rich Caruana, and Alexandru ˇ
Niculescu-Mizil. 2006. Model compression. In Proceedings of the 12th ACM SIGKDD international conference on Knowledge discovery and data mining, pages 535–541.
Franck Burlot and François Yvon. 2019. Using monolingual data in neural machine translation: a systematic study. *arXiv preprint arXiv:1903.11437*.
Roldano Cattoni, Mattia Antonino Di Gangi, Luisa Bentivogli, Matteo Negri, and Marco Turchi. 2021. Mustc: A multilingual corpus for end-to-end speech translation. *Computer Speech & Language*, 66:101155.
Defang Chen, Jian-Ping Mei, Yuan Zhang, Can Wang, Zhe Wang, Yan Feng, and Chun Chen. 2021. Crosslayer distillation with semantic calibration. In *Proceedings of the AAAI Conference on Artificial Intelligence*, volume 35, pages 7028–7036.
Xu Cheng, Zhefan Rao, Yilan Chen, and Quanshi Zhang.
2020. Explaining knowledge distillation by quantifying the knowledge. In Proceedings of the IEEE/CVF
conference on computer vision and pattern recognition, pages 12925–12935.
Krzysztof Choromanski, Valerii Likhosherstov, David Dohan, Xingyou Song, Andreea Gane, Tamas Sarlos, Peter Hawkins, Jared Davis, Afroz Mohiuddin,
Lukasz Kaiser, et al. 2020. Rethinking attention with performers. *arXiv preprint arXiv:2009.14794*.
Junyoung Chung, Caglar Gulcehre, KyungHyun Cho, and Yoshua Bengio. 2014. Empirical evaluation of gated recurrent neural networks on sequence modeling. *arXiv preprint arXiv:1412.3555*.
Zihang Dai, Zhilin Yang, Yiming Yang, Jaime Carbonell, Quoc V Le, and Ruslan Salakhutdinov.
2019. Transformer-xl: Attentive language models beyond a fixed-length context. arXiv preprint arXiv:1901.02860.
Mostafa Dehghani, Stephan Gouws, Oriol Vinyals, Jakob Uszkoreit, and Łukasz Kaiser. 2018. Universal transformers. *arXiv preprint arXiv:1807.03819*.
Vincent J Della Pietra. 1994. The mathematics of statistical machine translation: Parameter estimation.
Using Large Corpora, page 223.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. *arXiv preprint arXiv:1810.04805*.
Sergey Edunov, Myle Ott, Michael Auli, and David Grangier. 2018. Understanding back-translation at scale. *arXiv preprint arXiv:1808.09381*.
Markus Freitag, Yaser Al-Onaizan, and Baskaran Sankaran. 2017. Ensemble distillation for neural machine translation. *arXiv preprint arXiv:1702.01802*.
Behrooz Ghorbani, Orhan Firat, Markus Freitag, Ankur Bapna, Maxim Krikun, Xavier Garcia, Ciprian Chelba, and Colin Cherry. 2021. Scaling laws for neural machine translation. *arXiv preprint* arXiv:2109.07740.
Ian J Goodfellow, Jonathon Shlens, and Christian Szegedy. 2014. Explaining and harnessing adversarial examples. *arXiv preprint arXiv:1412.6572*.
Mitchell A Gordon, Kevin Duh, and Jared Kaplan. 2021.
Data and parameter scaling laws for neural machine translation. In *Proceedings of the 2021 Conference* on Empirical Methods in Natural Language Processing, pages 5915–5922.
Jianping Gou, Baosheng Yu, Stephen J Maybank, and Dacheng Tao. 2021. Knowledge distillation: A
survey. *International Journal of Computer Vision*,
129(6):1789–1819.
Anmol Gulati, James Qin, Chung-Cheng Chiu, Niki Parmar, Yu Zhang, Jiahui Yu, Wei Han, Shibo Wang, Zhengdong Zhang, Yonghui Wu, et al.
2020. Conformer: Convolution-augmented transformer for speech recognition. *arXiv preprint* arXiv:2005.08100.
Junxian He, Jiatao Gu, Jiajun Shen, and Marc'Aurelio Ranzato. 2019. Revisiting self-training for neural sequence generation. arXiv preprint arXiv:1909.13788.
Byeongho Heo, Minsik Lee, Sangdoo Yun, and Jin Young Choi. 2019. Knowledge transfer via distillation of activation boundaries formed by hidden neurons. In *Proceedings of the AAAI Conference on* Artificial Intelligence, volume 33, pages 3779–3787.
Geoffrey Hinton, Oriol Vinyals, Jeff Dean, et al. 2015.
Distilling the knowledge in a neural network. *arXiv* preprint arXiv:1503.02531, 2(7).
Vu Cong Duy Hoang, Philipp Koehn, Gholamreza Haffari, and Trevor Cohn. 2018. Iterative backtranslation for neural machine translation. In Proceedings of the 2nd workshop on neural machine translation and generation, pages 18–24.
Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. *Neural computation*, 9(8):1735–
1780.
Jordan Hoffmann, Sebastian Borgeaud, Arthur Mensch, Elena Buchatskaya, Trevor Cai, Eliza Rutherford, Diego de Las Casas, Lisa Anne Hendricks, Johannes Welbl, Aidan Clark, et al. 2022. Training compute-optimal large language models. *arXiv* preprint arXiv:2203.15556.
Jared Kaplan, Sam McCandlish, Tom Henighan, Tom B
Brown, Benjamin Chess, Rewon Child, Scott Gray, Alec Radford, Jeffrey Wu, and Dario Amodei. 2020.
Scaling laws for neural language models. *arXiv* preprint arXiv:2001.08361.
Yoon Kim and Alexander M Rush. 2016. Sequencelevel knowledge distillation. *arXiv preprint* arXiv:1606.07947.
Nikita Kitaev, Łukasz Kaiser, and Anselm Levskaya.
2020. Reformer: The efficient transformer. *arXiv* preprint arXiv:2001.04451.
Philipp Koehn. 2009. *Statistical machine translation*.
Cambridge University Press.
Oleksii Kuchaiev, Jason Li, Huyen Nguyen, Oleksii Hrinchuk, Ryan Leary, Boris Ginsburg, Samuel Kriman, Stanislav Beliaev, Vitaly Lavrukhin, Jack Cook, et al. 2019. Nemo: a toolkit for building ai applications using neural modules. *arXiv preprint* arXiv:1909.09577.
Taku Kudo and John Richardson. 2018. Sentencepiece:
A simple and language independent subword tokenizer and detokenizer for neural text processing.
arXiv preprint arXiv:1808.06226.
Antonio-L Lagarda, Vicent Alabau, Francisco Casacuberta, Roberto Silva, and Enrique Diaz-de Liano.
2009. Statistical post-editing of a rule-based machine translation system. In *Proceedings of Human Language Technologies: The 2009 Annual Conference* of the North American Chapter of the Association for Computational Linguistics, Companion Volume:
Short Papers, pages 217–220.
Seung Hyun Lee, Dae Ha Kim, and Byung Cheol Song.
2018. Self-supervised knowledge distillation using singular value decomposition. In Proceedings of the European Conference on Computer Vision (ECCV),
pages 335–350.
Minh-Thang Luong, Hieu Pham, and Christopher D
Manning. 2015. Effective approaches to attentionbased neural machine translation. arXiv preprint arXiv:1508.04025.
Sarthak Mittal, Sharath Chandra Raparthy, Irina Rish, Yoshua Bengio, and Guillaume Lajoie. 2021. Compositional attention: Disentangling search and retrieval.
arXiv preprint arXiv:2110.09419.
Sergei Nirenburg. 1989. Knowledge-based machine translation. *Machine Translation*, 4(1):5–24.
Myle Ott, Sergey Edunov, David Grangier, and Michael Auli. 2018. Scaling neural machine translation. In Proceedings of the Third Conference on Machine Translation: Research Papers, pages 1–9, Brussels, Belgium. Association for Computational Linguistics.
Peyman Passban, Yimeng Wu, Mehdi Rezagholizadeh, and Qun Liu. 2021. Alp-kd: Attention-based layer projection for knowledge distillation. In *Proceedings* of the AAAI Conference on Artificial Intelligence, volume 35, pages 13657–13665.
Mary Phuong and Christoph Lampert. 2019. Towards understanding knowledge distillation. In *International Conference on Machine Learning*, pages 5142–
5151. PMLR.
Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J Liu, et al. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. *J. Mach. Learn. Res.*, 21(140):1–67.
Adriana Romero, Nicolas Ballas, Samira Ebrahimi Kahou, Antoine Chassang, Carlo Gatta, and Yoshua Bengio. 2014. Fitnets: Hints for thin deep nets.
arXiv preprint arXiv:1412.6550.
Rico Sennrich, Barry Haddow, and Alexandra Birch.
2015. Improving neural machine translation models with monolingual data. *arXiv preprint* arXiv:1511.06709.
Mohammad Shoeybi, Mostofa Patwary, Raul Puri, Patrick LeGresley, Jared Casper, and Bryan Catanzaro. 2019. Megatron-lm: Training multi-billion parameter language models using model parallelism.
arXiv preprint arXiv:1909.08053.
Felix Stahlberg. 2020. Neural machine translation: A
review. *Journal of Artificial Intelligence Research*,
69:343–418.
Sainbayar Sukhbaatar, Edouard Grave, Piotr Bojanowski, and Armand Joulin. 2019. Adaptive attention span in transformers. arXiv preprint arXiv:1905.07799.
Martin Sundermeyer, Tamer Alkhouli, Joern Wuebker, and Hermann Ney. 2014. Translation modeling with bidirectional recurrent neural networks. In Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP), pages 14–25.
Xu Tan, Yi Ren, Di He, Tao Qin, Zhou Zhao, and TieYan Liu. 2019. Multilingual neural machine translation with knowledge distillation. *arXiv preprint* arXiv:1902.10461.
Ruth Urner, Shai Shalev-Shwartz, and Shai Ben-David.
2011. Access to unlabeled data can speed up prediction time. In *ICML*.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. Advances in neural information processing systems, 30.
Fusheng Wang, Jianhao Yan, Fandong Meng, and Jie Zhou. 2021. Selective knowledge distillation for neural machine translation. *arXiv preprint* arXiv:2105.12967.
Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, et al.
2016. Google's neural machine translation system:
Bridging the gap between human and machine translation. *arXiv preprint arXiv:1609.08144*.
Ruibin Xiong, Yunchang Yang, Di He, Kai Zheng, Shuxin Zheng, Chen Xing, Huishuai Zhang, Yanyan Lan, Liwei Wang, and Tieyan Liu. 2020. On layer normalization in the transformer architecture. In *International Conference on Machine Learning*, pages 10524–10533. PMLR.
Sergey Zagoruyko and Nikos Komodakis. 2016. Paying more attention to attention: Improving the performance of convolutional neural networks via attention transfer. *arXiv preprint arXiv:1612.03928*.
Jiajun Zhang and Chengqing Zong. 2016. Exploiting source-side monolingual data in neural machine translation. In *Proceedings of the 2016 Conference on* Empirical Methods in Natural Language Processing, pages 1535–1545.
## Appendix A Implementation Details
For all our experiments, we rely on the NeMo codebase published in Kuchaiev et al. (2019). We do not perform extensive hyperparameter selection and instead just rely on the defaults provided. All the models that we train from scratch use the pre layernorm transformer variant (Xiong et al., 2020) and are trained with 0.001 learning rate with a linear warmup followed by an exponential decay. For all the synthetic targets, we use beam search with beam size 4 for generating translations. All experiments also use label smoothing of 0.1 with a dropout of 0.1 as well. We only vary the models in their depth, while keeping the attention layer of 512 dimensions, feed-forward residual connections of 2048 and 8 attention heads, as is typical with transformer models. All experiments were done on 16 GPUs and for 150, 000 iterations by when convergence of all models had been achieved. All the results are reported by considering single runs of the model as our experiments revealed very low variance between different runs.
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
✓ A1. Did you describe the limitations of your work?
In Section 4.1, where we show that our approach doesn't provide benefits on improving models that have the same data and compute complexity as the teacher.
A2. Did you discuss any potential risks of your work?
Not applicable. Our approach is more of an up-to-date analysis into some of the existing techniques used, and does not pose any risks beyond the risks of general improvement of AI.
✓ A3. Do the abstract and introduction summarize the paper's main claims?
Please refer to Abstract and Section 1
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✗ **Did You Use Or Create Scientific Artifacts?**
Left blank.
B1. Did you cite the creators of artifacts you used?
Not applicable. Left blank.
B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
Not applicable. Left blank.
B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
Not applicable. Left blank.
B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
Not applicable. Left blank.
B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
Not applicable. Left blank.
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
Yes, we used the standard splits in WMT data and mentioned them in the paper.
## C ✓ **Did You Run Computational Experiments?** Section 4; Experiments.
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
We provide details about the framework used, as well as the size of models in terms of number of layers. Details in Appendix A
The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance.
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
We did not do any extensive hyperparameter search and used default parameters. Details in Appendix A
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
We report a single run. Details in Appendix A.
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
Section 4 and first page.
## D ✗ **Did You Use Human Annotators (E.G., Crowdworkers) Or Research With Human Participants?** Left Blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
Not applicable. Left blank.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)?
Not applicable. Left blank.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
Not applicable. Left blank.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
Not applicable. Left blank.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
Not applicable. Left blank. |
soltan-etal-2023-recipes | Recipes for Sequential Pre-training of Multilingual Encoder and {S}eq2{S}eq Models | https://aclanthology.org/2023.findings-acl.598 | Pre-trained encoder-only and sequence-to-sequence (seq2seq) models each have advantages, however training both model types from scratch is computationally expensive. We explore recipes to improve pre-training efficiency by initializing one model from the other. (1) Extracting the encoder from a seq2seq model, we show it under-performs a Masked Language Modeling (MLM) encoder, particularly on sequence labeling tasks. Variations of masking during seq2seq training, reducing the decoder size, and continuing with a small amount of MLM training do not close the gap. (2) Conversely, using an encoder to warm-start seq2seq training, we show that by unfreezing the encoder partway through training, we can match task performance of a from-scratch seq2seq model. Overall, this two-stage approach is an efficient recipe to obtain both a multilingual encoder and a seq2seq model, matching the performance of training each model from scratch while reducing the total compute cost by 27{\%}. | # Recipes For Sequential Pre-Training Of Multilingual Encoder And Seq2Seq Models
Saleh Soltan∗
Alexa AI, Amazon [email protected] Andy Rosenbaum∗
Alexa AI, Amazon [email protected] Qin Lu Alexa AI, Amazon [email protected] Anna Rumshisky
![0_image_0.png](0_image_0.png)
Alexa AI, Amazon Univ. of Massachusetts Lowell [email protected]
## Abstract
Pre-trained encoder-only and sequence-tosequence (seq2seq) models each have advantages, however training both model types from scratch is computationally expensive. We explore recipes to improve pre-training efficiency by initializing one model from the other. (1)
Extracting the encoder from a seq2seq model, we show it under-performs a Masked Language Modeling (MLM) encoder, particularly on sequence labeling tasks. Variations of masking during seq2seq training, reducing the decoder size, and continuing with a small amount of MLM training do not close the gap. (2) Conversely, using an encoder to warm-start seq2seq training, we show that by unfreezing the encoder partway through training, we can match task performance of a from-scratch seq2seq model. Overall, this two-stage approach is an efficient recipe to obtain both a multilingual encoder and a seq2seq model, matching the performance of training each model from scratch while reducing the total compute cost by 27%.
## 1 Introduction And Related Work
Transformer-based Pre-trained Language Models (PLMs) have become the main building blocks when creating models for most Natural Language Processing (NLP) tasks. PLMs come in three main architectures: decoder-only (e.g. GPT), sequence-to-sequence (seq2seq, e.g. BART, T5),
and encoder-only (e.g. BERT). Multilingual models such as XLM-RoBERTa (encoder-only) and mBART/mT5 (seq2seq) are also common.
Raffel et al. (2020b) showed that seq2seq models can perform many NLP tasks on par with similarlysized encoder-only models trained via Masked Language Modeling (MLM) by framing tasks such a sentence classification or sequence labeling as text generation. However, encoder models remain more efficient at inference for sequence labeling tasks Tobias Falke Alexa AI, Amazon [email protected] Wael Hamza Alexa AI, Amazon [email protected] Figure 1: Two-stage seq2seq pre-training. First (left),
we train the encoder via Masked Language Modeling
(MLM). Second (right), we attach a randomly initialized decoder to the pre-trained MLM encoder, and train on the same data with de-noising objective. The encoder may remain frozen for part or all of the second stage.
like Named Entity Recognition (NER) and Partof-Speech tagging (POS): an encoder can label all words in the sequence with a single forward pass, while a seq2seq model must generate each word's label autoregressively.
Motivated by the need for both an encoder model for efficient sequence labeling and a seq2seq model for generative tasks like semantic parsing and summarization, we explore recipes to pre-train both models. Compared to training each model from scratch, we propose two sequential training recipes which reduce the total compute cost (Section 2.1.6).
The first recipe is to extract the encoder of a seq2seq model as proposed in Ni et al. (2022).
Although it performs well on classification tasks, we show that the encoder from seq2seq underperforms a from-scratch encoder on sequence labeling tasks. Variations of masking during seq2seq training and reducing the decoder size do not provide a consistent benefit to the encoder. We also explore continuing training the extracted encoder on MLM for a small number of updates. However, we show it cannot consistently close the gap in performance across different datasets.
The second recipe is to warm-start seq2seq pretraining with an encoder pre-trained via MLM (Fig-
∗ Equal Contribution.
ure 1). Rothe et al. (2020) proposed a similar idea for fine-tuning. AlexaTM 20B and AlexaTM 5B applied this recipe for pre-training, by warm-starting with Alexa Teacher Model encoders (Soltan et al.,
2022; Rosenbaum et al., 2022b; FitzGerald et al.,
2022). We add the novelty of comparing to a seq2seq model pre-trained from scratch with the same data and codebase. First, we observe that if the encoder is frozen the whole time, the model under-performs a from-scratch seq2seq model on semantic parsing and summarization tasks. While cross-attention fusion across different layers of the encoder reduces the performance gap, we find that we can match performance of a from-scratch model by using standard cross-attention and unfreezing the encoder partway through training.
Overall, the second recipe demonstrates a viable approach for efficient pre-training of both a multilingual encoder and a multilingual seq2seq model, matching the performance of training each model from scratch, while using 27% less total compute.
See Appendix A for additional related work.
## 2 Pre-Training Setup
We describe our pre-training objectives, models, datasets, two recipes for initializing one model type from the other, and compare compute costs.
## 2.1 Models
We pre-train ten models (Table 1): one fromscratch encoder, five from-scratch seq2seq models, one encoder from a seq2seq model with continued MLM training, and three two-stage seq2seq models warm-started with the from-scratch encoder. We report the pre-training Compute Cost for each, where
"TU" (Training Units) is defined as 100k update steps for 12 model layers with hidden dimension 1024 and batch size 1M tokens (Appendix D, E).
## 2.1.1 Encoder Model From Scratch
We train an encoder model ("roberta-12e" in Table 1) following a similar recipe to XLM-RoBERTa
(Conneau et al., 2020a), using the MLM objectve
(Figure 2a) of randomly masking 15% of subword tokens, as introduced in BERT (Devlin et al., 2019).
We use a batch size of 1M tokens and train for 500k update steps. Notably, these settings match our seq2seq models. We use "PreLayerNorm" (Xiong et al., 2020), moving the layer norms to inside residual blocks to improve training stability.
## 2.1.2 Seq2Seq Objectives
Our seq2seq training follows the architecture and de-noising task of BART and mBART (Lewis et al., 2020; Liu et al., 2020); the only architecture change we make is to again use PreLayerNorm.
The de-noising objective selects 15% of the tokens in the input (spans of length ∼ Poisson(3)),
and either (i) simply drops them, or (ii) replaces each selected span with a single mask token. The model is trained to reconstruct the original input entirely. See Figures 2b and 2c, respectively. We add a suffix "-mask" to the model names that use masking instead of dropping the tokens. Intuitively, adding an explicit mask token for de-noising makes the reconstruction task easier, as the decoder knows exactly where the missing tokens are needed.
## 2.1.3 Seq2Seq Models From Scratch
All of our seq2seq models use 12 encoder layers
("12e"). The first five models are trained from scratch starting from randomly initialized weights.
The models "bart-12e12d" and "bart-12e12d-mask" use 12-layer decoders (same number as encoder layers) using the seq2seq de-noising training objective without masking and with masking, respectively.
The remaining three models use a smaller decoder of either 2 layers ("bart-12e2d" without masking,
"bart-12e2d-mask" with masking) or 1 layer ("bart12e1d-mask", with masking). We hypothesize that reducing the size of the decoder may strengthen the encoder when it is extracted and used on its own.
## 2.1.4 Recipe 1: Encoder Of Seq2Seq + Mlm
We extract the encoder from the seq2seq model
"bart-12e12d" and continue training via MLM for 100k updates ("bart-12e12d+mlm"). We initialize the MLM head from the input embedding and untie.
## 2.1.5 Recipe 2: Two-Stage Seq2Seq Models
Finally, we train three seq2seq models following the two-stage setup (Figure 1). We initialize the encoder weights of the seq2seq model with the MLM
encoder "roberta-12e" (Section 2.1.1) and train via seq2seq de-noising without masking. The first two models train for 500k updates with the encoder always frozen: "2stage-bart-12e12d" uses standard cross-attention, where the decoder attends to only the final encoder layer, and "2stage-bart-12e12dattn-f" uses a novel application of **attention fusion**
(Cao et al., 2022) during cross-attention, where the decoder attends to all encoder layers.
| Model | Encoder | Decoder | Encoder | Decoder | Compute |
|------------------------------------------------------------------|-----------|-----------|-------------------|-------------|-------------------------|
| Layers | Layers | Updates | Updates | Cost (TU) | |
| Encoder Model From Scratch (MLM only) | | | | | |
| roberta-12e | 12 | 0 | 500k | 0 | 5.0 |
| Seq2Seq Models From Scratch (de-noising only) | | | | | |
| bart-12e12d | 12 | 12 | 500k | 500k | 10.0 |
| bart-12e12d-mask | 12 | 12 | 500k | 500k | 10.0 |
| bart-12e2d | 12 | 2 | 500k | 500k | 5.8 |
| bart-12e2d-mask | 12 | 2 | 500k | 500k | 5.8 |
| bart-12e1d-mask | 12 | 1 | 500k | 500k | 5.4 |
| Recipe 1: Encoder of Seq2Seq + MLM | | | | | |
| bart-12e12d+mlm | 12 | 12 | 500k (s2s) + 100k | 500k | 10.0 (s2s) + 1.0 = 11.0 |
| Recipe 2: Two-Stage Seq2Seq Models (warm-start with MLM encoder) | | | | | |
| 2stage-bart-12e12d | 12 | 12 | 500k (MLM) | 500k | 5.0 (MLM) + 7.5 = 12.5 |
| 2stage-bart-12e12d-attn-f | 12 | 12 | 500k (MLM) | 500k | 5.0 (MLM) + 7.5 = 12.5 |
| 2stage-bart-12e12d-unfrz | 12 | 12 | 500k (MLM) + 150k | 200k + 150k | 5.0 (MLM) + 6.0 = 11.0 |
Table 1: Model architecture details. All models use a batch size of 1M tokens with hidden dimension of 1024, feed-forward dimension of 4096 and 16 attention heads.
The last model, "2stage-bart-12e12d-unfrz" uses standard cross-attention and **unfreezes the encoder partway through training**, applying 200k update steps with the encoder frozen, then 150k update steps with the encoder unfrozen.
In all cases, we initialize and tie the decoder embeddings from/to the encoder embeddings and keep them frozen as long as the encoder is frozen.
The LM head is also initialized from the encoder embeddings, but it is untied from the embeddings and unfrozen from the beginning of the training.
## 2.1.6 Compute Cost Comparison
The baseline of training both models from scratch has a compute cost of 15.0 TU: 5.0 TU for "roberta12e" plus 10.0 TU for "bart-12e12d". Our proposed recipes reduce the total compute cost either by 17%
(to 12.5 TU) or by 27% (to 11.0 TU).
## 2.2 Pretraining Dataset
We pre-train on a combination of Wikipedia and mC4 (Xue et al., 2021) data in 12 languages:
Arabic, English, French, German, Hindi, Italian, Japanese, Marathi, Portuguese, Spanish, Tamil, and Telugu. We pack sequences of tokens to produce sequences of approximately 512 subword units. We allow unrelated content to be packed together in the same sequence, separated with a special symbol
"[DOC]". Maintaining a relatively constant number of subword sequences reduces padding and results in efficient compute. We up-sample data for different languages following Conneau et al. (2020a).
## 3 Fine-Tuning Results
We present the results on fine-tuning our pretrained models. All runs are averaged over three random seeds and reported as mean ± standard deviation. See Appendix C for hyperparameters.
## 3.1 Encoder Model Results
In Table 2, we compare the performance of our encoder models on four datasets: (1) XNLI (Conneau et al., 2018) sentence-pair classification, (2) mATIS++ (Xu et al., 2020) joint Intent Classification
(IC) and Slot Labeling (SL), (3) WikiANN (Pan et al., 2017) token-level Named Entity Recognition
(NER), and (4) UDPOS (Nivre et al., 2020) tokenlevel Part-of-Speech tagging (POS) (XTREME (Hu et al., 2020) version). For each task, we follow the cross-lingual zero-shot setting: train and validate on English data only, then report on the test set in English ("en") and the average over the zero-shot langauges ("avg-0s"). Appendix B shows results on each language.
## We Find That The Mlm Encoder Performs Best
on all tasks except for mATIS++ IC avg-0s setting. The encoder of seq2seq ("bart-12e12d") is only slightly behind on the sentence-level tasks, on en/avg-0s by 0.6/1.1 points on XNLI (83.9 vs.
84.5 / 74.7 vs. 75.8), and 1.0/1.0 points on mATIS++ IC (96.8 vs. 97.8 / 86.2 vs. 87.2). However, the gap is much larger on the sequence labeling tasks: on en/avg-0s, 3.2/17.3 points on mATIS++
SL (92.5 vs. 95.7 / 44.3 vs. 61.6), 6.4/9.0 points on WikiANN NER (76.6 vs. 83.0 / 52.1 vs. 61.1), and
| Classification | Sequence Labeling | | | | | | | | | |
|---------------------------------------------|---------------------|-------------------|-------------------|-------------------|-------------------|-------------------|----------|----------|----------|----------|
| XNLI (acc.) | mATIS++ IC (acc.) | mATIS++ SL (f1) | WikiANN (f1) | UDPOS (f1) | | | | | | |
| Encoder | en | avg-0s | en | avg-0s | en | avg-0s | en | avg-0s | en | avg-0s |
| Encoder Model From Scratch (MLM only) | | | | | | | | | | |
| roberta-12e | 84.5±0.5 | 75.8±0.2 | 97.8±0.1 | 87.2±4.1 | 95.7±0.1 | 61.6±0.6 | 83.0±0.1 | 61.1±0.4 | 95.8±0.0 | 73.5±0.2 |
| Encoder of Seq2Seq Models (de-noising only) | | | | | | | | | | |
| bart-12e12d | 83.9±0.2 74.7±0.3 | 96.8±0.1 86.2±1.5 | 92.5±0.3 44.3±1.3 | 76.6±0.2 52.1±0.9 | 94.3±0.7 61.5±0.4 | | | | | |
| bart-12e12d-mask 83.9±0.4 75.0±0.6 | 97.1±0.1 87.3±0.7 | 91.1±0.9 41.3±1.3 | 73.2±0.1 48.4±0.6 | 93.3±0.1 55.1±0.4 | | | | | | |
| bart-12e2d | 71.3±0.1 59.7±0.5 | 96.1±0.1 79.1±0.8 | 91.4±0.1 38.2±1.7 | 69.3±0.5 42.9±0.1 | 92.1±0.1 50.7±0.5 | | | | | |
| bart-12e2d-mask | 82.9±0.3 73.8±0.2 | 96.8±0.1 | 88.1±0.9 | 92.3±0.3 48.0±1.4 | 76.5±0.2 54.0±0.6 | 93.3±0.1 54.0±0.6 | | | | |
| bart-12e1d-mask | 82.4±0.2 72.7±0.1 | 97.0±0.1 87.6±0.5 | 92.8±0.5 49.3±1.2 | 74.6±0.5 48.5±0.3 | 92.4±0.1 46.3±1.7 | | | | | |
| Recipe 1: Encoder of Seq2Seq Model + MLM | | | | | | | | | | |
| bart-12e12d+mlm | 80.3±0.4 69.0±0.4 | 97.2±0.4 83.9±1.6 | 95.3±0.2 56.5±2.8 | 79.9±0.2 47.5±0.5 | 95.1±0.0 60.7±0.9 | | | | | |
Table 2: Encoder results per task, English and avg. zero-shot. The best (second) mean result is bolded (underlined).
| Seq2Seq Models | mTOP (acc.) | XSUM (ROUGE) | | | |
|------------------------------------------------------------------|---------------|----------------|------------|------------|------------|
| en | avg-0s | R-1 | R-2 | R-L | |
| Seq2Seq Models From Scratch (de-noising only) | | | | | |
| bart-12e12d | 83.4±0.2 | 45.7±1.1 | 40.37±0.07 | 17.37±0.06 | 32.46±0.06 |
| bart-12e12d-mask | 83.2±0.5 | 46.9±0.5 | 40.63±0.09 | 17.48±0.10 | 32.63±0.06 |
| Recipe 2: Two-Stage Seq2Seq Models (warm-start with MLM encoder) | | | | | |
| 2stage-bart-12e12d | 82.0±1.1 | 46.8±1.1 | 40.12±0.06 | 17.13±0.03 | 32.16±0.01 |
| 2stage-bart-12e12d-attn-f | 80.6±1.3 | 46.4±0.5 | 40.13±0.06 | 17.24±0.07 | 32.28±0.03 |
| 2stage-bart-12e12d-unfrz | 83.3±0.2 | 48.2±0.5 | 40.63±0.11 | 17.58±0.03 | 32.65±0.05 |
1.5/12.0 on UDPOS (94.3 vs. 95.8 / 61.5 vs. 73.5).
This suggests that seq2seq pre-training may give the encoder the knowledge to perform sentencelevel tasks, while MLM pre-training may be particularly effective for sequence labeling tasks which use the token-level representations directly.
With a 12-layer decoder, the explicit mask token during seq2seq pre-training does not seem to improve the encoder. However, when the decoder has only 2 layers, the mask token is crucial: "bart12e2d-mask" out-performs "bart-12e2d" by a wide margin across tasks. We hypothesize that the mask token makes de-noising easier, by signaling where tokens should be filled in, and without this signal, the task is too challenging for a seq2seq model with just a 2-layer decoder. Reducing the decoder further to only 1 layer does not benefit the encoder.
Continuing training the seq2seq-extracted encoder on MLM for 100k updates does not close the gap to the from-scratch encoder across datasets.
Some tasks improve, while others degrade.
## 3.2 Seq2Seq Model Results
We evaluate the generation quality of our seq2seq models on two datasets: mTOP (Li et al., 2021)
cross-lingual zero-shot semantic parsing, and XSUM (Narayan et al., 2018) English summarization. For mTOP, following CLASP (Rosenbaum et al., 2022a), we use space-joined tokens as input, word sentinels, and SCIEM (Space- and CaseInsensitive Exact Match) metric. For both datasets, we generate outputs using beam search with k=3.
As shown in Table 3, **the two-stage model with**
encoder unfrozen partway through training is on-par with the from-scratch seq2seq model:
compared to "bart-12e12d ", "2stage-bart-12e12dunfrz" is only 0.1 points behind on mTOP en (83.3 vs. 83.4) yet 2.5 points ahead on cross-lingual zeroshot (48.2 vs. 45.7). On XSUM, the two-stage model is on-par or slightly better than the fromscratch seq2seq models.
Masking during seq2seq pre-training does not greatly impact generation quality. When the encoder is frozen ("2stage-bart-12e12d"), the results are slightly behind; attention fusion ("2stage-bart12e12d-attn-f") does not provide a clear benefit.
Overall, our proposed **two-stage seq2seq pretraining recipe provides both a multilingual encoder and a seq2seq model on-par with the two**
models trained from scratch, while reducing compute cost by 27% (from 15.0 to 11.0 TU).
## 4 Conclusion And Future Work
In this work, we studied recipes to efficiently pretrain both a multilingual encoder and a seq2seq model by re-using the weights from one model for the other. We found that the most effective recipe is to start training of a seq2seq model from a pretrained encoder and unfreeze it partway through the training. Future work can explore even more efficient pre-training strategies such as jointly training on MLM and sequence-level de-noising objectives, and probe further why the encoders trained as part of a seq2seq model do not do well on sequence labeling tasks.
## 5 Limitations
Our proposed two-stage training recipe is beneficial under the assumption that a pre-trained model is needed for generative as well as sequence labeling tasks. We believe that is typically the case, as one tries to offset the pre-training investment by using the model for as many tasks as possible, but this assumption might not apply in all cases. While we assess the effect of randomness on fine-tuning results by using multiple seeds, we have not done that for the pre-training itself. Even at our mediumsize scale, it is already prohibitively expensive to do so. The evidence for the effectiveness of the twostage approach is also limited by the number of tasks evaluated (2 sequence classification tasks, 2 sequence labeling tasks, 2 generation tasks), but we believe it is a reasonable trade-off between robust results and compute investment.
## 6 Acknowledgments
We thank Kai-Wei Chang, Nicolas Guenon Des Mesnards, and the anonymous ACL reviewers for their helpful feedback on our work.
## References
Hangbo Bao, Li Dong, Furu Wei, Wenhui Wang, Nan Yang, Xiaodong Liu, Yu Wang, Songhao Piao, Jianfeng Gao, Ming Zhou, and Hsiao-Wuen Hon.
2020. Unilmv2: Pseudo-masked language models for unified language model pre-training. *ArXiv*,
abs/2002.12804.
Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, T. J. Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeff Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020.
Language models are few-shot learners. *ArXiv*,
abs/2005.14165.
Jin Cao, Chandana Satya Prakash, and Wael Hamza.
2022. Attention fusion: a light yet efficient late fusion mechanism for task adaptation in NLU. In *Findings of the Association for Computational Linguistics:*
NAACL 2022, pages 857–866, Seattle, United States. Association for Computational Linguistics.
Cheng Chen, Yichun Yin, Lifeng Shang, Xin Jiang, Yujia Qin, Fengyu Wang, Zhi Wang, Xiao Chen, Zhiyuan Liu, and Qun Liu. 2022. bert2BERT: Towards reusable pretrained language models. In *Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long* Papers), pages 2134–2148, Dublin, Ireland. Association for Computational Linguistics.
Qian Chen, Zhu Zhuo, and Wen Wang. 2019. Bert for joint intent classification and slot filling.
Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, Parker Schuh, Kensen Shi, Sasha Tsvyashchenko, Joshua Maynez, Abhishek Baindoor Rao, Parker Barnes, Yi Tay, Noam M. Shazeer, Vinodkumar Prabhakaran, Emily Reif, Nan Du, Benton C. Hutchinson, Reiner Pope, James Bradbury, Jacob Austin, Michael Isard, Guy Gur-Ari, Pengcheng Yin, Toju Duke, Anselm Levskaya, Sanjay Ghemawat, Sunipa Dev, Henryk Michalewski, Xavier García, Vedant Misra, Kevin Robinson, Liam Fedus, Denny Zhou, Daphne Ippolito, David Luan, Hyeontaek Lim, Barret Zoph, Alexander Spiridonov, Ryan Sepassi, David Dohan, Shivani Agrawal, Mark Omernick, Andrew M. Dai, Thanumalayan Sankaranarayana Pillai, Marie Pellat, Aitor Lewkowycz, Erica Oliveira Moreira, Rewon Child, Oleksandr Polozov, Katherine Lee, Zongwei Zhou, Xuezhi Wang, Brennan Saeta, Mark Diaz, Orhan Firat, Michele Catasta, Jason Wei, Kathleen S. Meier-Hellstern, Douglas Eck, Jeff Dean, Slav Petrov, and Noah Fiedel. 2022. PaLM: Scaling language modeling with pathways. *ArXiv*, abs/2204.02311.
Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer, and Veselin Stoyanov. 2020a. Unsupervised cross-lingual representation learning at scale. In ACL.
Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer, and Veselin Stoyanov. 2020b. Unsupervised cross-lingual representation learning at scale. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*, pages 8440–
8451, Online. Association for Computational Linguistics.
Alexis Conneau, Ruty Rinott, Guillaume Lample, Adina Williams, Samuel Bowman, Holger Schwenk, and Veselin Stoyanov. 2018. XNLI: Evaluating crosslingual sentence representations. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2475–2485, Brussels, Belgium. Association for Computational Linguistics.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics.
Li Dong, Nan Yang, Wenhui Wang, Furu Wei, Xiaodong Liu, Yu Wang, Jianfeng Gao, M. Zhou, and Hsiao-Wuen Hon. 2019. Unified language model pre-training for natural language understanding and generation. *ArXiv*, abs/1905.03197.
Jack G. M. FitzGerald, Shankar Ananthakrishnan, Konstantine Arkoudas, David Bernardi, Abhishek Bhagia, Claudio Delli Bovi, Jin Cao, Rakesh Chada, Amit Chauhan, Luoxin Chen, Anurag Dwarakanath, Satyam Dwivedi, Turan Gojayev, Karthik Gopalakrishnan, Thomas Gueudré, Dilek Z. Hakkani-Tür, Wael Hamza, Jonathan Hueser, Kevin Martin Jose, Haidar Khan, Beiye Liu, Jianhua Lu, A. Manzotti, Pradeep Natarajan, Karolina Owczarzak, Gokmen Oz, Enrico Palumbo, Charith S. Peris, Chandan Prakash, Stephen Rawls, Andrew Rosenbaum, Anjali Shenoy, Saleh Soltan, Mukund Sridhar, Liz Tan, Fabian Triefenbach, Pang Wei, Haiyang Yu, Shuai Zheng, Gokhan Tur, and Premkumar Natarajan. 2022.
Alexa teacher model: Pretraining and distilling multibillion-parameter encoders for natural language understanding systems.
Junjie Hu, Sebastian Ruder, Aditya Siddhant, Graham Neubig, Orhan Firat, and Melvin Johnson.
2020. XTREME: A massively multilingual multitask benchmark for evaluating cross-lingual generalisation. In *Proceedings of the 37th International* Conference on Machine Learning, volume 119 of Proceedings of Machine Learning Research, pages 4411–4421. PMLR.
Diederik P. Kingma and Jimmy Ba. 2015. Adam:
A method for stochastic optimization. *CoRR*,
abs/1412.6980.
Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020. Bart:
Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In ACL.
Haoran Li, Abhinav Arora, Shuohui Chen, Anchit Gupta, Sonal Gupta, and Yashar Mehdad. 2021.
MTOP: A comprehensive multilingual task-oriented semantic parsing benchmark. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 2950–2962, Online. Association for Computational Linguistics.
Frederick Liu, Terry Huang, Shihang Lyu, Siamak Shakeri, Hongkun Yu, and Jing Li. 2022. Enct5: A framework for fine-tuning t5 as non-autoregressive models.
Yinhan Liu, Jiatao Gu, Naman Goyal, Xian Li, Sergey Edunov, Marjan Ghazvininejad, Mike Lewis, and Luke Zettlemoyer. 2020. Multilingual denoising pretraining for neural machine translation. *Transactions of the Association for Computational Linguistics*, 8:726–742.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019.
Roberta: A robustly optimized bert pretraining approach. *ArXiv*, abs/1907.11692.
Hiroki Nakayama. 2018. seqeval: A python framework for sequence labeling evaluation. Software available from https://github.com/chakki-works/seqeval.
Shashi Narayan, Shay B. Cohen, and Mirella Lapata.
2018. Don't give me the details, just the summary!
topic-aware convolutional neural networks for extreme summarization. In *Proceedings of the 2018* Conference on Empirical Methods in Natural Language Processing, pages 1797–1807, Brussels, Belgium. Association for Computational Linguistics.
Jianmo Ni, Gustavo Hernandez Abrego, Noah Constant, Ji Ma, Keith Hall, Daniel Cer, and Yinfei Yang. 2022.
Sentence-t5: Scalable sentence encoders from pretrained text-to-text models. In *Findings of the Association for Computational Linguistics: ACL 2022*,
pages 1864–1874, Dublin, Ireland. Association for Computational Linguistics.
Joakim Nivre, Marie-Catherine de Marneffe, Filip Ginter, Jan Hajivc, Christopher D. Manning, Sampo Pyysalo, Sebastian Schuster, Francis M. Tyers, and Daniel Zeman. 2020. Universal dependencies v2: An evergrowing multilingual treebank collection. In International Conference on Language Resources and Evaluation.
Xiaoman Pan, Boliang Zhang, Jonathan May, Joel Nothman, Kevin Knight, and Heng Ji. 2017. Cross-lingual name tagging and linking for 282 languages. In *Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long*
Papers), pages 1946–1958, Vancouver, Canada. Association for Computational Linguistics.
Alec Radford and Karthik Narasimhan. 2018. Improving language understanding by generative pretraining.
Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners.
Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020a. Exploring the limits of transfer learning with a unified text-to-text transformer. *Journal of Machine Learning Research*,
21(140):1–67.
Colin Raffel, Noam M. Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020b. Exploring the limits of transfer learning with a unified text-to-text transformer. *ArXiv*, abs/1910.10683.
Jeff Rasley, Samyam Rajbhandari, Olatunji Ruwase, and Yuxiong He. 2020. Deepspeed: System optimizations enable training deep learning models with over 100 billion parameters. In *Proceedings of the* 26th ACM SIGKDD International Conference on Knowledge Discovery Data Mining, KDD '20, page 3505–3506, New York, NY, USA. Association for Computing Machinery.
Andy Rosenbaum, Saleh Soltan, Wael Hamza, Marco Damonte, Isabel Groves, and Amir Saffari. 2022a. CLASP: Few-shot cross-lingual data augmentation for semantic parsing. In Proceedings of the 2nd Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 12th International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 444–462, Online only. Association for Computational Linguistics.
Andy Rosenbaum, Saleh Soltan, Wael Hamza, Yannick Versley, and Markus Boese. 2022b. LINGUIST: Language model instruction tuning to generate annotated utterances for intent classification and slot tagging.
In *Proceedings of the 29th International Conference on Computational Linguistics*, pages 218–241, Gyeongju, Republic of Korea. International Committee on Computational Linguistics.
Sascha Rothe, Shashi Narayan, and Aliaksei Severyn.
2020. Leveraging pre-trained checkpoints for sequence generation tasks. *Transactions of the Association for Computational Linguistics*, 8:264–280.
Victor Sanh, Albert Webson, Colin Raffel, Stephen H.
Bach, Lintang A. Sutawika, Zaid Alyafeai, Antoine Chaffin, Arnaud Stiegler, Teven Le Scao, Arun Raja, Manan Dey, M SAIFUL BARI, Canwen Xu, Urmish Thakker, Shanya Sharma, Eliza Szczechla, Taewoon Kim, Gunjan Chhablani, Nihal V. Nayak, Debajyoti Datta, Jonathan Chang, Mike Tian-Jian Jiang, Han
Wang, Matteo Manica, Sheng Shen, Zheng Xin Yong, Harshit Pandey, Rachel Bawden, Thomas Wang, Trishala Neeraj, Jos Rozen, Abheesht Sharma, Andrea Santilli, Thibault Févry, Jason Alan Fries, Ryan Teehan, Stella Rose Biderman, Leo Gao, T. G. Owe Bers, Thomas Wolf, and Alexander M. Rush. 2021.
Multitask prompted training enables zero-shot task generalization. *ArXiv*, abs/2110.08207.
Saleh Soltan, Shankar Ananthakrishnan, Jack G. M.
FitzGerald, Rahul Gupta, Wael Hamza, Haidar Khan, Charith S. Peris, Stephen Rawls, Andrew Rosenbaum, Anna Rumshisky, Chandan Prakash, Mukund Sridhar, Fabian Triefenbach, Apurv Verma, Gokhan Tur, and Premkumar Natarajan. 2022. Alexatm 20b:
Few-shot learning using a large-scale multilingual seq2seq model. *ArXiv*, abs/2208.01448.
Romal Thoppilan, Daniel De Freitas, Jamie Hall, Noam M. Shazeer, Apoorv Kulshreshtha, HengTze Cheng, Alicia Jin, Taylor Bos, Leslie Baker, Yu Du, Yaguang Li, Hongrae Lee, Huaixiu Zheng, Amin Ghafouri, Marcelo Menegali, Yanping Huang, Maxim Krikun, Dmitry Lepikhin, James Qin, Dehao Chen, Yuanzhong Xu, Zhifeng Chen, Adam Roberts, Maarten Bosma, Yanqi Zhou, Chung-Ching Chang, I. A. Krivokon, Willard James Rusch, Marc Pickett, Kathleen S. Meier-Hellstern, Meredith Ringel Morris, Tulsee Doshi, Renelito Delos Santos, Toju Duke, Johnny Hartz Søraker, Ben Zevenbergen, Vinodkumar Prabhakaran, Mark Díaz, Ben Hutchinson, Kristen Olson, Alejandra Molina, Erin HoffmanJohn, Josh Lee, Lora Aroyo, Ravindran Rajakumar, Alena Butryna, Matthew Lamm, V. O. Kuzmina, Joseph Fenton, Aaron Cohen, Rachel Bernstein, Ray Kurzweil, Blaise Aguera-Arcas, Claire Cui, Marian Croak, Ed Chi, and Quoc Le. 2022. Lamda:
Language models for dialog applications. *ArXiv*,
abs/2201.08239.
Ashish Vaswani, Noam M. Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. *ArXiv*, abs/1706.03762.
Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Transformers: State-of-the-art natural language processing.
In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38–45, Online. Association for Computational Linguistics.
Ruibin Xiong, Yunchang Yang, Di He, Kai Zheng, Shuxin Zheng, Chen Xing, Huishuai Zhang, Yanyan Lan, Liwei Wang, and Tie-Yan Liu. 2020. On layer normalization in the transformer architecture. *ArXiv*, abs/2002.04745.
Weijia Xu, Batool Haider, and Saab Mansour. 2020.
End-to-end slot alignment and recognition for crosslingual NLU. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 5052–5063, Online. Association for Computational Linguistics.
Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, and Colin Raffel. 2021. mT5: A massively multilingual pre-trained text-to-text transformer. In *Proceedings* of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics:
Human Language Technologies, pages 483–498, Online. Association for Computational Linguistics.
Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen, Christopher Dewan, Mona Diab, Xian Li, Xi Victoria Lin, Todor Mihaylov, Myle Ott, Sam Shleifer, Kurt Shuster, Daniel Simig, Punit Singh Koura, Anjali Sridhar, Tianlu Wang, and Luke Zettlemoyer. 2022. Opt: Open pre-trained transformer language models. *ArXiv*,
abs/2205.01068.
## A Additional Related Work
Pre-trained Transformer models (Vaswani et al., 2017) are commonly used in Natural Language Processing
(NLP) for both transfer learning in downstream tasks (Devlin et al., 2019; Liu et al., 2019; Radford and Narasimhan, 2018; Radford et al., 2019) and for in-context learning (Brown et al., 2020). Transformers were originally designed as sequence-to-sequence (seq2seq) models with an encoder and a decoder component (Vaswani et al., 2017). However, all three obvious variants of this architecture are now common: encoder-only (Devlin et al., 2019), decoder-only (Radford and Narasimhan, 2018; Radford et al., 2019; Brown et al., 2020; Chowdhery et al., 2022; Zhang et al., 2022; Thoppilan et al., 2022) and seq2seq (Lewis et al., 2020; Raffel et al., 2020b; Sanh et al., 2021; Dong et al., 2019; Bao et al., 2020).
Commonly, encoder transformer models are pre-trained using the MLM objective (Devlin et al., 2019).
Decoders are pre-trained using a next-token left-to-right prediction (causal) language modeling objective
(Radford and Narasimhan, 2018) or some version of autoregressive de-noising (Lewis et al., 2020).
Seq2seq models often combine these objectives (Lewis et al., 2020; Bao et al., 2020).
We follow the multilingual approach of models such as XLM-RoBERTa (Conneau et al., 2020b)
(encoder-only) and mT5/mBART (Xue et al., 2021; Liu et al., 2020) (seq2seq), where the model is pre-trained on data from multiple languages. This enables **cross-lingual zero-shot fine-tuning**, where the model is fine-tuned on task data only from a single language (usually English), then evaluated on multiple languages.
Previous literature has explored using a pre-trained encoder to initialize a larger encoder (Chen et al., 2022) or a seq2seq model (Rothe et al., 2020). The latter was applied to large-scale models, e.g.
AlexaTM 20B and AlexaTM 5B (Soltan et al., 2022; Rosenbaum et al., 2022b; FitzGerald et al., 2022).
Our work provides the first direct comparison of warm-starting vs. from-scratch seq2seq pre-training using the same data and codebase.
Recently, Sentence-T5 (Ni et al., 2022) studied the opposite direction, showing that extracting the encoder from T5 (Raffel et al., 2020a) can out-perform BERT on several sentence-level tasks. We also explore extracting the encoder from a seq2seq model, adding the novelty of the first explicit comparison with MLM encoders using the same pre-training data and codebase. Furthermore, whereas Sentence-T5 studies only sentence level tasks in English, we study both sentence-level and token-level (e.g. sequence labeling) multilingual tasks. We show that the encoder extracted from a seq2seq model under-performs on token-level tasks, motivating our proposed sequential pre-training recipes.
EncT5 (Liu et al., 2022) proposes an alternative method to fine-tune the encoder from a seq2seq model for classification and sequence labeling tasks, by attaching a randomly initialized one-layer decoder with cross-attention. They report substantial improvements on UDPOS (Part-of-Speech tagging, a sequence labeling task) compared to an mBERT (MLM encoder) model of similar encoder size, however the comparison is between models pre-trained on different data and codebases. For a cleaner comparison, we would need to implement and evaluate the EncT5 framework with our models, which is challenging since no reference implementation is available, and also because Liu et al. (2022) provide only the average number across languages for UDPOS and do not report per language. Therefore, we defer a more thorough study of EncT5 vs. standard feed-forward layer classification heads to future work.
## B Results By Language
Our main results in Section 3 (Tables 2 and 3) show only the English and average zero-shot results for brevity. Here, for completeness, we show the results on each langauge for XNLI (Table 4), mATIS++
Intent Classification (IC) (Table 5), mATIS++ Slot Labeling (SL) (Table 6), WikiANN NER (Table 7),
UDPOS (Table 8), and mTOP semantic parsing (Table 9).
## C Fine-Tuning Hyperparameters
Table 10 shows the hyperparameters for fine-tuning the pre-trained models. For encoders, we first performed a single run with learning rates among 1e-6, 3e-6, 1e-5, 3e-5, 1e-4 for each task and model, and found that the best learning rate was nearly always 1e-5 or 3e-5, with only small differences between
| Encoder | en | ar | de | es | fr | hi | avg-0s |
|---------------------------------------------|----------|----------|----------|----------|----------|----------|----------|
| Encoder Model From Scratch (MLM only) | | | | | | | |
| roberta-12e | 84.5±0.5 | 72.9±0.2 | 76.9±0.3 | 79.9±0.2 | 78.7±0.3 | 70.5±0.8 | 75.8±0.2 |
| Encoder of Seq2Seq Models (de-noising only) | | | | | | | |
| bart-12e12d | 83.9±0.2 | 71.6±0.6 | 76.0±0.5 | 79.2±0.8 | 77.8±0.1 | 68.9±0.3 | 74.7±0.3 |
| bart-12e12d-mask | 83.9±0.4 | 71.9±0.7 | 76.3±0.2 | 79.5±0.6 | 78.5±0.5 | 68.5±1.3 | 75.0±0.6 |
| bart-12e2d | 71.3±0.1 | 56.7±0.7 | 60.3±0.3 | 64.2±0.7 | 63.9±0.6 | 53.3±0.2 | 59.7±0.5 |
| bart-12e2d-mask | 82.9±0.3 | 70.9±0.4 | 74.7±0.5 | 78.1±0.3 | 76.9±0.4 | 68.2±0.5 | 73.8±0.2 |
| bart-12e1d-mask | 82.4±0.2 | 69.6±0.3 | 73.5±0.3 | 77.0±0.1 | 76.3±0.4 | 66.9±0.2 | 72.7±0.1 |
| Recipe 1: Encoder of Seq2Seq Model + MLM | | | | | | | |
| bart-12e12d+mlm | 80.3±0.4 | 65.6±0.4 | 70.6±0.2 | 72.9±0.8 | 72.6±0.2 | 63.4±0.8 | 69.0±0.4 |
| Encoder | en | de | es | fr | hi | ja | pt | avg-0s |
|---------------------------------------------|----------|----------|----------|----------|----------|-----------|----------|----------|
| Encoder Model From Scratch (MLM only) | | | | | | | | |
| roberta-12e | 97.8±0.1 | 92.7±2.2 | 96.2±0.5 | 94.6±1.4 | 79.5±4.5 | 65.6±17.1 | 94.3±2.4 | 87.2±4.1 |
| Encoder of Seq2Seq Models (de-noising only) | | | | | | | | |
| bart-12e12d | 96.8±0.1 | 91.0±2.5 | 91.0±0.4 | 93.1±1.5 | 77.7±3.6 | 72.1±4.5 | 92.2±1.8 | 86.2±1.5 |
| bart-12e12d-mask | 97.1±0.1 | 89.7±1.1 | 94.2±0.4 | 94.0±0.8 | 78.6±0.7 | 75.0±3.4 | 91.9±1.1 | 87.3±0.7 |
| bart-12e2d | 96.1±0.1 | 80.4±8.1 | 84.7±3.5 | 86.1±3.0 | 74.3±1.3 | 64.4±5.6 | 84.6±2.6 | 79.1±0.8 |
| bart-12e2d-mask | 96.8±0.1 | 92.4±0.6 | 94.5±0.5 | 94.7±0.5 | 79.1±1.5 | 73.9±5.1 | 94.4±0.4 | 88.1±0.9 |
| bart-12e1d-mask | 97.0±0.1 | 90.1±0.8 | 94.8±0.4 | 93.3±0.4 | 79.4±0.8 | 76.5±3.0 | 91.6±0.4 | 87.6±0.5 |
| Recipe 1: Encoder of Seq2Seq Model + MLM | | | | | | | | |
| bart-12e12d+mlm | 97.2±0.4 | 86.1±4.2 | 92.0±0.8 | 91.1±1.5 | 76.1±2.7 | 70.1±4.4 | 88.2±3.3 | 83.9±1.6 |
Table 4: Encoder model results by language on XNLI test sets: accuracy.
Table 5: Encoder model results by language on mATIS++ test sets, Intent Classificaiton (IC) accuracy.
those two options by model. For consistency, we then fixed the learning rate for each task and ran each model on each task with three random seeds. We use Adam (Kingma and Ba, 2015) optimization. We freeze the embedding layer which we find generally slightly improves the cross-lingual zero-shot results.
For XNLI, we follow the standard practice established in BERT (Devlin et al., 2019) to attach the classification head to the first token ("<s>" for all of our models). We also explored max pooling across all tokens and did not observe a significant difference in performance.
For mATIS++, following Chen et al. (2019), we use two separate classification heads, one for Intent Classification (IC) attached to the encoder output of the first subword token of the sequence, and the second for Slot Labeling (SL) attached to the *first subword of each whole word* in the sequence.
Similarly, for WikiANN NER and UDPOS, we again use a single classification head attached to the first subword of each whole word in the sequence. When computing f1 score for sequence labeling tasks
(mATIS++ SL and WikiANN NER), we ignore the "O" ("Outside") tag, using the seqeval (Nakayama, 2018) implementation which takes into account the BIO tags present in WikiANN.
## D Details On Compute Cost
We provide details on the compute cost reported in Table 1. The unit "TU" (Training Updates) is defined as the compute cost for 100k updates (forward and backward pass) of 12 model layers with hidden dimension 1024 and batch size 1M tokens. The encoder-only MLM model trains for 500k updates, for Compute Cost 5.0 TU. The Seq2Seq Models From Scratch have more layers, and therefore a larger Compute Cost for 500k updates. For example, "bart-12e12d" has 12 layers each for encoder and decoder, resulting in a compute cost of 10.0 for 500k updates. As a baseline, training both the MLM encoder and the seq2seq models from scratch would incur a compute cost of 5.0 + 10.0 = 15.0 TU.
| Encoder | en | de | es | fr | hi | ja | pt | avg-0s |
|---------------------------------------------|----------|----------|----------|----------|----------|----------|----------|----------|
| Encoder Model From Scratch (MLM only) | | | | | | | | |
| roberta-12e | 95.7±0.1 | 82.8±1.2 | 81.8±0.6 | 72.3±1.7 | 31.8±1.3 | 20.9±1.2 | 79.8±0.8 | 61.6±0.6 |
| Encoder of Seq2Seq Models (de-noising only) | | | | | | | | |
| bart-12e12d | 92.5±0.3 | 57.0±4.8 | 52.6±0.9 | 58.5±0.4 | 25.1±1.3 | 12.6±1.5 | 60.0±3.1 | 44.3±1.3 |
| bart-12e12d-mask | 91.1±0.9 | 54.7±2.6 | 52.9±2.4 | 52.5±0.9 | 23.2±2.9 | 10.5±1.5 | 53.9±1.7 | 41.3±1.3 |
| bart-12e2d | 91.4±0.1 | 52.0±4.1 | 53.2±2.3 | 50.3±1.7 | 11.4±0.8 | 3.9±1.5 | 58.3±2.2 | 38.2±1.7 |
| bart-12e2d-mask | 92.3±0.3 | 63.7±2.5 | 60.7±1.1 | 60.8±1.8 | 26.0±1.5 | 10.4±0.9 | 66.4±2.1 | 48.0±1.4 |
| bart-12e1d-mask | 92.8±0.5 | 65.6±4.8 | 59.5±0.4 | 61.7±0.3 | 24.7±1.7 | 16.3±1.8 | 68.0±1.6 | 49.3±1.2 |
| Recipe 1: Encoder of Seq2Seq Model + MLM | | | | | | | | |
| bart-12e12d+mlm | 95.3±0.2 | 70.5±7.6 | 79.7±1.2 | 67.3±1.4 | 33.5±3.9 | 15.9±3.8 | 72.2±0.6 | 56.5±2.8 |
Table 6: Encoder model results by language on mATIS++ test sets, Slot Labeling (SL) f1 score.
Encoder en ar de es fr hi it ja mr pt ta te avg-0s
roberta-12e 83.0 45.7 73.5 68.9 75.4 71.5 76.7 28.0 57.5 74.7 53.9 46.5 **61.1**
±0.1 ±2.1 ±0.7 ±0.5 ±0.4 ±0.5 ±0.7 ±1.0 ±2.0 ±0.2 ±0.3 ±0.7 ±0.4
Encoder of Seq2Seq Models (de-noising only)
| Encoder Model From Scratch (MLM only) | | | | | | | | | | | | |
|---------------------------------------------|------|------|------|------|------|------|------|------|------|------|------|------|
| ±0.1 | ±2.1 | ±0.7 | ±0.5 | ±0.4 | ±0.5 | ±0.7 | ±1.0 | ±2.0 | ±0.2 | ±0.3 | ±0.7 | ±0.4 |
| Encoder of Seq2Seq Models (de-noising only) | | | | | | | | | | | | |
| ±0.2 | ±1.8 | ±1.2 | ±1.7 | ±0.8 | ±2.2 | ±0.7 | ±0.6 | ±1.9 | ±0.4 | ±0.2 | ±2.3 | ±0.9 |
| ±0.1 | ±0.8 | ±0.3 | ±0.5 | ±0.5 | ±0.1 | ±0.5 | ±0.7 | ±2.9 | ±1.5 | ±1.1 | ±1.9 | ±0.6 |
| ±0.5 | ±1.7 | ±1.0 | ±0.5 | ±0.5 | ±0.8 | ±0.8 | ±0.4 | ±3.3 | ±0.2 | ±0.8 | ±1.0 | ±0.1 |
| ±0.2 | ±1.9 | ±0.5 | ±2.4 | ±0.5 | ±1.0 | ±0.3 | ±0.7 | ±3.3 | ±0.5 | ±2.4 | ±3.1 | ±0.6 |
| ±0.5 | ±2.5 | ±1.6 | ±2.0 | ±0.6 | ±1.0 | ±0.3 | ±0.8 | ±0.4 | ±0.5 | ±0.9 | ±2.3 | ±0.3 |
| Recipe 1: Encoder of Seq2Seq Model + MLM | | | | | | | | | | | | |
| ±0.2 | ±1.0 | ±0.6 | ±0.5 | ±0.3 | ±1.4 | ±0.4 | ±0.3 | ±0.8 | ±0.7 | ±1.1 | ±0.9 | ±0.5 |
bart-12e12d 76.6 44.4 64.8 61.1 70.7 62.2 69.7 10.2 41.0 69.7 42.0 37.0 52.1
±0.2 ±1.8 ±1.2 ±1.7 ±0.8 ±2.2 ±0.7 ±0.6 ±1.9 ±0.4 ±0.2 ±2.3 ±0.9
bart-12e12d-mask 73.2 30.6 57.7 59.8 67.4 60.7 66.1 8.1 41.3 68.9 37.6 33.8 48.4
±0.1 ±0.8 ±0.3 ±0.5 ±0.5 ±0.1 ±0.5 ±0.7 ±2.9 ±1.5 ±1.1 ±1.9 ±0.6
bart-12e2d 69.3 31.0 53.5 54.9 61.2 48.7 62.0 7.1 33.2 61.9 31.1 28.0 42.9
±0.5 ±1.7 ±1.0 ±0.5 ±0.5 ±0.8 ±0.8 ±0.4 ±3.3 ±0.2 ±0.8 ±1.0 ±0.1
bart-12e2d-mask 76.5 45.2 63.5 65.0 70.8 64.0 69.9 10.3 44.4 71.5 47.7 41.4 54.0
±0.2 ±1.9 ±0.5 ±2.4 ±0.5 ±1.0 ±0.3 ±0.7 ±3.3 ±0.5 ±2.4 ±3.1 ±0.6
bart-12e1d-mask 74.6 40.9 54.5 64.0 64.3 54.3 65.4 9.4 43.0 67.3 37.8 32.1 48.5
±0.5 ±2.5 ±1.6 ±2.0 ±0.6 ±1.0 ±0.3 ±0.8 ±0.4 ±0.5 ±0.9 ±2.3 ±0.3
Recipe 1: Encoder of Seq2Seq Model + MLM
bart-12e12d+mlm 79.9 29.8 62.8 60.9 68.9 58.7 68.7 13.6 29.4 69.5 33.7 27.0 47.5
±0.2 ±1.0 ±0.6 ±0.5 ±0.3 ±1.4 ±0.4 ±0.3 ±0.8 ±0.7 ±1.1 ±0.9 ±0.5
Recipe 1 (Encoder of Seq2Seq + MLM), first pays compute cost 10.0 TU for the seq2seq training, then 1.0 TU for 100k MLM updates on the extracted encoder, for a total of 11.0 TU.
For Recipe 2 (Two-Stage Seq2Seq Models warm-started with MLM encoder), we first pay a compute cost of 5.0 TU from MLM pre-training of the encoder, then add compute cost for the second stage seq2seq pre-training. When the encoder is frozen, we only need to compute the forward pass for the encoder, not the backward pass. We estimate that when the encoder is frozen, its forward pass uses 1/2 the compute as a forward and backward pass would use. (In reality, the ratio is likely less, as we also save memory by not needing to store the optimizer states.) Therefore, when the encoder is frozen for "2stage-bart-12e12d" and "2stage-bart-12e12d-attn-f", the 500k decoder updates incur a compute cost of 2.5 on the encoder side and 5.0 on the decoder side. Adding this to the 5.0 for MLM initialization gives a total compute cost of 5.0 + 7.5 + 12.5 TU.
For "2stage-bart-12e12d-unfrz", the 200k updates with frozen encoder incur a compute cost of 1.0 TU
on the encoder side and 2.0 TU on the decoder size for a total of 3.0 TU. During the final 150k updates, the encoder is unfrozen, so the compute cost is 3.0. Adding the 5.0 compute cost for MLM Encoder initialization, the total compute cost for this model is 5.0 + 3.0 + 3.0 = 11.0 TU.
| Encoder | en | ar | de | es | fr | hi | it | ja | mr | pt | ta | te | avg-0s |
|---------------------------------------------|------|------|------|------|------|------|------|------|------|------|------|------|----------|
| Encoder Model From Scratch (MLM only) | | | | | | | | | | | | | |
| roberta-12e | 95.8 | 65.9 | 86.2 | 85.6 | 77.6 | 67.5 | 88.0 | 44.9 | 74.4 | 86.1 | 55.1 | 76.8 | 73.5 |
| ±0.0 | ±0.2 | ±0.2 | ±0.5 | ±0.2 | ±0.6 | ±0.4 | ±1.3 | ±1.7 | ±0.3 | ±0.1 | ±1.2 | ±0.2 | |
| Encoder of Seq2Seq Models (de-noising only) | | | | | | | | | | | | | |
| bart-12e12d | 94.3 | 54.0 | 75.0 | 70.7 | 66.7 | 57.0 | 71.2 | 32.1 | 61.5 | 72.8 | 47.4 | 68.6 | 61.5 |
| ±0.7 | ±0.4 | ±3.9 | ±5.5 | ±1.1 | ±1.4 | ±2.4 | ±7.9 | ±9.7 | ±3.0 | ±4.5 | ±7.8 | ±0.4 | |
| bart-12e12d-mask | 93.3 | 49.0 | 62.4 | 58.9 | 56.4 | 50.6 | 60.9 | 27.3 | 60.0 | 64.3 | 47.3 | 69.0 | 55.1 |
| ±0.1 | ±1.3 | ±1.2 | ±0.5 | ±0.4 | ±1.4 | ±0.6 | ±1.1 | ±1.1 | ±0.5 | ±0.5 | ±0.9 | ±0.4 | |
| bart-12e2d | 92.1 | 43.5 | 60.8 | 58.4 | 54.6 | 42.3 | 58.5 | 16.8 | 57.2 | 63.2 | 41.4 | 61.1 | 50.7 |
| ±0.1 | ±0.9 | ±1.5 | ±1.7 | ±1.4 | ±0.3 | ±1.3 | ±0.3 | ±3.0 | ±0.9 | ±0.3 | ±1.2 | ±0.5 | |
| bart-12e2d-mask | 93.3 | 48.9 | 61.7 | 52.8 | 52.9 | 48.8 | 58.1 | 27.1 | 63.3 | 59.3 | 48.1 | 73.4 | 54.0 |
| ±0.1 | ±0.6 | ±2.6 | ±0.9 | ±1.8 | ±0.5 | ±1.2 | ±1.2 | ±1.5 | ±1.6 | ±1.0 | ±2.2 | ±0.6 | |
| bart-12e1d-mask | 92.4 | 44.8 | 52.5 | 43.8 | 43.8 | 43.0 | 47.9 | 19.8 | 58.4 | 53.0 | 42.1 | 60.0 | 46.3 |
| ±0.1 | ±1.4 | ±3.4 | ±1.6 | ±2.8 | ±2.6 | ±3.2 | ±1.7 | ±1.9 | ±1.5 | ±1.1 | ±1.9 | ±1.7 | |
| Recipe 1: Encoder of Seq2Seq Model + MLM | | | | | | | | | | | | | |
| bart-12e12d+mlm | 95.1 | 53.5 | 78.2 | 76.1 | 68.2 | 56.0 | 72.8 | 39.6 | 49.4 | 74.9 | 41.9 | 57.5 | 60.7 |
| ±0.0 | ±1.3 | ±1.1 | ±0.7 | ±1.8 | ±2.0 | ±0.7 | ±1.4 | ±1.0 | ±1.0 | ±0.3 | ±1.9 | ±0.9 | |
Table 8: Encoder model results by language on UDPOS Part-of-Speech tagging (POS) test sets, f1 score.
| Model | en | fr | de | es | hi | avg-0s |
|------------------------------------------------------------------|-----------|-----------|-----------|-----------|-----------|-----------|
| Seq2Seq Models From Scratch (de-noising only) | | | | | | |
| bart-12e12d | 83.4 ±0.2 | 54.3 ±1.2 | 48.5 ±1.7 | 51.6 ±1.6 | 28.4 ±0.5 | 45.7 ±1.1 |
| bart-12e12d-mask | 83.2 ±0.5 | 53.9 ±0.6 | 51.0 ±0.4 | 53.2 ±0.9 | 29.3 ±0.2 | 46.9 ±0.5 |
| Recipe 2: Two-Stage Seq2Seq Models (warm-start with MLM encoder) | | | | | | |
| 2stage-bart-12e12d | 82.0 ±1.1 | 52.3 ±.06 | 49.6 ±1.4 | 54.4 ±0.5 | 28.8 ±0.3 | 46.3 ±0.3 |
| 2stage-bart-12e12d-attn-f | 80.6 ±1.3 | 52.6 ±0.7 | 49.8 ±0.7 | 53.7 ±0.8 | 29.7 ±0.4 | 46.4 ±0.5 |
| 2stage-bart-12e12d-unfrz | 83.3 ±0.2 | 55.2 ±1.1 | 51.3 ±1.6 | 55.3 ±1.2 | 31.1 ±0.2 | 48.2 ±0.5 |
Table 9: Seq2Seq model results by language on mTOP semantic parsing test sets, SCIEM.
## E Pre-Training Details
We show an example sentence for each of our pre-training objectives in Figure 2.
Models were pre-trained (8 or 16 machines) and fine-tuned (1 machine) on AWS p3dn.24xlarge instances. For MLM pre-training, we use a peak learning rate of 1.5e-4 (1e-4 for the second stage of Recipe 1) warmed up over 5k update steps (1k for the second stage of Recipe 1) and decayed linearly down to 5e-6 over the total number of updates (500k or 100k, respectively). For seq2seq pre-training, we use the same learning rate as MLM pre-training: peak of 1.5e-4, warmed up over 5k updates, and linearly decayed down to 5e-6 for the duration of pre-training. For all pre-training runs, we use dropout of 0.1.
Our code is derived from HuggingFace (Wolf et al., 2020). We use DeepSpeed (Rasley et al., 2020)
ZeRO Stage 1 to accelerate training.
## F Dataset Sources
We show in Table 11 the source locations of the datasets we use for fine-tuning evaluation.
| Parameter | Encoder tasks | Seq2Seq tasks | | | | |
|-------------------------|-----------------|-----------------------------------|------------------|-------------|-------------------------------|-------------|
| XNLI | mATIS++ | WikiANN | UDPOS | mTOP | XSUM | |
| Peak Learning Rate (LR) | 1e-5 | 3e-5 | 3e-5 | 3e-5 | 5e-6 | 5e-6 |
| LR warmup type | linear | linear | linear | linear | exponential | exponential |
| from 0 | from 0 | from 0 | from 0 | from 1e-7 | from 1e-7 | |
| LR warmup num steps | 1000 | 500 | 300 | 1000 | 1000 | 1000 |
| LR decay type | linear to 0 | linear to 0 | linear to 0 | linear to 0 | linear to 1e-7 linear to 1e-7 | |
| Batch size | 128 | 128 | 128 | 128 | 32 | 32 |
| Epochs | 5 | 200 | 20 | 56 | 200 | 200 |
| Validation Metric | Accuracy | Slot Labeling f1 Slot Labeling f1 | Slot Labeling f1 | Exact Match | Perplexity | |
| Max number of updates | 30k | 7k | 3k | 9k | ∼50k | ∼50k |
| [256,256] gelu | | | | | | |
| Classification head(s) | [512] gelu | each for | [512] gelu | [512] gelu | - | - |
| IC and SL | | | | | | |
![12_image_0.png](12_image_0.png)
| Dataset | Source |
|-----------|---------------------------------------------|
| XNLI | https://huggingface.co/datasets/xnli |
| mATIS++ | https://github.com/amazon-science/multiatis |
| WikiANN | https://huggingface.co/datasets/wikiann |
| UDPOS | https://huggingface.co/datasets/xtreme |
| mTOP | https://fb.me/mtop_dataset |
| XSUM | https://huggingface.co/datasets/xsum |
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
✓ A1. Did you describe the limitations of your work?
Left blank.
A2. Did you discuss any potential risks of your work?
Not applicable.
✓ A3. Do the abstract and introduction summarize the paper's main claims?
Left blank.
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✗ **Did You Use Or Create Scientific Artifacts?**
Left blank.
B1. Did you cite the creators of artifacts you used?
No response.
B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
No response.
B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
No response.
B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
No response.
B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
No response.
B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
No response.
## C ✓ **Did You Run Computational Experiments?** Left Blank.
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
Left blank.
The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance.
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Left blank.
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
Left blank.
C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
Not applicable. Left blank.
D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
No response.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? No response.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
No response.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
No response.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
No response. |
fei-etal-2023-constructing | Constructing Code-mixed {U}niversal {D}ependency Forest for Unbiased Cross-lingual Relation Extraction | https://aclanthology.org/2023.findings-acl.599 | Latest efforts on cross-lingual relation extraction (XRE) aggressively leverage the language-consistent structural features from the universal dependency (UD) resource, while they may largely suffer from biased transfer (e.g., either target-biased or source-biased) due to the inevitable linguistic disparity between languages. In this work, we investigate an unbiased UD- based XRE transfer by constructing a type of code-mixed UD forest. We first translate the sentence of the source language to the parallel target-side language, for both of which we parse the UD tree respectively. Then, we merge the source-/target-side UD structures as a unified code-mixed UD forest. With such forest features, the gaps of UD-based XRE between the training and predicting phases can be effectively closed. We conduct experiments on the ACE XRE benchmark datasets, where the results demonstrate that the proposed code-mixed UD forests help unbiased UD-based XRE transfer, with which we achieve significant XRE performance gains. | # Constructing Code-Mixed Universal Dependency Forest For Unbiased Cross-Lingual Relation Extraction
Hao Fei1**, Meishan Zhang**2∗
, Min Zhang2**, Tat-Seng Chua**1 1 Sea-NExT Joint Lab, School of Computing, National University of Singapore 2 Harbin Institute of Technology (Shenzhen), China
{haofei37,dcscts}@nus.edu.sg, [email protected], [email protected]
## Abstract
Latest efforts on cross-lingual relation extraction (XRE) aggressively leverage the languageconsistent structural features from the universal dependency (UD) resource, while they may largely suffer from biased transfer (e.g., either target-biased or source-biased) due to the inevitable linguistic disparity between languages.
In this work, we investigate an unbiased UDbased XRE transfer by constructing a type of code-mixed UD forest. We first translate the sentence of the source language to the parallel target-side language, for both of which we parse the UD tree respectively. Then, we merge the source-/target-side UD structures as a unified code-mixed UD forest. With such forest features, the gaps of UD-based XRE between the training and predicting phases can be effectively closed. We conduct experiments on the ACE XRE benchmark datasets, where the results demonstrate that the proposed code-mixed UD forests help unbiased UD-based XRE transfer, with which we achieve significant XRE
performance gains.
## 1 Introduction
Relation extraction (RE) aims at extracting from the plain texts the meaningful *entity mentions* paired with *semantic relations*. One widelyacknowledged key bottleneck of RE is called the long-range dependence (LRD) issue, i.e., the decay of dependence clues of two mention entities with increasing distance in between (Culotta and Sorensen, 2004; Zhang et al., 2018; Fei et al., 2021).
Fortunately, prior work extensively reveals that the syntactic dependency trees help resolve LRD issue effectively, by taking advantage of the close relevance between the dependency structure and the relational RE pair (Miwa and Bansal, 2016; Can et al.,
2019). In cross-lingual RE, likewise, the universal dependency trees (de Marneffe et al., 2021) are leveraged as effective language-persistent features
∗Corresponding author
![0_image_0.png](0_image_0.png)
in the latest work for better transfer from source
(SRC) language to target (TGT) language (Subburathinam et al., 2019; Fei et al., 2020b; Taghizadeh and Faili, 2021).
Current state-of-the-art (SoTA) XRE work leverages the UD trees based on the model transfer paradigm, i.e., training with SRC-side UD features while predicting with TGT-side UD features (Ahmad et al., 2021; Taghizadeh and Faili, 2022). Model transfer method transfers the shareable parts of features from SRC to TGT, while unfortunately it could fail to model the TGT-side language-specific features, and thus results in a clear *TGT-side bias*. In fact, the TGT-side bias can
![1_image_0.png](1_image_0.png)
be exacerbated in UD-based model transfer, cf. Fig.
1(a). Given that UD has a universal annotation standard, inevitably, there is still a syntax discrepancy between the two languages due to their intrinsic linguistic nature. We show (cf. §3 for more discussion) that between the parallel sentences in English and Arabic, around 30% words are misaligned and over 35% UD word-pairs have no correspondence.
Such structural discrepancies consequently undermine the model transfer efficacy.
One alternative solution is using annotation projection (Padó and Lapata, 2009; Kim et al., 2010; McDonald et al., 2013; Xiao and Guo, 2015). The main idea is directly synthesizing the pseudo TGTside training data, so that the TGT-side linguistic features (i.e., UD trees) are well preserved. However, it could be a double side of the sword in the annotation projection paradigm. It manages to learn the language-specific features, while at the cost of losing some high-efficient structural knowledge from SRC-side UD, thus leading to the SRC-biased UD feature transfer. As illustrated in Fig. 1(b), the dependence paths in the SRC UD
tree that effectively solves the LRD issues for the task are sacrificed when transforming the SRC tree into the TGT tree.
This motivates us to pursue an unbiased and holistic UD-based XRE transfer by considering both the SRC and TGT UD syntax features. To reach the goal, in this work, we propose combining the view of model transfer and annotation projection paradigm, and constructing a type of codemixed UD forests. Technically, we first project the SRC training instances and TGT predicting instances into the opposite languages, respectively.
Then, we parse the parallel UD trees of both sides respectively via existing UD parsers. Next, merge each pair of SRC and TGT UD trees together into the code-mixed UD forest, in which the wellaligned word pairs are merged to the TGT ones in the forest, and the unaligned words will all be kept in the forest. With these code-mixed syntactic features, the gap between training and predicting phases can be closed, as depicted in Fig. 1(c).
We encode the UD forest with the graph attention model (GAT; Velickovic et al., 2018) for feature encoding. We perform experiments on the representative XRE benchmark, ACE05 (hristopher Walker et al., 2006), where the transfer results from English to Chinese and Arabic show that the proposed code-mixed forests bring significant improvement over the current best-performing UD-based system, obtaining the new SoTA results. Further analyses verify that 1) the code-mixed UD
forests help maintain the debiased cross-lingual transfer of RE task, and 2) the larger the difference between SRC and TGT languages, the bigger the boosts offered by code-mixed forests. To our knowledge, we are the first taking the complementary advantages of annotation projection and model transfer paradigm for unbiased XRE transfer. We verify that the gap between training and predicting of UD-based XRE can be bridged by synthesizing a type of code-mixed UD forests. The resource can be found at https://github.com/
scofield7419/XLSIE/.
## 2 Related Work
Different from the sequential type of information extraction (IE), e.g., named entity recognition
(NER) (Cucerzan and Yarowsky, 1999), RE not only detects the mentions but also recognizes the semantic relations between mentions. RE has long received extensive research attention within the last decades (Zelenko et al., 2002). Within the community, research has revealed that the syntactic dependency trees share close correlations with RE
or broad-covering information extraction tasks in structure (Fei et al., 2021; Wu et al., 2021; Fei et al.,
2022), and thus the former is frequently leveraged as supporting features for enhancing RE. In XRE,
the key relational features between words need to be transferred between languages, which motivates the incorporation of UD tree features that have consistent annotations and principles across various languages. Thus, UD-based systems extensively achieve the current SoTA XRE (Lu et al., 2020; Taghizadeh and Faili, 2021; Zhang et al., 2021).
This work inherits the prior wisdom, and leverages the UD features.
Model transfer (Kozhevnikov and Titov, 2013; Ni and Florian, 2019; Fei et al., 2020b) and annotation projection (Björkelund et al., 2009; Mulcaire et al., 2018; Daza and Frank, 2019; Fei et al., 2020a; Lou et al., 2022) are two mainstream avenues in structural cross-lingual transfer track. The former trains a model on SRC annotations and them make predictions with TGT instances, i.e., transferring the shared language-invariant features. The latter directly synthesizes the pseudo training instances in TGT language based on some parallel sentences, in which the TGT-specific features are retained to the largest extent. As we indicated earlier, in both two paradigms the UD tree features can be unfortunately biased during the transfer, thus leading to the underutilization of UD resource. This work considers a holistic viewpoint, integrating both the two cross-lingual transfer schemes and combining both the SRC and TGT syntax trees by code mixing.
Several prior studies have shown that combining the raw SRC and pseudo TGT (from projection)
data for training helps better transfer. It is shown that although the two data are semantically identical, SRC data still can offer some complementary language-biased features (Fei et al., 2020a,b; Zhen et al., 2021). Yet we emphasize that different from regular cross-lingual text classification or sequential prediction, XRE relies particularly on the syntactic structure features, e.g., UD, and thus needs a more fine-grained approach for SRC-TGT
data ensembling, instead of simply instance stacking. Thus, we propose merging the SRC and TGT
syntax trees into the code-mixed forests.
Code mixing has been explored in several different NLP applications (Labutov and Lipson, 2014; Joshi et al., 2016; Banerjee et al., 2018; Samanta et al., 2019), where the core idea is creating data piece containing words from different languages simultaneously. For example, Samanta et al. (2019)
introduce a novel data augmentation method for enhancing the recognition of code-switched sentiment analysis, where they replace the constituent phrases with code-mixed alternatives. Qin et al.
(2020) propose generating code-switching data to augment the existing multilingual language models for better zero-shot cross-lingual tasks. While we notice that most of the works focus on the development of code-mixed sequential texts, this work considers the one for structural syntax trees. Our work is partially similar to Zhang et al. (2019) on the code-mixed UD tree construction. But ours differentiate theirs in that Zhang et al. (2019) target better UD parsing itself, while we aim to improve downstream tasks.
## 3 Observations On Ud Bias 3.1 Bias Source Analysis
As mentioned, even though UD trees define consistent annotations across languages, it still falls short on wiping all syntactic bias. This is inevitably caused by the underlying linguistic disparity deeply embedded in the language itself. Observing the linguistic discrepancies between different languages, we can summarize them into following three levels:
1) Word-level Changes.
- **Word number.** The words referring to same semantics in different languages vary, e.g., in English one single-token word may be translated in Chinese with more than one token.
- **Part of speech.** In different languages a parallel lexicon may come with different part of speech.
- **Word order.** Also it is a common case that the word order varies among parallel sentences in different languages.
## 2) Phrase-Level Change.
- **Modification type.** A modifier of a phrasal constituent can be changed when translating into another languages. For example, in English, 'in the distance' is often an adverbial modifier, while its counterpart in Chinese '遥 远的' plays a role of an attribute modifier.
- **Change of pronouns.** English grammar has strict structure, while in some other languages the grammar structures may not strict. For example, in English, it is often case to use relative pronouns (e.g., which, that, who) to refer to the prior mentions, while in other languages, such as Chinese, the personal pronouns (e.g., which, that, who) will be used to refer the prior mentions.
- **Constituency order change.** Some constituent phrases will be reorganized and reordered from one language to another language, due to the differences in grammar rules.
## 3) Sentence-Level Change.
- **Transformation between active and passive**
sentences. In English it could be frequent to use the passive forms of sentences, while being translated into other languages the forms will be transformed into active types, where the words and phrases in the whole sentences can be reversed.
- **Transformation between clause and main**
sentence. In English the attributive clauses and noun clauses are often used as subordinate components, while they can be translated into two parallel clauses in other languages.
- **Change of reading order of sentences.** The majority of the languages in this world have the reading order of from-left-to-right, such as English, French, etc. But some languages, e.g., under Afro-Asiatic family, Arabic, Hebrew, Persian, Sindhi and Urdu languages read from right to left.
## 3.2 Ud Bias Statistics
In Fig. 3 we present the statistics of such bias between the parallel UD trees in different languages, such as the misaligned words, mismatched UD
(w↷
i wj ) pair and UD path of (e↷
s*· · ·*↷ eo) relational pair. Fig. 3(a) reveals that languages under different families show distinct divergences. And the more different of languages, the greater the divergences (e.g., English to Arabic). Fig. 3(b)
indicates that complex sentences (e.g., compound sentences) bring larger bias; and in the real world, complex sentences are much more ubiquitous than simple ones. Also, the mismatch goes worse when the UD core predicates are nouns instead of verbs.
![3_image_0.png](3_image_0.png)
## 4 Code-Mixed Ud Forest Construction
To eliminate such discrepancies for unbiased UDfeature transfer, we build the code-mixed UD
forests, via the following six steps.
▶ **Step 1: translating a sentence** x Src **in SRC**
language to the one x Tgt **in TGT language.**1 This step is to generate a pseudo parallel sentence pair in both TGT and SRC languages. We accomplish this by using the state-of-the-art *Google Translation API*.
2 We denote the parallel sentences as
<x Src,x Tgt> or <x Src,x Tgt>.
▶ **Step 2: obtaining the word alignment**
scores. Meanwhile, we employ the Awesome-align toolkit3to obtain the word alignment confidence M={mi↔j} between word pair wi ∈ x Src and wj ∈ x Tgt in parallel sentences.
▶ **Step 3: parsing UD trees for parallel sentences.** Then, we use the UD parsers in SRC and SRC languages respectively to parse the UD syntax trees for two parallel sentences, respectively. We adopt the UDPipe4as our UD parsers, which are trained separately on different UD annotated data5.
We denote the SRC UD tree as T
Src, and the pseudo TGT UD tree as T
Tgt. Note that the UD trees in all languages share the same dependency labels, Algorithm 1 Process of constructing code-mixed UD forests Input: T
SRC, T
TGT, M, threshold θ, empty forest F = Φ.
Output: Code-mixed UD forest F.
| SRC ̸= Φ) or (T TGT ̸= Φ) or (opt_nodes̸= Φ) do | | | |
|------------------------------------------------|---------------------------------------------------------------------------------------------------------------------------------------|------------------------------------------------------|----|
| 6: | if is_root then | | |
| 7: | wmerged = Merge(T SRC.ROOT, T TGT.ROOT) | ▷ merging from ROOT in T SRC and T | |
| 8: | wmerged.nextSRC = T SRC.ROOT.GetChildNodes() | | |
| 9: | wmerged.nextTGT = T TGT.ROOT.GetChildNodes() | | |
| 10: | F.wcur.SetChild(wmerged, 'root') | | |
| 11: | opt_nodes.enqueue(wmerged) | | |
| 12: | is_root = False | | |
| 14: | F.wcur = opt_nodes.dequeue() | | |
| 15: | aligned_pairs, nonaligned_nodes = AlignSearch(F.wcur.nextSRC , F.wcur.nextTGT , M) SRC TGT i , w j , arc ) ∈ aligned_pairs do SRC TGT | | |
| 17: | wmerged = Merge(w i | , w j | ) |
| 18: | wmerged.nextSRC = w SRC i .GetChildNodes() | | |
| 19: | wmerged.nextTGT = w j | .GetChildNodes() | |
| TGT | | | |
| 20: | F.wcur.SetChild(wmerged, arc) | | |
| 21: | opt_nodes.enqueue(wmerged) | | |
| 23: | for wi ∈ nonaligned_nodes do | | |
| 24: | F.wcur.SetChild(wi , wi .arc) | ▷ action 'Coping into forest' for non-aligned words. | |
29: **def Merge** (w SRC
a, w TGT
b) ▷ action '*Merging into forest*' for aligned words.
30: **return** w TGT
b▷ for two aligned word, returning the TGT-side word.
31: **def AlignSearch** (nodes_a, nodes_b, M) ▷ preparing the aligned word pairs in T
SRC and T
TGT.
32: aligned_pairs = []
33: for mi↔j ∈ M do 34: if mi↔j > θ **then**
35: aligned_pairs.Append(nodes_a[i], nodes_b[j], nodes_b[i].arc )
36: nodes_a.Remove(wi) 37: nodes_a.Remove(wj )
38: **end if** 39: **end for**
40: nonaligned_nodes = nodes_a.union(nodes_b) ▷ words with no salient alignments.
41: **return** aligned_pairs, nonaligned_nodes i.e., with the same (as much as possible) annotation standards. In Appendix §A we list the dependency labels which are the commonly occurred types.
▶ **Step 4: projecting and merging the labels**
of training data. For the training set, we also need to project the annotations (relational subjectobject pairs) of sentences in SRC languages to TGT
pseudo sentences. Note that this step is not needed for the testing set. The projection is based on the open source6, during which the word alignment scores at step-2 are used. We can denote the SRC
annotation as y, and the pseudo TGT label as y.
We then merge the annotation from both SRC and TGT viewpoints, into the code-mixed one Y , for later training use. Specifically, for the node that is kept in the final code-mixed forest, we will keep its labels; and for those nodes that are filtered, the annotations are replaced by their correspondences.
▶ **Step 5: merging the SRC and TGT UD**
trees into a code-mixed forest. Finally, based on the SRC UD tree and the TGT UD tree, we construct the code-mixed UD forest. We mainly perform breadth-first top-down traversal over each pair of nodes T
Src and T
Tgt, layer by layer. The traversal starts from their *ROOT* node. We first create a *ROOT* node as the initiation of the codemixed forest. We design two types of actions for the forest merging process:
- **Merging** current pair of nodes wi ∈ T Src from SRC tree and wj ∈ T Tgt from TGT tree into the forest F, if the current two nodes are confidently aligned at same dependency layer. We check the word alignment confidence mi↔j between the two nodes, and if the confidence is above a pre-defined threshold θ, i.e., mi↔j > θ, we treat them as confidently aligned.
- **Copying** current node from SRC tree T
Src or TGT tree T
Tgt into the forest F, once the node has no significant alignment in the opposite tree at this layer.
In Algorithm 1 we formulate in detail the process of code-mixed forest construction. Also, we note that when moving the nodes from two separate UD trees into the forest, the attached dependency labels are also copied. When two nodes are merged, we only choose the label of the TGT-side node. Finally, the resulting forest F looks like code-mixing, and is structurally compact.
▶ **Step 6: assembling code-mixed texts.** Also we need to synthesize a code-mixed text X based on the raw SRC text x Src and the pseudo TGT text x Tgt. The code-mixed text X will also be used as inputs together with the forest, into the forest encoder. We directly replace the SRC words with the TGT words that have been determined significantly aligned at Step-5.
6https://github.com/scofield7419/XSRL-ACL
## 5 Xre With Code-Mixed Ud Forest
Along with the UD forest F
Src, we also assemble the code-mixed sequential text XSrc from the SRC
and translated pseudo-TGT sentences (i.e., x Src and x Tgt), and the same for the TGT sentences XTgt. An XRE system, being trained with SRC-side annotated data (<XSrc, F
Src>, Y
Src), needs to determine the label Y
Tgt of relational pair e r↷
seo given a TGT
sentence and UD forest (<XTgt, F
Tgt>).
The XRE system takes as input X={wi}n and F.
We use the multilingual language model (MLM)
for representing the input code-mixed sentence X:
H = {h1, *· · ·* , hn} = MLM(X), (1)
where X is the code-mixed sentential text. We then formulate the code-mixed forest F as a graph, G=<*E, V* >, where E={ei,j}n×n is the edge between word pair (i.e., initiated with ei,j=0/1, meaning dis-/connecting), V ={wi}n are the words. We main the node embeddings ri for each node vi. We adopt the GAT model (Velickovic et al., 2018) for the backbone forest encoding:
$$\begin{array}{c}{{=\mathrm{Softmax}(\mathrm{GelLU}(U^{T}[W_{1}r_{i};W_{2}r_{j}]))\,,}}\\ {{\quad}}\\ {{u_{i}=\sigma(\sum_{j}\rho_{i,j}W_{3}r_{j}^{1})\,,}}\end{array}$$
where W3/4/5 and U are all trainable parameters.
σ is the sigmoid function. GeLU is a Gaussian error linear activation function. Note that the firstlayer representations of riis initialized with hi.
H and U are then concatenated as the resulting feature representation:
Hˆ = H ⊕ U . (4)
XRE aims to determine the semantic relation labels between two given mention entities. For example, given a sentence 'John Smith works at Google', RE should identify that there is a relationship of "works at" between the entities "John Smith" and "Google". Our XRE model needs to predict the relation label y. We adopt the biaffine decoder (Dozat and Manning, 2017) to make prediction:
y = Softmax(h T
s · W1 · ho + W2 · Pool(Hˆ )). (5)
Here both hs and ho are given.
## 6 Experiments
6.1 Setups We consider the ACE05 (hristopher Walker et al.,
2006) dataset, which includes English (EN), Chinese (ZH) and Arabic (AR). We give the data statistics in Table 1 The multilingual BERT is used.7 7https://huggingface.co, base, cased version
| Language | Train | Dev | Test |
|------------|---------|-------|--------|
| EN | 479 | 60 | 60 |
| ZH | 507 | 63 | 63 |
| AR | 323 | 40 | 40 |
We use two-layer GAT for forest encoding, with a 768-d hidden size. We mainly consider the transfer from EN to one other language. Following most cross-lingual works (Fei et al., 2020b; Ahmad et al.,
2021), we train the XRE model with fixed 300 iterations without early-stopping. We make comparisons between three setups: 1) using only raw SRC training data with the model transfer, 2) using only the pseudo TGT (via annotation projection)
for training, and 3) using both the above SRC and TGT data. Each setting uses both the texts and UD
tree (or forest) features. The baseline uses the same GAT model for syntax encoding, marked as *SynBaseline*. For setup 1)&2) we also test the transfer with only text inputs, removing the syntax features, marked as *TxtBaseline*. Besides, for setup 1) we cite current SoTA performances as references. We use F1 to measure the RE performance, following Ahmad et al. (2021). All experiments are undergone five times and the average value is reported.
## 6.2 Data Inspection
We also show in Table 3 the differences in average sequential and syntactic (shortest dependency path) distances between the subjects and objects of the relational triplets. As seen, the syntactic distances between subject-object pairs are clearly shortened in the view of syntactic dependency trees, which indicates the imperative to incorporate the tree structure features. However, the syntactic distances between different languages vary, i.e., more complex languages have longer syntactic distances.
Such discrepancy reflects the necessity of employing our proposed UD debiasing methods to bridge the gap.
## 6.3 Main Results
From Table 2, we can see that UD features offer exceptional boosts (M1 vs. M2, M4 vs. M5). And annotation projection methods outperform model transfer ones (i.e., M1&M2&M3 vs. M4&M5) by offering direct TGT-side features. Interestingly, in both two transfer paradigms, the improvements from UD become weak on the language pairs with
![6_image_0.png](6_image_0.png)
bigger divergences. For example, the improvement on EN→DE outweighs the ones on EN→ZH.
Furthermore, using our proposed code-mixed syntax forests is significantly better than using standalone SRC or TGT (or the simple combination)
UD features (M7 vs. M2&M5&M6) on all transfers with big margins. For example, our system outperforms SoTA UD-based systems with averaged +4.8%(=67.2-62.4) F1. This evidently verifies the necessity to create the code-mixed forests, i.e., bringing unbiased UD features for transfer.
Also, we find that the more the difference between the two languages, the bigger the improvements from forests. The ablation of code-mixed texts also shows the contribution of the sequential textual features, which indirectly demonstrates the larger efficacy of the structural code-mixed UD forests.
## 6.4 Probing Unbiasedness Of Code-Mixed Ud Forest
Fig. 4 plots the change of the syntax distances of RE pairs during the transfer with different syntax trees. We see that the use of SRC UD trees shows clear bias (with larger inclination angles) during the transfer, while the use of TGT UD trees and codemixed forests comes with less change of syntax distances. Also, we can see from the figure that the inference paths between objects and subjects of RE tasks are clearly shortened with the forests (in orange color), compared to the uses of SRC/TGT
UD trees.
## 6.5 Change During Code-Mixed Ud Forest Merge
Here we make statistics of how many words are merged and kept during the UD tree merging, respectively. The statistics are shown in Table 4. We can see that the distance between EN-ZH is shorter
| SRC | TGT | EN→ZH | EN→AR | AVG | | |
|--------------------------------------------------------------|---------------------|---------|---------|-------|-------------|-------------|
| ▶ Model Transfer M1 TxtBaseline | ✓ | 55.8 | 63.8 | 59.8 | | |
| M2 | SynBaseline(+T ) | ✓ | 59.2 | 65.2 | 62.2 (+2.4) | |
| M3 | SoTA XRE | ✓ | 58.0 | 66.8 | 62.4 | |
| ▶ Annotation Projection M4 TxtBaseline | ✓ | 58.3 | 66.2 | 62.3 | | |
| M5 | SynBaseline(+T ) | ✓ | 61.4 | 67.4 | 64.4 (+2.1) | |
| ▶ Model Transfer + Annotation Projection M6 SynBaseline(+T ) | ✓ | ✓ | 57.8 | 64.0 | 60.9 | |
| M7 (Ours) | SynBaseline(+F) | ✓ | ✓ | 63.7 | 70.7 | 67.2 (+6.3) |
| M8 | w/o code-mixed text | ✓ | ✓ | 61.6 | 68.2 | 64.9 (-2.3) |
| EN | ZH | AR |
|------------------------------|------|------|
| •Sequential Distance 4.8 3.9 | 25.8 | |
| •Syntactic Distance 2.2 2.6 | 5.1 | |
than that between EN-AR. For example, the length of code-mixed EN-ZH UD forests (sentences) is 31.63, while for EN-AR the length is 40.44. Also, EN-ZH UD forests have a higher to 21.4% merging rate, while EN-AR UD forests have 16.6% merging rate. This demonstrates that the more divergences of languages, the lower the merging rate of the code-mixed forest.
## 6.6 Impacts Of Θ **On Controlling The Quality Of** Merged Forest
In §4 of step-5, we describe that we use a threshold θ to control the aligning during the UD tree merging. Intuitively, the large the threshold θ, the lower the alignment rate. When θ → 0, most of the SRC and TGT nodes in two parallel UD trees can find their counterparts but the alignments are most likely to be wrong, thus hurting the quality of the resulting code-mixed UD forests. When θ → 1, none of the SRC and TGT nodes in two parallel UD trees can be aligned, and both two UD trees are copied and co-existed in the resulting code-mixed UD forests. In such case, the integration of such forests is equivalent to the annotation projection methods where we directly use both the raw SRC
![7_image_0.png](7_image_0.png)
UD feature and the translated pseudo TGT UD tree feature. In Fig. 5 we now study the influences of using different code-mixed forest features generated with different merging rates (θ). We see that with a threshold of θ=0.5, the performances are consistently the best.
## 6.7 Performances On Different Types Of Sentence
In Table 5 we show the results under different types of sentences. We directly select 500 short sentences (with length < 12) as simple sentences; and select 500 lengthy sentences (with length > 35)
as complex sentences. As can be seen, with the code-mixed forest features, the system shows very notable improvements in complex sentences. For example, on the EN→ZH we obtain 15.9(=57.241.3)% F1 improvement, and on the EN→AR the boost increases strikingly to 25.2(=67.3-42.1)% F1.
However, such enhancements are not very significant in handling simple sentences. This indicates that the code-mixed UD forest features can espe-
| Words per Sentence | | | | | |
|----------------------|---------------|-------|------------|---------------|-------------|
| Before Merging | After Merging | | | | |
| SRC (EN) | TGT | Sum | Code-mixed | Merged (Rate) | |
| EN-ZH | 15.32 | 24.91 | 40.23 | 31.63 | 8.6 (21.4%) |
| EN-AR | 15.32 | 33.12 | 48.44 | 40.44 | 8.0 (16.6%) |
| EN→ZH | EN→AR | |
|-----------------------------------------|---------|------|
| - Simple Sentence SynBaseline(+T SRC ) | 66.1 | 78.2 |
| T GT ) | 68.7 | 80.6 |
| SynBaseline(+T SynBaseline(+F) | 71.3 | 82.4 |
| - Complex Sentence SynBaseline(+T SRC ) | 39.5 | 37.4 |
| SynBaseline(+T T GT ) | 41.3 | 42.1 |
| SynBaseline(+F) | 57.2 | 67.3 |
Table 4: The statistics of the words before and after constructing code-mixed data.
cially enhance the effectiveness on the hard case, i.e., the transfer between those pairs with greater divergences will receive stronger enhancements from our methods.
## 7 Conclusion And Future Work
Universal dependencies (UD) have been served as effective language-consistent syntactic features for cross-lingual relation extraction (XRE). In this work, we reveal the intrinsic language discrepancies with respect to the UD structural annotations, which limit the utility of the UD features. We enhance the efficacy of UD features for an unbiased UD-based transfer, by constructing code-mixed UD forests from both the source and target UD
trees. Experimental results demonstrate that the UD forests effectively debias the syntactic disparity in the UD-based XRE transfer, especially for those language pairs with larger gaps.
Leveraging the syntactic dependency features is a long-standing practice for strengthening the performance of RE tasks. In this work, we propose a novel type of syntactic feature, code-mixed UD forests, for cross-lingual relation extraction.
We note that this feature can be applied broadly to other cross-lingual structured information extraction tasks that share the same task definition besides RE, such as event detection (ED) (Halpin and Moore, 2006) and semantic role labeling (SRL)
(Gildea and Jurafsky, 2000). Besides, how to further increase the utility of the UD forests with a better modeling method is a promising research direction, i.e., filtering the noisy structures in the UD forests.
## Acknowledgments
This research is supported by the National Natural Science Foundation of China (No. 62176180), and also the Sea-NExT Joint Lab.
## Limitations
Although showing great prominence, our proposed method has the following limitations. First of all, our method relies on the availability of annotated UD trees of both the source and target languages, as we need to use the annotations to parse the syntax trees for our own sentences. Fortunately, UD
project covers over 100 languages, where most of the languages, even the minor ones, will have the UD resources. At the same time, our method will be influenced by the quality of UD parsers. Secondly, our method also uses the external translation systems to produce the pseudo parallel sentences, where our method may largely subject to the quality of the translators. Again luckily, current neural machine translation systems have been well developed and established, i.e., Google Translation.
Only when handling very scare languages where the current translation systems fail to give satisfactory performances, our method will fail.
## Ethics Statement
In this work, we construct a type of code-mixed UD forest based on the existing UD resources. We note that all the data construction has been accomplished automatically, and we have not created any new annotations with additional human labor. Specifically, we use the UD v2.10 resource, which is a collection of linguistic data and tools that are open-sourced.
Each of treebanks of UD has its own license terms, including the *CC BY-SA 4.0*8and *CC BY-NC-SA*
8http://creativecommons.org/licenses/by-sa/4.
0/
2.5-4.09as well as *GNU GPL 3.0*10. Our use of UD
treebanks comply with all these license terms is at non-commercial purpose. The software tools (i.e., UDPipe parsers) are provided under *GNU GPL V2*.
Our use of UDPipe tools complies with the term.
## References
Wasi Uddin Ahmad, Nanyun Peng, and Kai-Wei Chang.
2021. GATE: graph attention transformer encoder for cross-lingual relation and event extraction. In Proceedings of the AAAI Conference on Artificial Intelligence, pages 12462–12470.
Suman Banerjee, Nikita Moghe, Siddhartha Arora, and Mitesh M. Khapra. 2018. A dataset for building code-mixed goal oriented conversation systems. In Proceedings of the 27th International Conference on Computational Linguistics, pages 3766–3780.
Anders Björkelund, Love Hafdell, and Pierre Nugues.
2009. Multilingual semantic role labeling. In *Proceedings of the CoNLL*, pages 43–48.
Duy-Cat Can, Hoang-Quynh Le, Quang-Thuy Ha, and Nigel Collier. 2019. A richer-but-smarter shortest dependency path with attentive augmentation for relation extraction. In *Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies*, pages 2902–2912.
Silviu Cucerzan and David Yarowsky. 1999. Language independent named entity recognition combining morphological and contextual evidence. In *Proceedings of the Joint SIGDAT Conference on Empirical* Methods in Natural Language Processing and Very Large Corpora.
Aron Culotta and Jeffrey Sorensen. 2004. Dependency tree kernels for relation extraction. In *Proceedings* of the 42nd Annual Meeting of the Association for Computational Linguistics, pages 423–429.
Angel Daza and Anette Frank. 2019. Translate and label! an encoder-decoder approach for cross-lingual semantic role labeling. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, pages 603–615.
Marie-Catherine de Marneffe, Christopher D. Manning, Joakim Nivre, and Daniel Zeman. 2021. Universal dependencies. *Comput. Linguistics*, 47(2):255–308.
Timothy Dozat and Christopher D. Manning. 2017.
Deep biaffine attention for neural dependency parsing. In *Proceedings of the 5th International Conference on Learning Representations*.
Hao Fei, Fei Li, Bobo Li, and Donghong Ji. 2021.
Encoder-decoder based unified semantic role labeling with label-aware syntax. In Proceedings of the AAAI Conference on Artificial Intelligence, pages 12794–12802.
Hao Fei, Shengqiong Wu, Jingye Li, Bobo Li, Fei Li, Libo Qin, Meishan Zhang, Min Zhang, and Tat-Seng Chua. 2022. Lasuie: Unifying information extraction with latent adaptive structure-aware generative language model. In *Proceedings of the Advances* in Neural Information Processing Systems, NeurIPS
2022, pages 15460–15475.
Hao Fei, Meishan Zhang, and Donghong Ji. 2020a.
Cross-lingual semantic role labeling with highquality translated training corpus. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7014–7026.
Hao Fei, Meishan Zhang, Fei Li, and Donghong Ji.
2020b. Cross-lingual semantic role labeling with model transfer. *IEEE ACM Trans. Audio Speech* Lang. Process., 28:2427–2437.
Daniel Gildea and Daniel Jurafsky. 2000. Automatic labeling of semantic roles. In Proceedings of the Annual Meeting of the Association for Computational Linguistics, pages 512–520.
Harry Halpin and Johanna D. Moore. 2006. Event extraction in a plot advice agent. In Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics, pages 857–864.
hristopher Walker, Stephanie Strassel, Julie Medero, and Kazuaki Maeda. 2006. Ace 2005 multilingual training corpus. In *Proceedings of Philadelphia: Linguistic Data Consortium*.
Aditya Joshi, Ameya Prabhu, Manish Shrivastava, and Vasudeva Varma. 2016. Towards sub-word level compositions for sentiment analysis of Hindi-English code mixed text. In *Proceedings of the 26th International Conference on Computational Linguistics:*
Technical Papers, pages 2482–2491.
Seokhwan Kim, Minwoo Jeong, Jonghoon Lee, and Gary Geunbae Lee. 2010. A cross-lingual annotation projection approach for relation detection. In Proceedings of the 23rd International Conference on Computational Linguistics, pages 564–571.
Mikhail Kozhevnikov and Ivan Titov. 2013. Crosslingual transfer of semantic role labeling models. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 1190–
1200.
Igor Labutov and Hod Lipson. 2014. Generating codeswitched text for lexical learning. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, pages 562–571.
Chenwei Lou, Jun Gao, Changlong Yu, Wei Wang, Huan Zhao, Weiwei Tu, and Ruifeng Xu. 2022.
Translation-based implicit annotation projection for zero-shot cross-lingual event argument extraction. In Proceedings of the 45th International ACM SIGIR
Conference on Research and Development in Information Retrieval, pages 2076–2081.
Di Lu, Ananya Subburathinam, Heng Ji, Jonathan May, Shih-Fu Chang, Avi Sil, and Clare Voss. 2020. Crosslingual structure transfer for zero-resource event extraction. In Proceedings of the Twelfth Language Resources and Evaluation Conference, pages 1976–
1981.
Ryan McDonald, Joakim Nivre, Yvonne QuirmbachBrundage, Yoav Goldberg, Dipanjan Das, Kuzman Ganchev, Keith Hall, Slav Petrov, Hao Zhang, Oscar Täckström, Claudia Bedini, Núria Bertomeu Castelló, and Jungmee Lee. 2013. Universal dependency annotation for multilingual parsing. In Proceedings of the Annual Meeting of the Association for Computational Linguistics, pages 92–97.
Makoto Miwa and Mohit Bansal. 2016. End-to-end relation extraction using LSTMs on sequences and tree structures. In *Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics*,
pages 1105–1116.
Phoebe Mulcaire, Swabha Swayamdipta, and Noah A.
Smith. 2018. Polyglot semantic role labeling. In Proceedings of the Annual Meeting of the Association for Computational Linguistics, pages 667–672.
Jian Ni and Radu Florian. 2019. Neural cross-lingual relation extraction based on bilingual word embedding mapping. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, pages 399–409.
Sebastian Padó and Mirella Lapata. 2009. Cross-lingual annotation projection for semantic roles. *J. Artif.*
Intell. Res., 36:307–340.
Libo Qin, Minheng Ni, Yue Zhang, and Wanxiang Che.
2020. Cosda-ml: Multi-lingual code-switching data augmentation for zero-shot cross-lingual NLP. In Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence, pages 3853–
3860.
Bidisha Samanta, Niloy Ganguly, and Soumen Chakrabarti. 2019. Improved sentiment detection via label transfer from monolingual to synthetic codeswitched text. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3528–3537.
Ananya Subburathinam, Di Lu, Heng Ji, Jonathan May, Shih-Fu Chang, Avirup Sil, and Clare Voss. 2019.
Cross-lingual structure transfer for relation and event extraction. In Proceedings of the 2019 Conference
on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, pages 313–325.
Nasrin Taghizadeh and Heshaam Faili. 2021. Crosslingual adaptation using universal dependencies.
ACM Trans. Asian Low Resour. Lang. Inf. Process.,
20(4):65:1–65:23.
Nasrin Taghizadeh and Heshaam Faili. 2022. Crosslingual transfer learning for relation extraction using universal dependencies. *Comput. Speech Lang.*,
71:101265.
Petar Velickovic, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Liò, and Yoshua Bengio.
2018. Graph attention networks. In Proceedings of the International Conference on Learning Representations.
Shengqiong Wu, Hao Fei, Yafeng Ren, Donghong Ji, and Jingye Li. 2021. Learn from syntax: Improving pair-wise aspect and opinion terms extraction with rich syntactic knowledge. In *Proceedings of the* Thirtieth International Joint Conference on Artificial Intelligence, pages 3957–3963.
Min Xiao and Yuhong Guo. 2015. Annotation projection-based representation learning for crosslingual dependency parsing. In Proceedings of the Nineteenth Conference on Computational Natural Language Learning, pages 73–82.
Dmitry Zelenko, Chinatsu Aone, and Anthony Richardella. 2002. Kernel methods for relation extraction. In *Proceedings of the Conference on Empirical Methods in Natural Language Processing*, pages 71–78.
Meishan Zhang, Yue Zhang, and Guohong Fu. 2019.
Cross-lingual dependency parsing using code-mixed TreeBank. In Proceedings of the Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, pages 997–1006.
Yuhao Zhang, Peng Qi, and Christopher D. Manning.
2018. Graph convolution over pruned dependency trees improves relation extraction. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2205–2215.
Zhisong Zhang, Emma Strubell, and Eduard Hovy. 2021.
On the benefit of syntactic supervision for crosslingual transfer in semantic role labeling. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 6229–6246.
Ranran Zhen, Rui Wang, Guohong Fu, Chengguo Lv, and Meishan Zhang. 2021. Chinese opinion role labeling with corpus translation: A pivot study. In *Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing*, pages 10139–
10149.
## A The Universal Dependency Labels
In Table 6, we list the dependency labels which are the commonly occurred types. Please refer to Stanford dependency11 for more details about the dependency labels.
| Dependency Label | Description |
|--------------------|---------------------------|
| amod | adjectival modifier |
| advcl | adverbial clause modifier |
| advmod | adverb modifier |
| acomp | adjectival complement |
| auxpass | passive auxiliary |
| compound | compound |
| ccomp | clausal complement |
| cc | coordination |
| conj | conjunct |
| cop | copula |
| det | determiner |
| dep | dependent |
| dobj | direct object |
| mark | marker |
| nsubj | nominal subject |
| nmod | nominal modifier |
| neg | negation modifier |
| xcomp | open clausal complement |
Table 6: The universal dependency labels.
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
✓ A1. Did you describe the limitations of your work?
7
✓ A2. Did you discuss any potential risks of your work?
7
✓ A3. Do the abstract and introduction summarize the paper's main claims?
1
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✓ **Did You Use Or Create Scientific Artifacts?** 6
✓ B1. Did you cite the creators of artifacts you used?
6
✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
9&Appendix-B
✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? 9
✓ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
Appendix-A
✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
Appendix-A&B
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. 6&Appendix-B
## C ✓ **Did You Run Computational Experiments?** 6&Appendix-B
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
6&Appendix-B
The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance.
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? 6&Appendix-B
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
6&Appendix-B
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)? 6&Appendix-B
D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
Not applicable. Left blank.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? Not applicable. Left blank.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
Not applicable. Left blank.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
Not applicable. Left blank.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
Not applicable. Left blank. |
xu-cheng-2023-spontaneous | Spontaneous gestures encoded by hand positions improve language models: An Information-Theoretic motivated study | https://aclanthology.org/2023.findings-acl.600 | The multi-modality nature of human communication has been utilized to enhance the performance of language modeling-related tasks. Driven by the development of large-scale end-to-end learning techniques and the availability of multi-modal data, it becomes possible to represent non-verbal communication behaviors through joint-learning, and directly study their interaction with verbal communication. However, there is still gaps in existing studies to better address the underlying mechanism of how non-verbal expression contributes to the overall communication purpose. Therefore, we explore two questions using mixed-modal language models trained against monologue video data: first, whether incorporating gesture representations can improve the language model{'}s performance (perplexity); second, whether spontaneous gestures demonstrate entropy rate constancy (ERC), which is an empirical pattern found in most verbal language data that supports the rational communication assumption from Information Theory. We have positive and interesting findings for both questions: speakers indeed use spontaneous gestures to convey {``}meaningful{''} information that enhances verbal communication, which can be captured with a simple spatial encoding scheme. More importantly, gestures are produced and organized rationally in a similar way as words, which optimizes the communication efficiency. | # Spontaneous Gestures Encoded By Hand Positions Can Improve Language Models: An Information-Theoretic Motivated Study
Yang Xu Department of Computer Science San Diego State University [email protected]
## Abstract
The multi-modality nature of human communication has been utilized to enhance the performance of language modeling-related tasks.
Driven by the development of large-scale endto-end learning techniques and the availability of multi-modal data, it becomes possible to represent non-verbal communication behaviors through joint-learning, and directly study their interaction with verbal communication.
However, there are still gaps in existing studies to better address the underlying mechanism of how non-verbal expression contributes to the overall communication purpose. Therefore, we explore two questions using mixedmodal language models trained against monologue video data: first, whether incorporating gesture representations can improve the language model's performance (perplexity); second, whether spontaneous gestures demonstrate entropy rate constancy (ERC), which is an empirical pattern found in most verbal language data that supports the rational communication assumption from Information Theory. We have positive and interesting findings for both questions: speakers indeed use spontaneous gestures to convey "meaningful" information that enhances verbal communication, which can be captured with a simple spatial encoding scheme.
More importantly, gestures are produced and organized rationally in a similar way as words, which optimizes communication efficiency.
## 1 Introduction
Human communication is a multi-modal process where both verbal and non-verbal information are expressed simultaneously. This is true in various forms of communication, one-way (speech) or twoway (conversation). It has been revealed in empirical studies that speakers' expression in the visual modality, including gestures, body poses, eye contacts and other types of non-verbal behaviors, play critical roles in face-to-face communication, as they add subtle information that is hard to con-
Yang Cheng University of Southern California [email protected] vey in verbal language. It is becoming an emerging sub-area in computational linguistics. However, whether and to what degrees these sparse and random non-verbal signals can be treated as a formal communication channel that transmits "serious" information remains a seldom-validated question, especially with computational methods. We believe a key missing step is to explore whether the nonverbal information can be quantified.
The questions that are worth further investigation include (but are not limited to): How rich is the information contained in these non-verbal channels? What are their relationships to verbal information? Can we understand the meanings of different gestures, poses, and motions embedded in spontaneous language in a similar way to understanding word meanings? The goal of this study is to propose a simple but straightforward framework to approach the above questions, under the guidance of Information Theory. Some preliminary, yet prospective results are presented. The code and data for this study is published in this repository https://github.
com/innerfirexy/Life-lessons.
## 2 Related Work 2.1 Studies On Gestures In Communication
Early studies and theories The functions of gestures in communication and the connection to verbal language have been extensively studied in behavioral science, psychology and cognitive sciences. McNeill (1992) has developed the Growth Point theory, which can be conceptualized as a "snapshot" of an utterance at its beginning stage psychologically. McNeill (1992)'s theory classifies gestures into two categories, representative ones, which have clearer semantic meanings (e.g., depicting objects and describing locations), and non-representative ones, which refer to the repetitive movements that have little substan9409 tive meanings. McNeill et al. (2008) further put forward a more fine-grained classification scheme for gestures: iconic, metaphoric, *deictic*, and *beats*,
in which the iconic and metaphoric gestures are directly related to the concrete and abstract content in the verbal language. The psycholinguistics theories and studies indicate the feasibility of investigating the "meanings" of gestures with computational semantic approaches.
Lab-based experimental studies The effect of gestures has been broadly studied in laboratory-based behavioral experiments. Holler and Levinson (2019) study the facilitation from multiple layers of visual and vocal signals can add semantic and pragmatic information in faceto-face communication. Similarly, Macuch Silva et al. (2020) find visible gestures are more powerful form of communication than vocalization in dialogue object description tasks. In these studies, gestures from human subjects are usually *manually* coded by observing the hands' spatial positions and motions to characterize naturalistic and meaningful movements. Trujillo et al. (2019) takes a step forward and develops a protocol for automatically extracting kinematic features from video data, which can be applied to quantitative and qualitative analysis of gestures. Their work provides insight to the hands position-based encoding method for gestures (discussed in section 4.2).
## Computational Studies
More recently, the communicative functions of gestures have been studied in different settings from human-human to human-agent interaction interactions. Synthesized gestures are integrated into virtual characters and robots to facilitate the dialogue fluidity and user experiences (Kopp, 2017). In such systems, the content and form of co-speech gestures are determined from the semantic meanings of utterances being produced (Hartmann et al., 2006),
and/or from given communication goals and situations (Bergmann and Kopp, 2010). The success of these systems also indicates the possibility of understanding gestures in the wild by learning language models that include simple gestural features.
To summarize, the works reviewed above have paved the road for studying gestures in a more
"data-driven" style, that is, using data collected from more naturalistic contexts and more automatic methods for encoding gestures.
## 2.2 Multi-Modal Techniques In Machine Learning And Nlp Research
The recent advances in deep neural network-based machine learning techniques provide new methods to understand the non-verbal components of human communication. Many existing works primarily focus on using multi-modal features as clues for a variety of inference tasks, including video content understanding and summarization (Li et al., 2020; Bertasius et al., 2021), as well as more specific ones such as predicting the shared attention among speakers (Fan et al., 2018) and semantic-aware action segmentation (Gavrilyuk et al., 2018; Xu et al.,
2019). More recently, models that include multiple channels have been developed to characterize context-situated human interactions (Fan et al.,
2021). Advances in representation learning have enabled researchers to study theoretical questions with the tools of multi-modal language models.
Neural sequential models are used for predicting the shared attention among speakers (Fan et al., 2018) and semantic-aware action segmentation
(Gavrilyuk et al., 2018; Xu et al., 2019). More recently, models that include multiple channels have been developed to characterize visually embedded and context-situated language use (Fan et al., 2021; Li et al., 2019, 2021; He et al., 2022). Another line of work focuses on the predicting task in the opposite direction, that is, predicting/generating gesture motion from audio and language data (Ginosar et al., 2019; Yoon et al., 2020; Alexanderson et al., 2020). For short, advances in representation learning have enabled researchers to study theoretical questions in complex models.
## 2.3 The Theoretical Basis Of Informative Communication
To what degrees do non-verbal actions contribute to informative communication? Other than the empirical works reviewed in section 2.1, the same question can also be explored from the perspective of abstract theories. (Sandler, 2018) draws evidence from sign languages to show that the actions of hands and other body parts reflect the *compositional* nature of linguistic components (their methods are further discussed in section 4.2). Their work reveals that the use of bodily articulators maps the way a verbal language origins and evolves. Although the spontaneous gestures of our interest here are different from a strictly defined sign language, Sandler (2018)'s work inspires us that more similar properties can be found between verbal and non-verbal languages at a higher level of abstraction. Information Theory (Shannon, 1948) is the next lens that we use.
Information Theory is broadly applied as the theoretical background for the probabilistic models of language. It also provides philosophical explanations for a broad spectrum of linguistic phenomena. One interesting example is the assumption/principle of *entropy rate constancy* (ERC). Under this assumption, human communication in any form (written, spoken, etc.) should optimize the rate of information transmission rate by keeping the overall entropy rate constant.
In natural language, *entropy* refers to the predictability of words (tokens, syllables) estimated with probabilistic language models. Genzel and Charniak (2002, 2003) first formulated a method to examine ERC for written language by decomposing the entropy term into *local* and *global* entropy:
## H(S|Context) = H(S|L) − I(S, C|L) (1)
in which s can be any symbol whose probability can be estimated, such as a word, punctuation, or sentence. C and L refer to the global and local contexts for s, among which C is purely conceptual and only L can be operationally defined. By ERC, the left term in eq. (1) should remain invariant against the position of s. It results in an expectation that the first term on the right H(s|L)
should *increase* with the position of s, because the second term I(*s, C*|L), i.e., the mutual information between s and itself global context should always decrease, which is confirmed in Genzel and Charniak (2003)'s work. Xu and Reitter (2016, 2018) also confirmed the pattern in spoken language, relating it to the success of task-oriented dialogues
(Xu and Reitter, 2017).
The term H(s|L) can be estimated with various methods. Genzel and Charniak (2002, 2003) used the average negative log-probability of all n-grams in a sentence to estimate H(s|L), and the probabilities are returned from an n-gram language model. Some more recent works have used transformerbased neural language models to examine ERC
in dialogue (Giulianelli et al., 2021, 2022) and in broader data modalities with various operationalizations (Meister et al., 2021).
Now, the goal of this study is to extend the application scope of ERC to the non-verbal realm.
More specifically, if the s in eq. (1) represents any symbol that carries information, for example, a gesture or pose, then the same *increase* pattern should be observed within a sequence of gestures. ERC
can be interpreted as a "rational" strategy for the information sender (speaker) because it requires less predictable content (higher local entropy) to occur at a later position within the message, which maximizes the likelihood for the receiver (listener)
to successfully decode information with the least effort. The question explored here is whether we
"speak" rationally by gestures.
## 3 Questions And Hypotheses
We examine two hypotheses in this study:
Hypothesis 1: Incorporating non-verbal representations as input will improve the performance of language modeling tasks. To test Hypothesis 1, we extract non-verbal representations using the output from pose estimation, and then compose discrete tokens to represent the non-verbal information. The non-verbal tokens are inserted into word sequences and form a hybrid type of input data for training language models. The language models are modified to take non-verbal and verbal input sequences simultaneously and compute a fused internal representation. We expect the inclusion of non-verbal information will increase the performance of language models measured by perplexity. Hypothesis 2: Non-verbal communication conforms to the principle of Entropy Rate Constancy.
To test Hypothesis 2, we approximate the local entropy (H(s|L)) of non-verbal "tokens" using the perplexity scores obtained from neural sequential models, and correlate it with the utterances' relative positions within the monologue data. If we can find that H(s|L) increases with utterance position, then it supports the hypothesis.
## 4 Methods 4.1 Data Collection And Processing
The video data used are collected from 4 YouTube channels, i.e., 4 distinct speakers. There are 1 female and 3 male speakers, and the spoken language is English. All the videos are carefully selected based on the standards that each video must contain only one speaker who faces in front of the camera, and whose hands must be visible. The automatic generated captions in .vtt format are obtained for each video.
The pre-processing step is to extract the fullbody landmark points of the speaker, in preparation for the next gesture representation step. For this task, we use the BlazePose (Bazarevsky et al.,
2020) model, which is a lightweight convolutional neural network-based pose estimator provided in MediaPipe1. It outputs the (*x, y*) coordinates of 33 pose landmarks that characterize the key points of the body pose, including {nose, left-eye, *. . .* , }
(see (Xu et al., 2020) for full description). Here each coordinate (*x, y*) is a pair of fraction values within [0, 1] describing the key point's relative position on a frame, whose zero point is at the upper left corner. In fact, the pose estimator returns a 3-D
coordinate (*x, y, z*) for each point, where the third dimension z is the depth. We discard this z component based on our observation that most speakers do not show hand movement in that direction.
## 4.2 **Encode Gestures Based On Hands' Positions**
The next step is to obtain *representation* for gestures so that they can be studied using language models in a similar way as word embeddings. After having surveyed extensively on previous studies about methods of encoding gestures, we decide to develop an encoding scheme that categorizes gestures into discrete *token* based on the positions of hands, which are inspired by the work of (Trujillo et al., 2019; Sandler, 2018). To briefly summarize their work, Trujillo et al. (2019) measure the vertical amplitude feature of the right dominant hand in relation to a participant's body (upper-left of fig. 1);
Sandler (2018) use the relative positions of dominant and non-dominant hands between torso and face as the evidence for the hierarchical organization of body language (lower-left of fig. 1).
The workflow of our method is in three steps.
The first two steps are to identify the **focus area**
of the speaker's upper body, which a square area whose size *almost* equals the height of the upper body. We come up with this empirical setup based on the observation that this square area covers the vast majority of possible hand positions in our data.
The third step is to encode the gesture based the relative positions of hands within the focus area.
1) Compute the horizontal center of the body xcenter by averaging the x coordinates of *nose*,
left & right *shoulders*, and left & right *hips*.
2) Find the vertical boundaries of the body area.
First, compute the vertical distance between the nose and the mid-point of two eyes, δ =
1https://google.github.io/mediapipe/
|ynose − yeyes|. Then the top bound (forehead) is calculated by: ymin = yeyes − 2δ. This is according to the common knowledge about proportions of the human head (Artyfactory, 2022).
The bottom bound ymax is the mean y coordinates of both *hips* because the speakers are in a sitting pose and only their upper bodies are visible. Lastly, obtain the size of focus area by ymax − ymin.
3) Divide the focus area into 3 × 3 regions, i.e.,
nine regions with indices {1, 2*, . . . ,* 9}. Index each hand with an integer based on which region it is in, and then encode the gesture into an integer number, using the combination of both hands' indices. The encoding formula is:
## G(L, R) = (L − 1) · 3 2 + R (2)
in which L and R are the region index for the left and right hand, respectively. This formula maps any combination of (*L, R*) to a distinct integer number g, which we call **gesture token**.
As shown in the example of fig. 1, the speaker's left and right hands fall into region 9 and 8, so the gesture label is <72>. Because there are 9 possible positions for each hand, the total number of gesture tokens is 9 × 9 = 81. For the convenience of the modeling step later, we use one integer index (instead of a string connected by hyphen) to denote each of these 81 gestures: <1>,
<2>, ..., <81>. The pseudo code is presented in appendix A.1. Some notes: why not string but integer. We understand that encoding (*L, R*) into an integer number is not as straight-forward to interpret the gesture as another method of simply representing it with a string, such as "L-R" to indicate left hand in region L and right hand in region R. But using an integer index has the advantage that the gesture tokens can be directly supplied to the language models, just like word indices.
## 4.3 Prepare Gesture Sequences
Having gestures encoded, we prepare the gesture sequences using the time stamped text transcript for each video. We use the automatically generated text transcript in .vtt format, which contains the
<START> and <END> time stamps for each word
(token) in the subtitle. See the following example:
<00:00:00.510><c> let's</c>
<00:00:00.780><c> talk</c>
<00:00:01.020><c> about</c>
![4_image_0.png](4_image_0.png)
in which each word is annotated by a pair of
<c></c> tag, and the <START> time stamp is appended to the head. We treat the start time for one word as the ending time for the previous word. In this example, the token *let's* elapses from 0.780 to 1.020 (seconds). Multiplying the time stamps with the frame rate of 24 (FPS), which means the frame range is from the 19th to 24th. Then, for each frame within this range, we extract a gesture token using the method described in Section 4.2, resulting in a sequence of gesture tokens, {g19, g20*, . . . , g*24}.
This sequence represents a continuous change of gestures during the articulation of the word, which in most cases, consists of identical tokens. Thus, we select the majority token g m within the sequence as the final representation.
Applying the above process to an utterance consisting of N words, {w1, w2*, . . . , w*N }, we can obtain N majority gesture tokens, {g1, g2*, . . . , g*N }.
Despite the down sampling effect of using majority sampling, there is still a large amount of repetition in the resulted gesture sequence, which could cause sparsity issues for the modeling tasks. For instance, in the first row of table 1, the gesture token is the same <24> for the first 6 tokens, which means that the speaker did not move his/her hands during that period of time. We deal with this issue by "compressing" the repeated gesture tokens. For the same example in table 1, merging the 6 repeats of <49> and 2 repeats of <76> results in a compressed gesture sequence, {<49>, <76>}, which indicates that the speaker has made two distinct gestures during the utterance. Throughout the rest of the paper, we call the original gesture sequence that come with repeats the raw sequence, and the one with repeats merged the *compressed* sequence.
For each raw gesture sequence of length N, its compressed version {gˆ1, gˆ2*, . . . ,* gˆN′} usually has smaller length N′ ≤ N.
## 4.4 Incorporate Gesture Inputs To Lms
We implement two neural network-based models for the language modeling tasks, using LSTM
(Hochreiter and Schmidhuber, 1997) and Transformer (Vaswani et al., 2017) encoders. The models are tailored for handling two types of input:
single-modal (words or gestures alone) and mixedmodal (words + gestures).
## Single-Modal Lm Task
The single-modal model takes as input a sequence of either word (w) or gesture (median g or compressed gˆ) tokens and converts them to the embedding space. Then the token embeddings are fed to the LSTM/Transformer encoders to compute a dense representation for tokens at each time step of
| Word tokens | Raw gesture tokens {g} | Compressed gesture sequence {gˆ} | | | |
|-----------------------|--------------------------|------------------------------------|--------------|--------------|---------------|
| going to give you | <49> | <49> | <49> | <49> | <49> |
| a flatter look glossy | <49> | <76> | <76> (N = 8) | <49> | <76> (N′ = 2) |
| now this is really | <44> | <80> | <71> | <71> | <44> |
| your preference | <44> (N = 6) | <44> | <80> | <71> | <44> (N′ = 4) |
| I think most of us | <79> | <79> | <79> | <79> | <79> |
| can get on board | <79> | <79> | <79> | <79> (N = 9) | <79> (N′ = 1) |
Table 1: Examples of gesture sequences. Integers wrapped by "<>" are gesture tokens.
the sequence. Finally, the dense representation at the current time step t is used to predict the token at the next time step t + 1 using a softmax output.
The model architecture is shown in fig. 2.
The learning object here is the same as a typical sequential language modeling task, i.e., to minimize the negative log probability:
$$N L L=-\sum_{k=1}^{K}\log P(t_{k}|t_{1},t_{2},\ldots,t_{k-1})\quad\quad(3)$$
in which t1*, . . . , t*k−1 is all the tokens (gesture or word) before tk within the same utterance. We directly use this NLL value as the estimated local entropy, i.e., H(g|L) ≜ NLL, which is the target variable of our interest. Detailed model hyperparameters and training procedures are included in appendix A.2.
## Mixed-Modal Lm Task
The mixed-modal model takes the word sequence Sw(u) = {wi} and gesture sequence Sg(u) =
{gi} of the same utterance u simultaneously as input. A pair of sequences, Sw (words) and Sg (gestures) are the input, which is then fed into a modality fusion module, where the embedding representation for words and gestures at each time step, i.e.,
wi and gi, are fused by sum, *concat*, or a *bilinear* fusion component. Finally, the resulting mixed embeddings are encoded by the LSTM/Transformer encoder for the next-word prediction task. The purpose of this model is to verify Hypothesis 1, for which we expect the perplexity scores of a mixed-modal model to be lower than that of a single-modal one. It is also our interest to explore the optimal modality fusion method. The model's architecture is shown in fig. 2b. Detailed hyperparameters will be presented in the Appendix.
## 5 Results 5.1 Statistics
62 videos of a total length of 10 hours and 39 minutes are collected. The average length of each video is 723.7 seconds (SD = 438.1). The data and preprocessing scripts will be open-sourced. 17.9K
lines of subtitles consisting of 121.5K words are collected. We have extracted 81 distinct gesture tokens, whose total number is 121.5K in the raw sequence data (equals the total number of words).
Within the compressed sequence data, the total number of gesture tokens is reduced to 26.12 K.
The top five most frequent gesture tokens (according to the raw, uncompressed data) are <79>,
<71>, <70>, <80> and <76>. Their frequency counts, proportions, and the average entropy values are shown in section 5.1. It can be seen that <79>
is the dominant gesture token, where the speaker's right hand falls in region 7 and left hand in region 9. The entropy value increases as the frequency rank drops, which roughly follows the Zipf's law
(see the frequency vs. rank plots in fig. 3). Because Zipf's law is a common distribution for word tokens (Zipf, 2013; Piantadosi, 2014), it is a side evidence showing that gestures encode semantic information in a similar way as words. A detailed analysis of the gestures' positional and semantic meanings is provided in section 5.4.
Token Freq. Prop. **Entropy**
<79> 42367 0.349 2.97 <71> 20540 0.169 6.06 <70> 20354 0.167 6.25 <80> 9264 0.076 13.99 <76> 2762 0.023 51.58
![6_image_0.png](6_image_0.png)
![6_image_1.png](6_image_1.png)
## 5.2 **Examining Hypothesis 1: Mixed Vs. Single** Modal Comparison
The plots of validation cross-entropy loss against training epochs are shown in fig. 4. We use the prefixes s- and m- to indicate the **single**-modal and **mixed**-modal models, respectively, that is, smodels take pure word sequences as input, while m- models take word+gesture sequences as input. It can be clearly seen that the m-LSTM has lower validation loss than s-LSTM, and same trend is found between m-Transformer and s-Transformer.
It supports *Hypothesis 1*: gestures indeed contain useful information that can improve the language model's performance.
Note that an exponential conversion of the crossentropy loss (i.e., the NLL in eq. (3)) leads to another quantity *perplexity*, which is more commonly used to evaluate the performance of language models. The Transformer-based models have overall lower perplexity than LSTM-based ones, which is expected as a Transformer encoder has more parameters to facilitate the sequence prediction task. But meanwhile, the validation loss for training Transformer models does not decrease as significantly
(see the less smooth curves in fig. 4b) as LSTM
models, which probably indicates some overfitting issue. This can be fixed by collecting more training data.
We also compare three different feature fusion method in training the m-LSTM/Transformer models, and found that sum and *concat* have better performance (lower loss) in language modeling tasks.
The corresponding validation losses for three feature fusion methods, sum, *concat*, and *bilinear*, are shown in fig. 5. It can be seen that sum and *concat* result in a significantly lower loss for m-LSTM, but
![7_image_1.png](7_image_1.png)
the difference is not that observable in m- Transformer because in the latter loss shortly converges after training starts.
![7_image_2.png](7_image_2.png)
## 5.3 Examine Hypothesis 2: Local Entropy Increases With Utterance Position
To examine *Hypothesis 2*, we plot the local entropy of each gesture sequence (median and compressed, respectively) against the corresponding utterance's position in fig. 6, which shows a visible increasing trend. We also use linear models to verify the correlations between local entropy and utterance position, that is, local entropy as dependent variable and utterance position as predictor (no random effect is considered due to limited data size). It is confirmed that utterance position is a significant predictor of local entropy with positive β coefficients. For raw gestures, the *beta*s are smaller: βLSTM = 1.6 × 10−3(*p < .*05),
βTrm = 2.3 × 10−3(*p < .*01); for compressed gestures: βLSTM = 0.097, βTrm = 0.093 (*p < .*001).
Therefore, the increase of local entropy is statistically significant. It supports our hypothesis.
![7_image_0.png](7_image_0.png)
## 5.4 Analysis Of Typical Gestures
We examine the top five frequent gesture tokens <79>, <71> <70>, <80> and <76>, and show some selected screenshots in fig. 7 (See appendix A.3 for more examples). For <79>, <70>
and <80>, the positions of both hands are at the mid-lower position in front of the body. Gesture
<79> has two hands evenly distant from the center, while <70> captures a movement to the right and
<80> to the left. Gesture <76> has the right hand at the same height as the speaker's neck and the left hand hanging down, which is a typical onehand gesture in conversation. One technical detail is that in most screenshots of <76> the left hands are invisible, but the pose estimation algorithm can still infer their positions with accuracies above 95%
(see the report from Mediapipe), which is also why they are included in our analysis. In general, the selected four gestures can represent commonly seen patterns in daily communication.
Based on the results from section 5.2 that including gesture features can improve the performance of language models, we conjecture that there could exist a correlation between gestures and certain semantic representations, i.e., a speaker may use certain type of gestures to convey certain meanings. We verify this guess by examining the embedding vectors of word tokens that colocate with three selected gestures: <70>, <71>,
and <80>. Two other frequent gestures, <79>
and <76> are excluded from the analysis because:
<79> is overwhelmingly frequent, which could result in in-balanced samples across gestures; <76>
is scarcely distributed, which makes it difficult to find sentences solely containing it. Next, we pick sentences that contain one distinct gesture, and then
![8_image_0.png](8_image_0.png)
obtain the corresponding sentence vectors from a pre-trained BERT model. The last hidden layer of 768-d for each word is used to compute the mean sentence vector.
## Gesture <70> <71> <80> <70> **.291 (.007)** .298 (.007) .298 (.008) <71> .298 (.007) .304 (.009) .305 (.008) <80> .298 (.008) .305 (.008) .305 (.008)
We calculate the inner-group pair-wise cosine distances for each gesture, and the outer-group pairwise distances between all gestures. From the results shown in table 3, we can see that for gesture
<70>, its inner-group distance (.291) is smaller than the outer-group ones (.298 and .298), with which t-tests yield *p < .*001 results. It suggests that its corresponding sentences are distributed in a semantic sub-space farther away from others, and
<70> is probably a gesture that co-occurs with some particular meanings. This needs to be further examined in future studies with more data.
To sum, we found preliminary positive evidence for associating gestures with distinct semantic meanings. However, the analysis above is limited in following aspects: First, the data come from a limited population, which means the findings about gesture semantics may lack generality. Second, pretrained embeddings are used instead of fine-tuned ones, which can result in inaccurate description of the semantic space. We believe these limits can be
## 6 Conclusions
Our main conclusions are two-fold: First, incorporating gestural features will significantly improve the performance of language modeling tasks, even when gestures are represented with a simplistic method. Second, the way gestures are used as a complementary non-verbal communication sidechannel follows the principle of entropy rate constancy (ERC) in Information Theory. It means that the information encoded in hand gestures, albeit subtle, is actually organized in a *rational* way that enhances the decoding/understanding of information from a receiver's perspective. This is the first work done, to the best of our knowledge, to extend the scope of ERC to non-verbal communication.
The conclusions are based on empirical results from multi-modal language models trained on monologue speech videos with gesture information represented by discrete tokens. There are two explanations for what causes the observed pattern of increasing entropy: First, more rare gestures
(higher entropy) near the later stage of communication; Second, the entropy for the same gesture also increases during the communication. While the latter indicates a more sophisticated and interesting theory about gesture usage, both explanations require further investigation.
This work is exploratory, but the evidence is promising, as only a small dataset is used, and a simplistic gesture representation method is applied.
For future work, we plan to work with a larger and more diverse dataset with a higher variety in genres
(public speech, etc.) and examine more advanced representation methods, such as continuous embedding and clustering. Another direction to pursue is to interpret the semantic meanings of gestures and other non-verbal features by examining their semantic distance from utterances in vector space.
More specifically, non-parametric clustering algorithms can be useful to identify distinct dynamic actions, which provides a different way to extract non-verbal representations.
## Acknowledgements
This work is supported by National Science Foundation of the United States (CRII-HCC: 2105192).
We sincerely thank all the reviewers for their efforts in pointing out the mistakes in the paper and their insightful advice for future improvement.
## References
Simon Alexanderson, Gustav Eje Henter, Taras Kucherenko, and Jonas Beskow. 2020. Stylecontrollable speech-driven gesture synthesis using normalising flows. In *Computer Graphics Forum*,
volume 39, pages 487–496. Wiley Online Library.
Artyfactory. 2022. The proportions of the head. https://www.artyfactory.
com/portraits/pencil-portraits/
proportions-of-a-head.html. Accessed:
2023-01-14.
Valentin Bazarevsky, Ivan Grishchenko, Karthik Raveendran, Tyler Zhu, Fan Zhang, and Matthias Grundmann. 2020. Blazepose: On-device real-time body pose tracking. *arXiv preprint* arXiv:2006.10204.
Kirsten Bergmann and Stefan Kopp. 2010. Modeling the production of coverbal iconic gestures by learning bayesian decision networks. Applied Artificial Intelligence, 24(6):530–551.
Gedas Bertasius, Heng Wang, and Lorenzo Torresani.
2021. Is space-time attention all you need for video understanding? *arXiv preprint arXiv:2102.05095*.
Lifeng Fan, Yixin Chen, Ping Wei, Wenguan Wang, and Song-Chun Zhu. 2018. Inferring shared attention in social scene videos. In *Proceedings of the IEEE/CVF*
Conference on Computer Vision and Pattern Recognition, pages 6460–6468.
Lifeng Fan, Shuwen Qiu, Zilong Zheng, Tao Gao, SongChun Zhu, and Yixin Zhu. 2021. Learning triadic belief dynamics in nonverbal communication from videos. In *Proceedings of the IEEE/CVF Conference* on Computer Vision and Pattern Recognition, pages 7312–7321.
Kirill Gavrilyuk, Amir Ghodrati, Zhenyang Li, and Cees GM Snoek. 2018. Actor and action video segmentation from a sentence. In *Proceedings of the* IEEE Conference on Computer Vision and Pattern Recognition, pages 5958–5966.
Dmitriy Genzel and Eugene Charniak. 2002. Entropy rate constancy in text. In Proceedings of the 40th Annual Meeting on Association for Computational Linguistics, pages 199–206, Philadelphia, PA.
Dmitriy Genzel and Eugene Charniak. 2003. Variation of entropy and parse trees of sentences as a function of the sentence number. In *Proceedings of the* 2003 Conference on Empirical Methods in Natural Language Processing, pages 65–72, Sapporo, Japan.
Shiry Ginosar, Amir Bar, Gefen Kohavi, Caroline Chan, Andrew Owens, and Jitendra Malik. 2019. Learning individual styles of conversational gesture. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 3497–3506.
Mario Giulianelli, Arabella Sinclair, and Raquel Fernández. 2021. Is information density uniform in task-oriented dialogues? In *Proceedings of the 2021* Conference on Empirical Methods in Natural Language Processing, pages 8271–8283.
Mario Giulianelli, Arabella Sinclair, and Raquel Fernández. 2022. Construction repetition reduces information rate in dialogue. In *Proceedings of the 2nd* Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 12th International Joint Conference on Natural Language Processing, pages 665–682.
Björn Hartmann, Maurizio Mancini, and Catherine Pelachaud. 2006. Implementing expressive gesture synthesis for embodied conversational agents.
In *International Gesture Workshop*, pages 188–199.
Springer.
Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Dollár, and Ross Girshick. 2022. Masked autoencoders are scalable vision learners. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 16000–16009.
Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. *Neural computation*, 9(8):1735–
1780.
Judith Holler and Stephen C Levinson. 2019. Multimodal language processing in human communication.
Trends in Cognitive Sciences, 23(8):639–652.
Stefan Kopp. 2017. Computational gesture research:
Studying the functions of gesture in human-agent interaction. In R Breckinridge Church, Martha W
Alibali, and Spencer D Kelly, editors, Why Gesture?:
How the hands function in speaking, thinking and communicating, volume 7, chapter 12, pages 267–
284. John Benjamins Publishing Company.
Linjie Li, Yen-Chun Chen, Yu Cheng, Zhe Gan, Licheng Yu, and Jingjing Liu. 2020. Hero: Hierarchical encoder for video+ language omni-representation pretraining. *arXiv preprint arXiv:2005.00200*.
Liunian Harold Li, Mark Yatskar, Da Yin, Cho-Jui Hsieh, and Kai-Wei Chang. 2019. Visualbert: A simple and performant baseline for vision and language.
arXiv preprint arXiv:1908.03557.
Minghao Li, Tengchao Lv, Lei Cui, Yijuan Lu, Dinei Florencio, Cha Zhang, Zhoujun Li, and Furu Wei.
2021. Trocr: Transformer-based optical character recognition with pre-trained models. *arXiv preprint* arXiv:2109.10282.
Vinicius Macuch Silva, Judith Holler, Asli Ozyurek, and Seán G Roberts. 2020. Multimodality and the origin of a novel communication system in face-to-face interaction. *Royal Society open science*, 7(1):182056.
David McNeill. 1992. Hand and mind. *Advances in* Visual Semiotics, page 351.
David McNeill, Susan D Duncan, Jonathan Cole, Shaun Gallagher, and Bennett Bertenthal. 2008. Growth points from the very beginning. *Interaction Studies*,
9(1):117–132.
Clara Meister, Tiago Pimentel, Patrick Haller, Lena Jäger, Ryan Cotterell, and Roger Levy. 2021. Revisiting the uniform information density hypothesis.
arXiv preprint arXiv:2109.11635.
Steven T Piantadosi. 2014. Zipf's word frequency law in natural language: A critical review and future directions. *Psychonomic Bulletin and Review*,
21(5):1112–1130.
Wendy Sandler. 2018. The body as evidence for the nature of language. *Frontiers in Psychology*, 9:1782.
Claude Elwood Shannon. 1948. A mathematical theory of communication. The Bell System Technical Journal, 27:379–423.
James P Trujillo, Julija Vaitonyte, Irina Simanova, and Asli Özyürek. 2019. Toward the markerless and automatic analysis of kinematic features: A toolkit for gesture and movement research. Behavior Research Methods, 51(2):769–777.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. *Advances in neural information processing* systems, 30.
Fei Xu, Kenny Davila, Srirangaraj Setlur, and Venu Govindaraju. 2019. Content extraction from lecture video via speaker action classification based on pose information. In 2019 International Conference on Document Analysis and Recognition (ICDAR), pages 1047–1054. IEEE.
Hongyi Xu, Eduard Gabriel Bazavan, Andrei Zanfir, William T Freeman, Rahul Sukthankar, and Cristian Sminchisescu. 2020. Ghum & ghuml: Generative 3d human shape and articulated pose models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 6184–6193.
Yang Xu and David Reitter. 2016. Entropy converges between dialogue participants: explanations from an information-theoretic perspective. In *Proceedings* of the 54th Annual Meeting of the Association for Computational Linguistics, pages 537–546, Berlin, Germany.
Yang Xu and David Reitter. 2017. Spectral analysis of information density in dialogue predicts collaborative task performance. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 623–633, Vancouver, Canada. Association for Computational Linguistics.
Yang Xu and David Reitter. 2018. Information density converges in dialogue: Towards an informationtheoretic model. *Cognition*, 170:147–163.
Youngwoo Yoon, Bok Cha, Joo-Haeng Lee, Minsu Jang, Jaeyeon Lee, Jaehong Kim, and Geehyuk Lee. 2020.
Speech gesture generation from the trimodal context of text, audio, and speaker identity. *ACM Transactions on Graphics (TOG)*, 39(6):1–16.
George Kingsley Zipf. 2013. *The Psycho-Biology of* Language: An Introduction to Dynamic Philology.
Routledge.
## A Appendix A.1 Algorithm For Position-Based Gesture Encoding
The algorithm for encoding gestures based on the hand positions is described by the following pseudo code:
Algorithm 1 Hand position-based gesture encoding
Require: 0 < r =
H
W < 1, ε = 0.001, N = 3 Ensure: label ∈ {1, 2*, . . . ,* 81}
1: l_shd_x ← x coord of left shoulder 2: r_shd_x ← x coord of right shoulder 3: l_hip_x ← x coord of left hip 4: r_hip_x ← x coord of right hip 5: nose_x ← x coord of nose 6: xc = (nose_x +l_shd_x+r_shd_x 2 +
l_hip_x+r_hip_x 2)/3 7: xleft = xc − 0.5 · r + ε 8: xright = xc + 0.5 · r − ε 9: w = xright − xleft 10: ybot = ε 11: ytop = 1 − ε 12: h = ytop − ybot 13: l_hnd_x ← x coord of left hand 14: r_hnd_x ← x coord of right hand 15: l_hnd_y ← y coord of left hand 16: r_hnd_y ← y coord of right hand 17: l_col =
⌊min(max(l_hnd_x−xleft,0),w)⌋
r· N + 1 18: r_col =
⌊min(max(r_hnd_x−xleft,0),w)⌋
r· N + 1 19: l_row =
⌊min(max(l_hnd_y−ybot,0),h)⌋
r· N + 1 20: r_row =
⌊min(max(r_hnd_y−xbot,0),h)⌋
r· N + 1 21: l_index = ⌊(l_row − 1) · N + l_col⌋
22: r_index = ⌊(r_row − 1) · N + r_col⌋
23: token = (l_index − 1)· N2 + r_index 24: return token
The algorithm takes an image frame of size H × W (pixels) as input (H = 720, W = 1280 for most videos). r = H/W is the ration of frame height over width, and thus its value is fixed as r = 720/1280 = 0.5625 in our data. All x and y coordinates returned by the body key points detector (Mediapipe) are relative values within the range of [0, 1]. We have also observed that a H × H
square region centered around the central axis of the body can consistently cover the speaker's hands, so that is why we use r as the relative width to define the left and right boundaries of the N × N
split areas (line 7 and 8). The resulting index for left hand l_index ∈ {1*, . . . , N*} and right hand r_index ∈ {1*, . . . , N*}. According to line 23, the final gesture token combing information from both hands token ∈ {1, 2*, . . . , N*2}, which contains 81 distinct values when N = 3. The code for the encoding algorithm will be published in a public repository under the MIT license.
## A.2 Hyper-Parameters And Training Procedures
The LSTM-based encoder has an embedding size of 300 and hidden size of 200, with 2 layers; a fully connected layer is used as the decoder connecting the encoder output and the softmax; dropout layers of probability 0.2 are applied to the outputs of both the encoder and decoder. For the Transformerbased encoder, the model size is 20, hidden size is 100, number of layers is 2; same fully connected linear decoder is used; dropout layers of probability 0.5 are used at the position encoding and each transformer encoder layer. To enable the one-direction
(left to right) modeling effect, a mask matrix (of 0 and 1s) in an upper-triangular shape is used together with each input sequence.
Model parameters are randomly initialized.
Training is done within 40 epochs, with batch size of 20, and initial learning rate lr = 0.05. SGD
optimizer with default momentum is used for training the LSTM model; Adam optimizer is used for training the Transformer model. Data are split into 80% for training and 20% for testing. After each training epoch, evaluation is done over the test set, and the model with the lowest perplexity scores is saved as the best one.
Models are implemented with PyTorch.
torch.nn.CrossEntropyLoss module is used as the loss function. The mathematical meaning of the output from this function is the negative logarithm likelihood (NLL in eq. (3)), and thus we compute the exponential values of the output to get the local entropy scores. The entropy scores used in the plot and statistical analysis are obtained from both train and test sets. Models are trained on 2 Nvidia A1000 cards. The total GPU
hours needed is about 2 hours.
The code for training and testing the language models will be published in a public repository under the MIT license. The binary files of the trained model will also be provided via URLs included in the repository. The intended use of the trained language models is for scientific research about general patterns in human non-verbal communication, but not for the identification of individual speakers nor for other commercial use.
## A.3 Screenshots For Frequent Gestures
Some typical screenshots for the top 4 frequent gestures from all four speakers are shown in fig. 8. We can find similar appearances of the same gesture across different speakers.
![13_image_0.png](13_image_0.png)
## Acl 2023 Responsible Nlp Checklist A For Every Submission:
✓ A1. Did you describe the limitations of your work?
Section 5.4 (last paragraph)
✗ A2. Did you discuss any potential risks of your work?
No potential risks from this study is identified. The data and model are small scaled, and no user oriented system is developed.
✓ A3. Do the abstract and introduction summarize the paper's main claims?
The abstract summarizes the main results and conclusions. The introduction motivates the study.
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✓ **Did You Use Or Create Scientific Artifacts?** Section 4.4, Lstm And Transformer Are Cited.
✓ B1. Did you cite the creators of artifacts you used?
Section 4.4, LSTM and Transformer are cited.
✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
The discussion of license and terms of use for the annotation algorithm and the code for training/testing models is provided in Appendix A.1 and A.2
✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
The intended use of the model created in this study is discussed in Appendex A.2 (last paragraph).
✗ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
The video data used in this study are publicly available on YouTube. Using public vidoes for scientific research conforms to the copyright policy.
✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
The description of data source (language, demographic groups) is provided in Section 4.1
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
Statistic of data (e.g., token numbers) are reported in Section 5.1. The train/test/dev splits is described in Appendix A.2 The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance.
## C ✓ **Did You Run Computational Experiments?** Section 5.2, 5.3, And 5.4
✓ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
The number of parameters, GPU cards, and GPU hours are provided in Appendix A.2 (second paragraph).
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
The best parameters used are provided in Appendix A.2 (first paragraph).
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
Error bars (shaded areas) indicating 95% bootstrap confidence intervals are plotted.
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
PyTorch is used for implementing the models. The use of specific loss function is discussed in Appendix A.2 (third paragraph).
## D ✗ **Did You Use Human Annotators (E.G., Crowdworkers) Or Research With Human Participants?** Left Blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
No response.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? No response.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
No response.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
No response.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
No response. |